Storage for Virtual Environments 2011 R2

The Storage for Virtual Environments seminar focuses on the challenges of backup and recovery in a virtual infrastructure, the various solutions that users are now using to solve those challenges, and a roadmap for making the most of all an organization’s virtualization initiatives.

This slide deck was used by Stephen Foskett for his

Slide notes:
  • Mirror Mode paper: http://www.usenix.org/events/atc11/tech/final_files/Mashtizadeh.pdf
    http://blogs.vmware.com/vsphere/2011/07/new-vsphere-50-storage-features-part-2-storage-vmotion.html
  • http://blogs.vmware.com/vsphere/2011/07/new-vsphere-50-storage-features-part-1-vmfs-5.html
  • Up to 256 FC or iSCSI LUNs
    ESX multipathing: load balancing, failover, failover between FC and iSCSI*
    Beware of block sizes greater than 256 KB! If you want virtual disks greater than 256 GB, you must use a VMFS block size larger than 1 MB
    Align your virtual disk starting offset to your array (by booting the VM and using diskpart, Windows PE, or UNIX fdisk)*
  • Link Aggregation Control Protocol (LACP) for trunking/EtherChannel - use “fixed” path policy, not LRU
    Up to 8 (or 32) NFS mount points
    Turn off access time updates
    Thin provisioning? Turn on AutoSize and watch out
  • http://www.techrepublic.com/blog/datacenter/stretch-your-storage-dollars-with-vsphere-thin-provisioning/2655
    http://www.vmware.com/pdf/vsp_4_thinprov_perf.pdf
  • http://virtualgeek.typepad.com/virtual_geek/2011/07/vstorage-apis-for-array-integration-vaai-vsphere-5-edition.html
    http://blogs.vmware.com/vsphere/2011/07/new-enhanced-vsphere-50-storage-features-part-3-vaai.html
  • http://www.vmware.com/files/pdf/techpaper/vsp_41_perf_SIOC.pdf
    Recommended SIOC latency thresholds: FC 20–30 ms, SAS 20–30 ms, SATA 30–50 ms, SSD 15–20 ms
    http://www.yellow-bricks.com/2010/10/19/storage-io-control-best-practices/
  • http://www.slideshare.net/esloof/vsphere-5-whats-new-storage-drs
    http://blogs.vmware.com/vsphere/2011/07/vsphere-50-storage-features-part-5-storage-drs-initial-placement.html
  • http://www.ntpro.nl/blog/archives/1804-vSphere-5-Whats-New-Storage-Appliance-VSA.html
  • http://jpaul.me/?p=2072
  • Transcript of "Storage for Virtual Environments 2011 R2"

    1. 1. Storage for Virtual Environments<br />Stephen Foskett<br />Foskett Services and Gestalt IT<br />Live Footnotes: @Sfoskett #VirtualStorage<br />
    2. 2. This is Not a Rah-Rah Session<br />
    3. 3. Agenda<br />
    4. 4. Introducing the Virtual Data Center<br />
    5. 5. This Hour’s Focus:What Virtualization Does<br />Introducing storage and server virtualization<br />The future of virtualization<br />The virtual datacenter<br />Virtualization confounds storage<br />Three pillars of performance<br />Other issues<br />Storage features for virtualization<br />What’s new in VMware<br />
    6. 6. Virtualization of Storage, Serverand Network<br />Storage has been stuck in the Stone Age since the Stone Age!<br />Fake disks, fake file systems, fixed allocation<br />Little integration and no communication<br />Virtualization is a bridge to the future<br />Maintains functionality for existing apps<br />Improves flexibility and efficiency<br />
    7. 7. A Look at the Future<br />
    8. 8. Server Virtualization is On the Rise<br />Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010<br />
    9. 9. Server Virtualization is a Pile of Lies!<br />What the OS thinks it’s running on…<br />What the OS is actually running on…<br />Physical Hardware<br />VMkernel<br />Binary Translation, Paravirtualization, Hardware Assist<br />Guest OS<br />VM<br />Guest OS<br />VM<br />Scheduler and Memory Allocator<br />vNIC<br />vSwitch<br />NIC Driver<br />vSCSI/PV<br />VMDK<br />VMFS<br />I/O Driver<br />
    10. 10. And It Gets Worse Outside the Server!<br />
    11. 11. The Virtual Data Center of Tomorrow<br />Management<br />Applications<br />The Cloud™<br />Applications<br />Legacy<br />Applications<br />Applications<br />Applications<br />CPU<br />Network<br />Backup<br />Storage<br />
    12. 12. The Real Future of IT Infrastructure<br />Orchestration Software<br />
    13. 13. Three Pillars of VM Performance<br />
    14. 14. Confounding Storage Presentation<br />Storage virtualization is nothing new…<br />RAID and NAS virtualized disks<br />Caching arrays and SANs masked volumes<br />New tricks: Thin provisioning, automated tiering, array virtualization<br />But, we wrongly assume this is where it ends<br />Volume managers and file systems<br />Databases<br />Now we have hypervisors virtualizing storage<br />VMFS/VMDK = storage array?<br />Virtual storage appliances (VSAs)<br />
    15. 15. Begging for Converged I/O<br />4G FC Storage<br />1 GbE Network<br />1 GbE Cluster<br />How many I/O ports and cables does a server need?<br />Typical server has 4 ports, 2 used<br />Application servers have 4-8 ports used!<br />Do FC and InfiniBand make sense with 10/40/100 GbE?<br />When does commoditization hit I/O?<br />Ethernet momentum is unbeatable<br />Blades and hypervisors demand greater I/O integration and flexibility<br />Other side of the coin – need to virtualize I/O<br />
    16. 16. Driving Storage Virtualization<br />Server virtualization demands storage features<br />Data protection with snapshots and replication<br />Allocation efficiency with thin provisioning+<br />Performance and cost tweaking with automated sub-LUN tiering<br />Improved locking and resource sharing<br />Flexibility is the big one<br />Must be able to create, use, modify and destroy storage on demand<br />Must move storage logically and physically<br />Must allow OS to move too<br />
    17. 17. “The I/O Blender” Demands New Architectures<br />Shared storage is challenging to implement<br />Storage arrays “guess” what’s coming next based on allocation (LUN), taking advantage of sequential performance<br />Server virtualization throws I/O into a blender – all I/O is now random I/O!<br />
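The mechanics of the blender can be shown with a small illustrative Python sketch (not anything from VMware): each guest issues a perfectly sequential stream against its own region of a shared VMFS LUN, but once the hypervisor interleaves the guests, the array sees no sequential locality at all.

```python
# Illustrative only: three VMs each issue a sequential stream of block
# addresses within their own VMDK region of a shared VMFS LUN.
def vm_stream(vmdk_start_block, length):
    return (vmdk_start_block + i for i in range(length))

vms = {
    "vm1": vm_stream(0, 5),        # blocks 0..4
    "vm2": vm_stream(10_000, 5),   # blocks 10000..10004
    "vm3": vm_stream(50_000, 5),   # blocks 50000..50004
}

# The hypervisor services the guests in turn, so the array receives the
# round-robin interleaving of three sequential streams -- effectively random I/O.
blended = [block for step in zip(*vms.values()) for block in step]
print(blended)
# [0, 10000, 50000, 1, 10001, 50001, ...] -- no sequential locality survives
```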
    18. 18. Server Virtualization Requires SAN and NAS<br />Server virtualization has transformed the data center and storage requirements<br />VMware is the #1 driver of SAN adoption today!<br />60% of virtual server storage is on SAN or NAS<br />86% have implemented some server virtualization<br />Server virtualization has enabled and demanded centralization and sharing of storage on arrays like never before!<br />Source: ESG, 2008<br />
    19. 19. Keys to the Future For Storage Folks<br />Ye Olde Seminar Content!<br />
    20. 20. Primary Production Virtualization Platform<br />Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010<br />
    21. 21. Storage Features for Virtualization<br />
    22. 22. Which Features Are People Using?<br />Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers<br />
    23. 23. What’s New in vSphere 4 and 4.1<br />VMware vSphere 4 (AKA ESX/ESXi 4) is a major upgrade for storage<br />Lots of new features like thin provisioning, PSA, any-to-any Storage VMotion, PVSCSI<br />Massive performance upgrade (400k IOPS!)<br />vSphere 4.1 is equally huge for storage<br />Boot from SAN<br />vStorage APIs for Array Integration (VAAI)<br />Storage I/O control (SIOC)<br />
    24. 24. What’s New in vSphere 5<br />VMFS-5 – Scalability and efficiency improvements<br />Storage DRS – Datastore clusters and improved load balancing<br />Storage I/O Control – Cluster-wide and NFS support<br />Profile-Driven Storage – Provisioning, compliance and monitoring<br />FCoE Software Initiator<br />iSCSI Initiator GUI<br />Storage APIs – Storage Awareness (VASA)<br />Storage APIs – Array Integration (VAAI 2) – Thin Stun, NFS, T10<br />Storage vMotion - Enhanced with mirror mode<br />vSphere Storage Appliance (VSA)<br />vSphere Replication – New in SRM<br />
    25. 25. And Then, There’s VDI…<br />Virtual desktop infrastructure (VDI) takes everything we just worried about and amplifies it:<br />Massive I/O crunches<br />Huge duplication of data<br />More wasted capacity<br />More user visibility<br />More backup trouble<br />
    26. 26. What’s next<br />Vendor Showcase and Networking Break<br />
    27. 27. Technical Considerations - Configuring Storage for VMs<br />The mechanics of presenting and using storage in virtualized environments<br />
    28. 28. This Hour’s Focus:Hypervisor Storage Features<br />Storage vMotion<br />VMFS<br />Storage presentation: Shared, raw, NFS, etc.<br />Thin provisioning<br />Multipathing (VMware Pluggable Storage Architecture)<br />VAAI and VASA<br />Storage I/O control and storage DRS<br />
    29. 29. Storage vMotion<br />Introduced in ESX 3 as “Upgrade vMotion”<br />ESX 3.5 used a snapshot while the datastore was in motion<br />vSphere 4 used changed-block tracking (CBT) and recursive passes<br />vSphere 5 Mirror Mode mirrors writes to in-progress vMotions and also supports migration of vSphere snapshots and Linked Clones<br />Can be offloaded for VAAI-Block (but not NFS)<br />
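As a rough mental model of the two approaches above, here is a hedged Python sketch (names, data structures, and the convergence threshold are illustrative, not VMware internals): the vSphere 4 style re-copies changed blocks in passes until the delta converges, while the vSphere 5 mirror-mode style keeps the destination in sync by mirroring writes during a single copy pass.

```python
# Illustrative sketch only -- not VMware code. Datastores are modeled as dicts
# mapping block number to data.

def migrate_with_cbt(source, dest, changed_blocks_since_last_pass):
    """vSphere 4 style: bulk copy, then recursive changed-block passes
    until the remaining dirty set is small enough for a short final switchover."""
    dest.update(source)                          # initial full copy
    dirty = changed_blocks_since_last_pass()
    while len(dirty) > 16:                       # illustrative convergence threshold
        for block in dirty:
            dest[block] = source[block]
        dirty = changed_blocks_since_last_pass()
    return dirty                                 # copied during the final switchover

def guest_write_mirror_mode(block, data, source, dest, already_copied):
    """vSphere 5 mirror-mode style: during the single copy pass, writes to
    regions that have already been copied are mirrored to both datastores."""
    source[block] = data
    if block in already_copied:
        dest[block] = data                       # destination never falls behind
```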
    30. 30. vSphere 5: What’s New in VMFS 5<br />Max VMDK size is still 2 TB – 512 bytes<br />Virtual (non-passthru) RDM still limited to 2 TB<br />Max LUNs per host is still 256<br />
    31. 31. Hypervisor Storage Options:Shared Storage<br />The common/ workstation approach<br />VMware: VMDK image in VMFS datastore<br />Hyper-V: VHD image in CSV datastore<br />Block storage (direct or FC/iSCSI SAN)<br />Why?<br />Traditional, familiar, common (~90%)<br />Prime features (Storage VMotion, etc)<br />Multipathing, load balancing, failover*<br />But…<br />Overhead of two storage stacks (5-8%)<br />Harder to leverage storage features<br />Often shares storage LUN and queue<br />Difficult storage management<br />VM<br />Host<br />Guest<br />OS<br />VMFS<br />VMDK<br />DAS or SAN<br />Storage<br />
    32. 32. Hypervisor Storage Options:Shared Storage on NAS<br />Skip VMFS and use NAS<br />NFS or SMB is the datastore<br />Wow!<br />Simple – no SAN<br />Multiple queues<br />Flexible (on-the-fly changes)<br />Simple snap and replicate*<br />Enables full Vmotion<br />Link aggregation (trunking) is possible<br />But…<br />Less familiar (ESX 3.0+)<br />CPU load questions<br />Limited to 8 NFS datastores (ESX default)<br />Snapshot consistency for multiple VMDK<br />VM<br />Host<br />Guest<br />OS<br />NAS<br />Storage<br />VMDK<br />
    33. 33. Hypervisor Storage Options:Guest iSCSI<br />Skip VMFS and use iSCSI directly<br />Access a LUN just like any physical server<br />VMware ESX can even boot from iSCSI!<br />Ok…<br />Storage folks love it!<br />Can be faster than ESX iSCSI<br />Very flexible (on-the-fly changes)<br />Guest can move and still access storage<br />But…<br />Less common to VM folks<br />CPU load questions<br />No Storage VMotion (but doesn’t need it)<br />VM<br />Host<br />Guest<br />OS<br />iSCSI<br />Storage<br />LUN<br />
    34. 34. Hypervisor Storage Options:Raw Device Mapping (RDM)<br />Guest VMs access storage directly over iSCSI or FC<br />VMs can even boot from raw devices<br />Hyper-V pass-through LUN is similar<br />Great!<br />Per-server queues for performance<br />Easier measurement<br />The only method for clustering<br />Supports LUNs larger than 2 TB (60 TB passthru in vSphere 5!)<br />But…<br />Tricky VMotion and dynamic resource scheduling (DRS)<br />No storage VMotion<br />More management overhead<br />Limited to 256 LUNs per data center<br />VM<br />Host<br />Guest<br />OS<br />I/O<br />Mapping File<br />SAN Storage<br />
    35. 35. Hypervisor Storage Options:Direct I/O<br />VMware ESX VMDirectPath - Guest VMs access I/O hardware directly<br />Leverages AMD IOMMU or Intel VT-d<br />Great!<br />Potential for native performance<br />Just like RDM but better!<br />But…<br />No VMotion or Storage VMotion<br />No ESX fault tolerance (FT)<br />No ESX snapshots or VM suspend<br />No device hot-add<br />No performance benefit in the real world!<br />VM<br />Host<br />Guest<br />OS<br />I/O<br />Mapping File<br />SAN Storage<br />
    36. 36. Which VMware Storage Method Performs Best?<br />Mixed random I/O<br />CPU cost per I/O<br />VMFS,<br />RDM (p), or RDM (v)<br />Source: “Performance Characterization of VMFS and RDM Using a SAN”, VMware Inc.,ESX 3.5, 2008<br />
    37. 37. vSphere 5: Policy or Profile-Driven Storage<br />Allows storage tiers to be defined in vCenter based on SLA, performance, etc.<br />Used during provisioning, cloning, Storage vMotion, Storage DRS<br />Leverages VASA for metrics and characterization<br />All HCL arrays and types (NFS, iSCSI, FC)<br />Custom descriptions and tagging for tiers<br />Compliance status is a simple binary report<br />
    38. 38. Native VMware Thin Provisioning<br />VMware ESX 4 allocates storage in 1 MB chunks as capacity is used<br />Similar support enabled for virtual disks on NFS in VI 3<br />Thin provisioning existed for block, could be enabled on the command line in VI 3<br />Present in VMware desktop products<br />vSphere 4 fully supports and integrates thin provisioning<br />Every version/license includes thin provisioning<br />Allows thick-to-thin conversion during Storage VMotion<br />In-array thin provisioning also supported (we’ll get to that…)<br />
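A minimal sketch of the allocate-on-first-write behavior described above, assuming the 1 MB growth increment from the slide; the class and its methods are purely illustrative, not a real hypervisor interface.

```python
CHUNK = 1 * 1024 * 1024   # ESX 4 grows thin VMDKs in 1 MB chunks (per the slide)

class ThinDisk:
    """Toy model of a thin-provisioned virtual disk: the logical size is fixed
    up front, but backing chunks are allocated only when first written."""
    def __init__(self, logical_bytes):
        self.logical_bytes = logical_bytes
        self.chunks = {}                                  # chunk index -> backing bytes

    def write(self, offset, data):
        index, pos = divmod(offset, CHUNK)
        if index not in self.chunks:
            self.chunks[index] = bytearray(CHUNK)         # allocate (zeroed) on demand
        self.chunks[index][pos:pos + len(data)] = data

    def allocated_bytes(self):
        return len(self.chunks) * CHUNK

disk = ThinDisk(logical_bytes=100 * 1024**3)              # 100 GB logical size
disk.write(0, b"guest filesystem metadata")
print(disk.logical_bytes, disk.allocated_bytes())         # 100 GB promised, 1 MB consumed
```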
    39. 39. Four Types of VMware ESX Volumes<br />Note: FT is not supported<br />What will your array do? VAAI helps…<br />Friendly to on-array thin provisioning<br />
    40. 40. Storage Allocation and Thin Provisioning<br />VMware tests show no performance impact from thin provisioning after zeroing<br />
    41. 41. Pluggable Storage Architecture:Native Multipathing<br />VMware ESX includes multipathing built in<br />Basic native multipathing (NMP) is round-robin fail-over only – it will not load balance I/O across multiple paths or make more intelligent decisions about which paths to use<br />Pluggable Storage Architecture (PSA)<br />VMware NMP<br />Third-Party MPP<br />VMware SATP<br />Third-Party SATP<br />VMware PSP<br />Third-Party PSP<br />
    42. 42. Pluggable Storage Architecture: PSP and SATP<br />vSphere 4 Pluggable Storage Architecture allows third-party developers to replace ESX’s storage I/O stack<br />ESX Enterprise+ Only<br />There are two classes of third-party plug-ins:<br />Path-selection plug-ins (PSPs) optimize the choice of which path to use, ideal for active/passive type arrays<br />Storage array type plug-ins (SATPs) allow load balancing across multiple paths in addition to path selection for active/active arrays<br />EMC PowerPath/VE for vSphere does everything<br />
    43. 43. Storage Array Type Plug-ins (SATP)<br />ESX native approaches<br />Active/Passive<br />Active/Active<br />Pseudo Active<br />Storage Array Type Plug-Ins<br />VMW_SATP_LOCAL – Generic local direct-attached storage<br />VMW_SATP_DEFAULT_AA – Generic for active/active arrays<br />VMW_SATP_DEFAULT_AP – Generic for active/passive arrays<br />VMW_SATP_LSI – LSI/NetApp arrays from Dell, HDS, IBM, Oracle, SGI<br />VMW_SATP_SVC – IBM SVC-based systems (SVC, V7000, Actifio)<br />VMW_SATP_ALUA – Asymmetric Logical Unit Access-compliant arrays<br />VMW_SATP_CX – EMC/Dell CLARiiON and Celerra (also VMW_SATP_ALUA_CX)<br />VMW_SATP_SYMM – EMC Symmetrix DMX-3/DMX-4/VMAX, Invista<br />VMW_SATP_INV – EMC Invista and VPLEX<br />VMW_SATP_EQL – Dell EqualLogic systems<br />Also, EMC PowerPath and HDS HDLM and vendor-unique plugins not detailed in the HCL<br />
    44. 44. Path Selection Plug-ins (PSP)<br />VMW_PSP_MRU – Most-Recently Used (MRU) – Supports hundreds of storage arrays<br />VMW_PSP_FIXED – Fixed - Supports hundreds of storage arrays<br />VMW_PSP_RR – Round-Robin - Supports dozens of storage arrays<br />DELL_PSP_EQL_ROUTED – Dell EqualLogic iSCSI arrays<br />Also, EMC PowerPath and other vendor unique<br />
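The decision logic behind the three native policies can be sketched in a few lines of Python; this is only a conceptual model of Fixed, MRU, and Round-Robin behavior, not the real plug-in interface, and the path names are made up.

```python
import itertools

class PathSelector:
    """Toy model of the three native PSPs: Fixed, MRU, and Round-Robin."""
    def __init__(self, paths, preferred=None):
        self.paths = list(paths)
        self.alive = set(paths)
        self.preferred = preferred or self.paths[0]
        self.current = self.preferred
        self._cycle = itertools.cycle(self.paths)

    def fixed(self):
        # FIXED: always return to the preferred path whenever it is available
        return self.preferred if self.preferred in self.alive else self.mru()

    def mru(self):
        # MRU: keep using the most recently used path; move only on failure
        if self.current not in self.alive:
            self.current = next(p for p in self.paths if p in self.alive)
        return self.current

    def round_robin(self):
        # RR: rotate I/O across every live path
        while True:
            path = next(self._cycle)
            if path in self.alive:
                return path

sel = PathSelector(["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"])
print([sel.round_robin() for _ in range(4)])   # alternates across both live paths
```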
    45. 45. vStorage APIs for Array Integration (VAAI)<br />VAAI integrates advanced storage features with VMware<br />Basic requirements:<br />A capable storage array<br />ESX 4.1+<br />A software plug-in for ESX<br />Not every implementation is equal<br />Block zeroing can be very demanding for some arrays<br />Zeroing might conflict with full copy<br />
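To make the offload idea concrete, here is a hedged sketch contrasting a host-driven clone with an array-side primitive in the spirit of VAAI full copy and block zeroing. The Array class and its method names are invented for illustration; they are not a real storage API.

```python
class Array:
    """Toy storage array: LUNs are dicts of block number -> data."""
    def __init__(self):
        self.luns = {}
    def read(self, lun, block):            # each call crosses the fabric to the host
        return self.luns[lun][block]
    def write(self, lun, block, data):     # ...and crosses it again on the way back
        self.luns[lun][block] = data
    def full_copy(self, src, dst):         # VAAI-style offload: data never leaves the array
        self.luns[dst] = dict(self.luns[src])
    def block_zero(self, lun, blocks):     # VAAI-style zeroing done internally
        for b in blocks:
            self.luns[lun][b] = b"\x00"

def clone_without_vaai(array, src, dst, nblocks):
    # Host-driven clone: every block is read up to ESX and written back down
    array.luns[dst] = {}
    for b in range(nblocks):
        array.write(dst, b, array.read(src, b))

def clone_with_vaai(array, src, dst):
    # One command from the host; the array moves the data itself
    array.full_copy(src, dst)
```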
    46. 46. VAAI Support Matrix<br />
    47. 47. vSphere 5: VAAI 2<br />Block<br />(FC/iSCSI)<br />T10 compliance is improved - No plug-in needed for many arrays<br />File<br />(NFS)<br />NAS plugins come from vendors, not VMware<br />
    48. 48. vSphere 5: vSphere Storage APIs – Storage Awareness (VASA)<br />VASA is a communication mechanism for vCenter to detect array capabilities<br />RAID level, thin provisioning state, replication state, etc.<br />Two locations in vCenter Server:<br />“System-Defined Capabilities” – per-datastore descriptors<br />Storage views and SMS APIs<br />
    49. 49. Storage I/O Control (SIOC)<br />Storage I/O Control (SIOC) is all about fairness:<br />Prioritization and QoS for VMFS<br />Re-distributes unused I/O resources<br />Minimizes “noisy neighbor” issues<br />ESX can provide quality of service for storage access to virtual machines<br />Enabled per-datastore<br />When a pre-defined latency level is exceeded, ESX begins to throttle per-VM I/O (default 30 ms)<br />Monitors queues on storage arrays and per-VM I/O latency<br />But:<br />vSphere 4.1 with Enterprise Plus<br />Disabled by default but highly recommended!<br />Block storage only (FC or iSCSI)<br />Whole-LUN only (no extents)<br />No RDM<br />
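A toy model of the fairness mechanism described above: once observed datastore latency crosses the threshold (30 ms by default), per-VM device queue slots are trimmed in proportion to configured shares. The VM names, share values, and queue depths are illustrative only.

```python
# Illustrative sketch of the SIOC idea, not the actual ESX algorithm.
LATENCY_THRESHOLD_MS = 30

def throttle(vm_shares, observed_latency_ms, total_queue_depth=64, floor=4):
    total = sum(vm_shares.values())
    if observed_latency_ms <= LATENCY_THRESHOLD_MS:
        # Under the threshold: unused I/O capacity is not restricted
        return {vm: total_queue_depth for vm in vm_shares}
    # Over the threshold: carve the shared queue by shares (QoS / fairness)
    return {vm: max(floor, total_queue_depth * share // total)
            for vm, share in vm_shares.items()}

print(throttle({"sql": 2000, "web": 1000, "noisy-neighbor": 500}, 45))
# {'sql': 36, 'web': 18, 'noisy-neighbor': 9} -- the noisy neighbor is reined in
```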
    50. 50. Storage I/O Control in Action<br />
    51. 51. Virtual Machine Mobility<br />Moving virtual machines is the next big challenge<br />Physical servers are difficult to move around and between data centers<br />Pent-up desire to move virtual machines from host to host and even to different physical locations<br />VMware DRS would move live VMs around the data center<br />The “Holy Grail” for server managers<br />Requires networked storage (SAN/NAS)<br />
    52. 52. vSphere 5: Storage DRS<br />Datastore clusters aggregate multiple datastores<br />VMs and VMDKs placement metrics:<br />Space - Capacity utilization and availability (80% default)<br />Performance – I/O latency (15 ms default)<br />When thresholds are crossed, vSphere will rebalance all VMs and VMDKs according to Affinity Rules<br />Storage DRS works with either VMFS/block or NFS datastores<br />Maintenance Mode evacuates a datastore<br />
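The two thresholds from the slide (80% space utilization, 15 ms I/O latency) drive both rebalancing and initial placement. The sketch below is an illustrative Python model of that decision, with made-up datastore names and metrics, not the Storage DRS algorithm itself.

```python
# Toy model of Storage DRS thresholds: a datastore is a rebalance candidate
# when space utilization exceeds 80% or observed latency exceeds 15 ms.
SPACE_THRESHOLD = 0.80
LATENCY_THRESHOLD_MS = 15

datastore_cluster = [
    {"name": "ds01", "used_gb": 850, "capacity_gb": 1000, "latency_ms": 9},
    {"name": "ds02", "used_gb": 400, "capacity_gb": 1000, "latency_ms": 22},
    {"name": "ds03", "used_gb": 500, "capacity_gb": 1000, "latency_ms": 7},
]

def needs_rebalance(ds):
    return (ds["used_gb"] / ds["capacity_gb"] > SPACE_THRESHOLD
            or ds["latency_ms"] > LATENCY_THRESHOLD_MS)

def place_new_vmdk(cluster):
    candidates = [ds for ds in cluster if not needs_rebalance(ds)]
    # Prefer the healthy datastore with the most free space
    return max(candidates, key=lambda ds: ds["capacity_gb"] - ds["used_gb"])

print([ds["name"] for ds in datastore_cluster if needs_rebalance(ds)])  # ['ds01', 'ds02']
print(place_new_vmdk(datastore_cluster)["name"])                        # 'ds03'
```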
    53. 53. What’s next<br />Lunch<br />
    54. 54. Expanding the Conversation<br />Converged I/O, storage virtualization and new storage architectures<br />
    55. 55. This Hour’s Focus:Non-Hypervisor Storage Features<br />Converged networking<br />Storage protocols (FC, iSCSI, NFS)<br />Enhanced Ethernet (DCB, CAN, FCoE)<br />I/O virtualization<br />Storage for virtual storage<br />Tiered storage and SSD/flash<br />Specialized arrays<br />Virtual storage appliances (VSA)<br />
    56. 56. Introduction: Converging on Convergence<br />Data centers rely more on standard ingredients<br />What will connect these systems together?<br />IP and Ethernet are logical choices<br />
    57. 57. Drivers of Convergence<br />
    58. 58. Which Storage Protocol to Use?<br />Server admins don’t know/care about storage protocols and will want whatever they are familiar with<br />Storage admins have preconceived notions about the merits of various options:<br />FC is fast, low-latency, low-CPU, expensive<br />NFS is slow, high-latency, high-CPU, cheap<br />iSCSI is medium, medium, medium, medium<br />
    59. 59. vSphere Protocol Performance<br />
    60. 60. vSphere CPU Utilization<br />
    61. 61. vSphere Latency<br />
    62. 62. Microsoft Hyper-V Performance<br />
    63. 63. Which Storage Protocols Do People Use?<br />Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers<br />
    64. 64. The Upshot: It Doesn’t Matter<br />Use what you have and are familiar with!<br />FC, iSCSI, NFS all work well<br />Most enterprise production VM data is on FC, many smaller shops using iSCSI or NFS<br />Either/or? - 50% use a combination<br />For IP storage<br />Network hardware and config matter more than protocol (NFS, iSCSI, FC)<br />Use a separate network or VLAN<br />Use a fast switch and consider jumbo frames<br />For FC storage<br />8 Gb FC/FCoE is awesome for VMs<br />Look into NPIV<br />Look for VAAI<br />
    65. 65. The Storage Network Roadmap<br />
    66. 66. Serious Performance<br />10 GbE is faster than most storage interconnects<br />iSCSI and FCoE both can perform at wire-rate<br />
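A quick back-of-the-envelope comparison shows why 10 GbE outruns most installed storage interconnects. The figures are the commonly quoted approximate payload rates; actual throughput depends on encoding and protocol overhead.

```python
# Approximate payload bandwidth of common interconnects (assumed round figures).
interconnects_mb_s = {
    "1 GbE":   125,    # ~1 Gbit/s line rate
    "4G FC":   400,    # nominal 4GFC data rate
    "8G FC":   800,    # nominal 8GFC data rate
    "10 GbE": 1250,    # ~10 Gbit/s line rate
}
for name, mb_s in sorted(interconnects_mb_s.items(), key=lambda kv: kv[1]):
    print(f"{name:>6}: ~{mb_s} MB/s")
# 10 GbE's ~1250 MB/s exceeds even 8G FC, which is why iSCSI and FCoE at
# wire rate can match or beat most installed storage interconnects.
```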
    67. 67. Latency is Critical Too<br />Latency is even more critical in shared storage<br />FCoE with 10 GbE can achieve well over 500,000 4K IOPS (if the array and client can handle it!)<br />
    68. 68. Benefits Beyond Speed<br />10 GbE takes performance off the table (for now…)<br />But performance is only half the story:<br />Simplified connectivity<br />New network architecture<br />Virtual machine mobility<br />1 GbE Cluster<br />4G FC Storage<br />1 GbE Network<br />10 GbE<br />(Plus 6 Gbps extra capacity)<br />
    69. 69. Enhanced 10 Gb Ethernet<br />Ethernet and SCSI were not made for each other<br />SCSI expects a lossless transport with guaranteed delivery<br />Ethernet expects higher-level protocols to take care of issues<br />“Data Center Bridging” is a project to create lossless Ethernet<br />AKA Data Center Ethernet (DCE), Converged Enhanced Ethernet (CEE)<br />iSCSI and NFS are happy with or without DCB<br />DCB is a work in progress<br />FCoE requires PFC (802.1Qbb or PAUSE) and DCBX (802.1Qaz); QCN (802.1Qau) is still not ready<br />Priority Flow Control (PFC) 802.1Qbb<br />Congestion Management (QCN) 802.1Qau<br />Bandwidth Management (ETS) 802.1Qaz<br />PAUSE 802.3x<br />Data Center Bridging Exchange Protocol (DCBX)<br />Traffic Classes 802.1p/Q<br />
    78. 78. FCoE CNAs for VMware ESX<br />No Intel (OpenFCoE) or Broadcom support in vSphere 4…<br />
    79. 79. vSphere 5: FCoE Software Initiator<br />Dramatically expands the FCoE footprint from just a few CNAs<br />Based on Intel OpenFCoE? – Shows as “Intel Corporation FCoE Adapter”<br />
    80. 80. I/O Virtualization: Virtual I/O<br />Extends I/O capabilities beyond physical connections (PCIe slots, etc)<br />Increases flexibility and mobility of VMs and blades<br />Reduces hardware, cabling, and cost for high-I/O machines<br />Increases density of blades and VMs<br />
    81. 81. I/O Virtualization: IOMMU (Intel VT-d)<br />IOMMU gives devices direct access to system memory<br />AMD IOMMU or Intel VT-d<br />Similar to AGP GART<br />VMware VMDirectPath leverages IOMMU<br />Allows VMs to access devices directly<br />May not improve real-world performance<br />System Memory<br />IOMMU<br />MMU<br />I/O Device<br />CPU<br />
    82. 82. Does SSD Change the Equation?<br />RAM and flash promise high performance…<br />But you have to use it right<br />
    83. 83. Flash is Not A Disk<br />Flash must be carefully engineered and integrated<br />Cache and intelligence to offset write penalty<br />Automatic block-level data placement to maximize ROI<br />IF a system can do this, everything else improves<br />Overall system performance<br />Utilization of disk capacity<br />Space and power efficiency<br />Even system cost can improve!<br />
    84. 84. The Tiered Storage Cliché<br />Cost and Performance<br />Optimized for Savings!<br />
    85. 85. Tiered Storage Evolves<br />
    86. 86. Three Approaches to SSD For VM<br />EMC Project Lightning promises to deliver all three!<br />
    87. 87. Storage for Virtual Servers (Only!)<br />New breed of storage solutions just for virtual servers<br />Highly integrated (vCenter, VMkernel drivers, etc.)<br />High-performance (SSD cache)<br />Mostly from startups (for now)<br />Tintri– NFS-based caching array<br />Virsto+EvoStor – Hyper-V software, moving to VMware<br />
    88. 88. Virtual Storage Appliances (VSA)<br />What if the SAN was pulled inside the hypervisor?<br />VSA = A virtual storage array as a guest VM<br />Great for lab or PoC<br />Some are not for production<br />Can build a whole data center in a hypervisor, including LAN, SAN, clusters, etc<br />Physical Server Resources<br />Hypervisor<br />VM Guest<br />VM Guest<br />Virtual Storage Appliance<br />Virtual SAN<br />Virtual LAN<br />CPU<br />RAM<br />
    89. 89. vSphere 5: vSphere Storage Appliance (VSA)<br />Aimed at SMB market<br />Two deployment options:<br />2x replicates storage 4:2<br />3x replicates round-robin 6:3<br />Uses local (DAS) storage<br />Enables HA and vMotion with no SAN or NAS<br />Uses NFS for storage access<br />Also manages IP addresses for HA<br />
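The 4:2 and 6:3 figures follow directly from mirrored replication: usable capacity is half of what the nodes contribute. A tiny illustrative calculation (the two raw units per node are an assumption for the example, not a VSA constant):

```python
def vsa_usable(nodes, raw_units_per_node=2):
    """Mirrored replication halves raw capacity (illustrative arithmetic only)."""
    raw = nodes * raw_units_per_node
    return raw, raw // 2

for nodes in (2, 3):
    raw, usable = vsa_usable(nodes)
    print(f"{nodes} nodes: {raw}:{usable}")   # prints 4:2 and 6:3
```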
    90. 90. Virtual Storage Appliance Options<br />
    91. 91. Whew! Let’s Sum Up<br />Server virtualization changes everything<br />Throw your old assumptions about storage workloads and presentation out the window<br />We (storage folks) have some work to do<br />New ways of presenting storage to the server<br />Converged I/O (Ethernet!)<br />New demand for storage virtualization features<br />New architectural assumptions<br />
    92. 92. Thank You!<br />Stephen Foskett<br />stephen@fosketts.net<br />twitter.com/sfoskett<br />+1(508)451-9532<br />FoskettServices.com<br />blog.fosketts.net<br />GestaltIT.com<br />