VMworld 2013: What's New in vSphere Platform & Storage

VMworld 2013

Kyle Gleed, VMware
Cormac Hogan, VMware

Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare

VMworld 2013: What's New in vSphere Platform & Storage

  1. What's New in vSphere Platform & Storage
     Kyle Gleed, VMware
     Cormac Hogan, VMware
     VSVC5005 #VSVC5005
  2. Agenda
      New vSphere Platform Features
      New vCenter Server Features
      New vSphere Storage Features
  3. vSphere 2013 Platform Features
  4. vSphere 5.5 Platform Improvements
      Scalability
       • Doubled several configuration maximums
       • Virtual Machine Compatibility ESXi 5.5 (aka Virtual Hardware Version 10)
      Performance
       • Expanded vGPU support
       • Improved power management with support for server CPU C-states
      Availability
       • Hot-pluggable SSD PCIe devices
       • Support for Reliable Memory
  5. vSphere Host Configuration Maximums Increased
      Several vSphere 5.5 maximums doubled
       • Logical CPUs, virtual CPUs, NUMA nodes, RAM
      Virtualize any size workload with confidence

      Item                     5.1      5.5
      Logical CPUs per host    160      320
      NUMA nodes per host      8        16
      Virtual CPUs per host    2048     4096
      RAM per host             2TB      4TB
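
     From the ESXi Shell, a host's logical CPU count can be sanity-checked against
     the new 320-per-host limit with esxcli (a minimal sketch; exact output fields
     can vary by build):

         # Reports CPU Packages, CPU Cores, and CPU Threads; the CPU Threads
         # value is the logical CPU count that the 5.5 limit of 320 applies to.
         esxcli hardware cpu global get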
  6. Virtual Machine Compatibility ESXi 5.5
     • aka Virtual Hardware 10
     • LSI SAS support for Solaris 11
     • Enablement for the latest CPU architectures
     • New SATA controller: Advanced Host Controller Interface (AHCI)
       • Virtual disks and CD-ROM devices
       • 30 devices per controller
       • 4 controllers per VM
       • Total of 120 devices per VM
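
     As a rough illustration of what the new controller looks like at the VM
     configuration level, here is a minimal .vmx sketch. The file and device
     names are hypothetical, and the exact keys should be verified against a VM
     actually configured through the Web Client:

         virtualHW.version = "10"            # VM Compatibility: ESXi 5.5
         sata0.present = "TRUE"              # first AHCI SATA controller (up to sata3)
         sata0:0.present = "TRUE"
         sata0:0.fileName = "disk1.vmdk"     # virtual disk on the SATA controller
         sata0:1.present = "TRUE"
         sata0:1.deviceType = "cdrom-image"  # CD-ROM devices can also attach to SATA
         sata0:1.fileName = "install.iso"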
  7. Expanded vGPU Support
      Added support for AMD GPUs
       • NVIDIA available since 5.1
      Three rendering modes:
       • Automatic = use the GPU when available, otherwise use software rendering
       • Hardware = a GPU is required
         • Power-on fails if no GPU is present
         • vMotion compatibility check fails if there is no GPU at the destination
       • Software = do not use the GPU; software rendering only
      vMotion between GPU vendors
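
     For reference, the renderer mode is also visible in the VM's .vmx file.
     mks.enable3d is a standard option; the mks.use3dRenderer key below is an
     assumption about how the 3D Renderer drop-down maps to the configuration
     file, so verify it on a VM configured through the Web Client:

         mks.enable3d = "TRUE"             # enable 3D support for this VM
         mks.use3dRenderer = "automatic"   # assumed key; expected values are
                                           # "automatic", "hardware", or "software"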
  8. Expanded vGPU Support (cont.)
      vGPU requirements:
       • AMD or NVIDIA graphics card (GPU)
         • See the vendor websites for supported cards
       • 3D graphics must be supported by the guest operating system
         • http://www.vmware.com/resources/compatibility/search.php
       • Virtual machine:
         • Compatibility ESXi 5.0 or higher (vHW 8); Windows 8 requires vHW 9
         • VMware Tools must be installed
       • Linux distributions must have a 3.2 or later kernel
         • Most modern Linux distributions package our drivers by default
         • VMware is the only vendor accelerating the entire Linux graphics driver stack and providing it as free software!
  9. Hot-Pluggable SSD PCIe Devices
      Hot-plug (add/remove) PCIe SSD drives on a running ESXi host with no downtime
       • A PCIe I/O expansion chassis can provide hot-plug of PCIe devices to an ESXi host
      Both orderly and surprise hot-plug operations are supported
       • Orderly: initiated through hardware elements or a software interface
       • Surprise: initiated by physically removing or adding a device without notifying the system
      Requirements
       • Hardware and BIOS must support PCIe hot-plug
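
     After hot-adding a PCIe SSD, its presence can be confirmed from the ESXi
     Shell; a minimal sketch with no device-specific arguments assumed:

         # Scan the PCI bus for the newly added device
         lspci
         # Detailed per-device PCI information
         esxcli hardware pci list
         # Rescan so the storage stack picks up the new device
         esxcli storage core adapter rescan --all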
 10. Support for Reliable Memory
      Reduces memory corruption
       • Memory corruption = PSOD = BAD!
       • Provides greater uptime and reliability for ESXi
      How does it work?
       • A feature of the hardware
       • Some memory is more "reliable" than other memory; this is reported up to ESXi so it can optimize placement
      Protecting critical components:
       • VMkernel
       • UW (User Worlds)
       • Init thread
       • hostd and watchdog
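
     Whether the platform reports any reliable memory to ESXi can be checked
     from the ESXi Shell (a sketch; the value is expected to read 0 bytes on
     hardware without this feature):

         # Reports total physical memory, NUMA node count, and the
         # amount of memory the hardware has flagged as Reliable Memory
         esxcli hardware memory get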
 11. Enhancements for CPU C-States
      Deep C-states are now used in the default Balanced policy
       • Saves much more power
       • Can even increase performance: a core can enter Turbo Mode frequencies sooner when other cores on the same physical CPU are in a deep C-state
      More aggressive settings in the Low Power policy
       • More eager to enter deeper C-states
      USB auto-suspend
       • Automatically puts idle USB hubs into a lower power state
       • An unused port does not draw much power by itself
       • BUT the controller still performs DMA
 12. Summary: vSphere Platform Features
      vSphere configuration maximum increases
       • 2x increase from vSphere 5.1
      Virtual Machine Compatibility ESXi 5.5 (vHW 10)
       • LSI SAS for Solaris 11, new CPU enablement, AHCI SATA controller support
      Expanded vGPU support
       • Support for AMD and NVIDIA GPUs, vMotion across GPU vendors
      Hot-plug SSD PCIe devices
       • Hot add/remove SSD devices without any downtime
      Support for Reliable Memory
       • Improved uptime and reliability
      Reduced power consumption with enhancements for CPU C-states
 13. vSphere 2013 vCenter Server Features
 14. vCenter Server 5.5 Improvements
      Security
       • Improved vCenter Single Sign-On
      Usability
       • vSphere Web Client enhancements
       • Increased platform support
      Availability
       • App HA
 15. vCenter Server 5.5 – Single Sign-On
      New vCenter Single Sign-On
       • Improved installation experience
       • Improved Active Directory integration
         • One-way and two-way trusts
         • Multi-forest and single-forest
       • Built-in high availability
       • Continued support for local authentication
       • No manual database configuration
         • SQL authentication no longer required
         • No need to create database user accounts
 16. vCenter Server 5.5 – Web Client
      vSphere Web Client
       • Increased platform support
         • Added support for OS X: VM console access, deploying OVF templates, attaching client devices
       • Enhanced usability experience
         • Drag and drop
         • Improved filters
         • Recent items
 17. vSphere 2013 vCenter App HA
 18. vSphere App HA
      Protects applications running inside virtual machines
      Provides application visibility, monitoring, and restart
      Allows automated recovery from:
       • Host failure, guest OS crash, application failure
      Protected applications:
       • Apache Tomcat 6.0, 7.0
       • IIS 6.0, 7.0, 8.0
       • MSSQL 2005, 2008, 2008 R2, 2012
       • tc Server Runtime 6.0, 7.0
       • Apache HTTP Server 1.3, 2.0, 2.2
 19. vSphere App HA Policy
 20. vSphere App HA Application Availability
 21. vSphere HA VM-to-VM Anti-Affinity
      vSphere HA in vSphere 5.5 now respects DRS anti-affinity rules when restarting VMs in an HA/DRS cluster
       • DRS affinity rule: VMs must not run on the same host
 22. vSphere App HA
      Summary
       • Reduced application downtime
       • Protection for several off-the-shelf applications
       • Recovery from a variety of failure scenarios
       • VM-to-VM anti-affinity
       • Optimal workload placement
      More information
       • BCO5047 – vSphere HA – What's New and Best Practices
 23. vSphere 2013 Storage Features
 24. New Features in vSphere Storage
      Scalability
       • Support for 62TB VMDKs
      Performance
       • 16Gb end-to-end (E2E) Fibre Channel support
       • vSphere Flash Read Cache (vFRC)
      Availability
       • MSCS supportability enhancements
       • Storage vMotion & Storage DRS compatibility with vSphere Replication
      Operations
       • PDL enhancements
       • VAAI UNMAP enhancements
       • VMFS heap enhancements
 25. Support for Larger VMDKs & vRDMs
      62TB VMDK
       • Supported on VMFS5 & NFS
       • No specific virtual hardware requirement
       • Requires ESXi 5.5
      62TB virtual mode RDMs also introduced in 5.5
       • No change for physical mode RDMs, which already support 64TB
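
     Creating or growing a large disk from the ESXi Shell might look like the
     sketch below; the datastore and file names are placeholders, and growing a
     disk past 2TB must be done with the VM powered off (see the support matrix
     on the next slide):

         # Create a new 62TB thin-provisioned VMDK
         vmkfstools -c 62t -d thin /vmfs/volumes/Datastore1/bigvm/bigvm.vmdk
         # Offline-extend an existing VMDK to 62TB (VM powered off)
         vmkfstools -X 62t /vmfs/volumes/Datastore1/bigvm/bigvm.vmdk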
 26. Support for Larger VMDKs & vRDMs
      vSphere 5.5 introduces support for 62TB VMDKs & virtual RDMs
      Supported:
       • NFS & VMFS
       • Offline extension of 2TB+ VMDKs
       • vMotion
       • Storage vMotion
       • SRM / vSphere Replication
       • vFlash
       • Snapshots
       • Linked clones
       • SE sparse disks
      Not supported:
       • Online/hot extension of 2TB+ VMDKs
       • BusLogic virtual SCSI adapters
       • Virtual SAN (VSAN)
       • Fault Tolerance
       • VI (C#) Client
       • MBR-partitioned disks
       • vmfsSparse disks
 27. Heads Up! C# Client Interoperability
      SRM
       • 2TB+ VMDKs can be managed successfully via the vSphere Web Client
       • SRM still requires the C# client for management
       • Attempting to examine 62TB VMDK properties via the C# client can cause errors
      All new features/enhancements are supported via the Web Client
 28. 16Gb E2E Support
      With the release of vSphere 5.5, VMware now supports 16Gb end-to-end (E2E) Fibre Channel
 29. MSCS (Microsoft Cluster Services) Enhancements
      Microsoft Windows 2012 clustering supported
      Round Robin path policy supported
      FCoE & iSCSI protocols supported
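
     Setting and verifying Round Robin on a clustered LUN from the ESXi Shell;
     the naa identifier below is a placeholder for your device:

         # Assign the Round Robin path selection policy to the device
         esxcli storage nmp device set --device naa.600601601234 --psp VMW_PSP_RR
         # Confirm the active path selection policy
         esxcli storage nmp device list --device naa.600601601234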
 30. PDL AutoRemove
      PDL (Permanent Device Loss)
       • Occurs when a device fails or is incorrectly removed from the host
       • Detected via SCSI sense codes
       • In a PDL state, the host no longer sends I/O to the device
      PDL AutoRemove in 5.5
       • Automatically removes a device in a PDL state from the host
      Benefit of PDL AutoRemove
       • A device in a PDL state cannot accept more I/O, but it needlessly consumes one of the 256 device slots per host
       • The device is now removed automatically, since it is never coming back
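
     The behavior is governed by an advanced host setting; a sketch, assuming
     the ESXi 5.5 defaults where AutoRemove is enabled out of the box:

         # Show the current setting (1 = enabled)
         esxcli system settings advanced list -o /Disk/AutoremoveOnPDL
         # Disable it if your design requires devices to stay registered
         esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 0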
 31. VAAI UNMAP Improvements
      vSphere 5.5 introduces a new, simpler VAAI UNMAP/reclaim command
       • # esxcli storage vmfs unmap
       • Reclaim size is now specified in blocks rather than as a percentage value
       • Dead space is reclaimed in increments rather than all at once
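
     A worked example; the datastore label is a placeholder, and -n is optional
     (it controls how many VMFS blocks are reclaimed per iteration, defaulting
     to 200):

         # Reclaim dead space on a thin-provisioned datastore, 200 blocks at a time
         esxcli storage vmfs unmap -l Datastore1 -n 200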
 32. VMFS Heap Improvements
      An issue with previous versions of the VMFS heap meant there were concerns when a single ESXi host addressed more than 30TB of open files.
      ESXi 5.0 P5 & 5.1 U1 introduced a larger heap size to deal with this.
      vSphere 5.5 introduces a much improved heap eviction process, so the larger heap size, which consumes memory, is no longer needed.
      With a maximum of 256MB of heap, an ESXi 5.5 host can access the full 64TB address space of a VMFS volume.
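
     On 5.0/5.1 hosts the workaround was a manually enlarged heap via an
     advanced option; under 5.5 this tuning is no longer necessary. A sketch
     for checking the old tunable on a pre-5.5 host:

         # Show the configured VMFS heap size (pre-5.5 tunable)
         esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB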
 33. Storage DRS, Storage vMotion & vSphere Replication Interop
      Previously, if a VM being replicated via vSphere Replication was migrated to another datastore, the migration triggered a full sync because the persistent state files (.psf) were deleted: all of the disk contents had to be read and checksummed on each side.
      In vSphere 5.5, the .psf files move with the virtual machine, so it retains its current replication state.
      This means virtual machines at the production site may now be Storage vMotion'ed, and may participate in Storage DRS datastore clusters, without impacting vSphere Replication's RPO (Recovery Point Objective).
 34. What is vSphere Flash Read Cache?
      Flash as a new storage tier in vSphere
      Key features:
       • Hypervisor-based, software-defined flash storage tier solution
       • Aggregates local flash devices to provide a clustered flash resource for VM and vSphere host consumption (Virtual Flash Host Swap Cache)
       • Leverages local flash devices as a cache
       • Integrated with vCenter, HA, DRS, and vMotion
       • Scale-out storage capability: 32 nodes
 35. Why vSphere Flash Read Cache?
      • A cache is high-speed memory that can be either a reserved section of main memory or a storage device.
      • Supports write-through cache mode: (1) the write lands in the cache, (2) it is committed to the backing storage, and (3) only then is the write acknowledged.
      • Improves virtual machine performance by leveraging local flash devices.
      • Makes it possible to virtualize suitable business-critical applications.
 36. vSphere Flash Read Cache: Fully Integrated with vSphere
      • All management tasks pertaining to the installation, configuration & monitoring of vSphere Flash Read Cache are done from the vSphere Web Client.
 37. vSphere Flash Read Cache – Flash Resource
      • Each host creates a virtual flash resource containing one or more flash-based devices.
      • There can be only one virtual flash resource per vSphere host.
      • Flash-based devices are pooled into a new file system called VFFS.
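
     The flash resource is configured in the Web Client, but it can be
     inspected from the ESXi Shell. The esxcli namespace below is as introduced
     in 5.5; treat the exact subcommands as a sketch to verify on your build:

         # Flash devices available to, or claimed by, VFFS
         esxcli storage vflash device list
         # Loaded caching modules
         esxcli storage vflash module list
         # Per-VMDK read caches currently active on this host
         esxcli storage vflash cache list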
 38. vSphere Flash Read Cache – Virtual Flash Host Swap Cache
      • Virtual Flash Host Swap Cache configuration is only available via the vSphere Web Client.
      • Up to 4TB of the vSphere flash resource can be used for host swap caching.
 39. vSphere Flash Read Cache – Virtual Machine Flash Cache
      • Virtual machine Flash Read Cache configuration is only available via the vSphere Web Client.
      • Configure Flash Read Cache per VMDK; size it to match the working set.
      • Block size: 4KB – 1024KB
 40. Virtual SAN: Radically Simple Storage
      Key features:
       • Policy-driven per-VM SLA
       • vSphere & vCenter integration
       • Scale-out storage: 3 to 8 nodes, each contributing SSD and hard disks to one aggregated VSAN datastore
       • Built-in resiliency
       • SSD caching
       • Converged compute & storage
 41. Virtual SAN: Radically Simple Storage
      Customer benefits:
       • Radically simple storage designed for virtual machines
       • Fast, resilient & dynamic
       • Lower TCO for comparable performance
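
     On a host running a VSAN-capable 5.5 build, cluster state can be checked
     from the ESXi Shell (a sketch; VSAN itself is configured from the Web
     Client):

         # Cluster membership and this node's role
         esxcli vsan cluster get
         # Local SSDs and HDDs claimed by VSAN
         esxcli vsan storage list
         # VMkernel interfaces tagged for VSAN traffic
         esxcli vsan network list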
 42. vSphere Storage Features Summary
      Support for 62TB VMDKs
      16Gb E2E support
      MSCS supportability enhancements
      PDL AutoRemove
      Storage vMotion and Storage DRS compatibility with vSphere Replication
      VAAI UNMAP & VMFS heap enhancements
      vSphere Flash Read Cache
      Virtual SAN
 43. Other VMware Activities Related to This Session
      Hands-on Labs:
       • HOL-SDC-1310 – vSOM 101
       • HOL-SDC-1308 – vSphere Flash Read Cache and VSAN
      Group discussions:
       • VSVC1003-GD – vSphere Core Upgrades with Kyle Gleed
       • STO1001-GD – VSAN with Cormac Hogan & VMware R&D
 44. Thank You
 45. What's New in vSphere Platform & Storage
     Kyle Gleed, VMware
     Cormac Hogan, VMware
     VSVC5005 #VSVC5005
