Optimize Oracle On VMware (Sep 2011)

Presentation given at RMOUG, Denver, CO, Feb 17-18, 2010
Updated September 2011

Speaker notes:
  • Apologies, I'm a database type... Quest is best known for Toad, but we also have enterprise monitoring across all levels of the stack. In Melbourne: SQL Navigator plus the Spotlight products. It's not a complete coincidence about the Star Trek theme.
  • As well as total memory for the VM, you can adjust memory "shares", which determine this machine's priority when in contention with other machines; a reservation, a guaranteed (sort of) amount of memory to allocate; and a limit, if you want to prevent the VM from getting all its memory. ESX memory-management mechanisms: memory reservations, memory shares, the idle memory tax, memory sharing, the balloon driver (vmmemctl), and ESX swapping.

Transcript

    1. Optimize Oracle RDBMS on VMware
       Guy Harrison
       Director, R&D Melbourne
       www.guyharrison.net
       Guy.harrison@quest.com
       @guyharrison
    2. Introductions
    3. Agenda
       • Motivations for virtualization
       • VMware ESX resource management: memory, CPU, IO
       • Paravirtualization (OVM) vs. hardware-assisted virtualization (ESX)
       • RAC on VMware
    4. Motivations for virtualization
    5. Resistance to database virtualization
    6. DB virtualization is happening
    7. Dba-village.com [poll charts: May 2009 vs. Mar 2010]
    8. Oracle virtualization is lagging...
    9. ESX memory management
    10. Managing ESX memory
        • ESX can "overcommit" memory: the sum of all VMs' physical memory allocations can exceed the ESX host's actual physical memory.
        • Memory is critical to Oracle server performance: SGA memory reduces datafile IO, and PGA memory reduces sort and hash IO (a query to check the instance's own targets follows).
        • ESX uses four methods to reclaim and share memory: memory page sharing, "ballooning", memory compression, and ESX swapping.
        • The DBA needs to configure memory carefully to avoid disaster.
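A quick way to check how much of the VM's memory the Oracle instance itself expects to keep resident is to ask the instance. A minimal sketch, assuming a 10g/11g database and a user with access to V$PARAMETER; which parameters are non-zero depends on how memory management is configured:

```sql
-- Report this instance's memory targets; together they approximate the VM
-- memory that must stay resident for good Oracle performance.
SELECT name, display_value
FROM   v$parameter
WHERE  name IN ('memory_target', 'sga_max_size', 'sga_target',
                'pga_aggregate_target');
```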
    11. Configuring VM memory [screenshot callouts]
        • VMs compete for memory in the range between reservation and limit.
        • Shares: relative memory priority for this VM.
        • Limit: maximum memory for the VM (dynamic).
        • Reservation: minimum memory for this VM.
    12. Monitoring VM memory
    13. ESX and VM memory [diagram: VM virtual memory maps to effective VM physical memory, which maps through ESX virtual memory to ESX physical memory and the ESX swapfile]
    14. ESX ballooning [diagram: the vmmemctl "balloon" inflates inside the VM, shrinking effective VM physical memory below apparent VM physical memory and forcing guest paging to the VM swapfile]
    15. ESX ballooning: as memory pressure grows, the ESX balloon driver (vmmemctl) forces the VM to page out memory to the VM's swapfile.
    16. ESX ballooning (continued)
        • Inside the VM, paging to the swapfile can be observed.
        • The guest OS determines which pages are paged out.
        • If LOCK_SGA=TRUE, the SGA should not be paged (a sketch follows).
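For reference, a hedged sketch of setting LOCK_SGA. It is a static parameter, so the change takes effect only after an instance restart, and on Linux the oracle user also needs a sufficient memlock limit; note that LOCK_SGA cannot be combined with Automatic Memory Management (MEMORY_TARGET):

```sql
-- Pin the SGA in the guest's physical memory so the guest OS cannot page it.
-- LOCK_SGA is static: it must go to the SPFILE and needs an instance restart.
ALTER SYSTEM SET lock_sga = TRUE SCOPE = SPFILE;
```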
    17. ESX swapping [diagram: VM virtual memory mapped via ESX virtual memory to ESX physical memory and the ESX swapfile]
    18. ESX swapping [diagram: part of the VM's apparent physical memory now resides in the ESX swapfile]
    19. ESX swapping: ESX swaps out VM memory to the ESX swapfile.
    20. ESX swapping (continued)
        • Within the VM, swapping cannot be detected.
        • ESX determines which memory pages go to disk.
        • Occurs particularly when VMware Tools are not installed.
        • Even if LOCK_SGA=TRUE, SGA memory might end up on disk.
    21. Avoiding ballooning and swapping: memory reservations help avoid both ballooning and ESX swapping (a configuration sketch follows).
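Reservations and shares are normally set through the vSphere Client, but they also surface in the VM's .vmx file. A rough sketch only; the key names below are as I recall them for ESX 4.x, and the values are purely illustrative:

```
memsize = "16384"
sched.mem.min = "12288"
sched.mem.shares = "normal"
```

Here memsize is the VM's configured memory in MB, and sched.mem.min is the reservation: the amount the VM should never be ballooned or swapped below.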
    22. ESX memory management
        • Memory page sharing: multiple VMs can share an identical page of memory (Oracle code pages, etc.).
        • Memory compression (new in vSphere 4.1): pages are compressed and written to a cache rather than to disk.
        • Swapping is more expensive than ballooning: it is slower to restore memory; the OS and Oracle get no choice about what gets paged; and "double paging" can occur, where the guest and ESX both page the same block of memory.
    23. Ballooning vs. swapping [chart]: Swingbench workload running on an Oracle database, from the VMware whitepaper http://www.vmware.com/files/pdf/perf-vsphere-memory_management.pdf
    24. VMware memory recommendations
        • Paging or swapping of PGA or SGA is almost always a Very Bad Thing(tm).
        • Use memory reservations to avoid swapping or ballooning.
        • Install VMware Tools to allow ballooning instead of swapping.
        • Set memory reservation = PGA + SGA + process overhead (see the worked sketch below).
        • Be realistic about memory requirements: on physical machines we are used to using all available memory; in a VM, use only the memory you need, freeing the rest for other VMs.
        • Oracle advisories (or Spotlight) can show how much memory is needed.
        • Reduce the VM reservation and Oracle memory targets in tandem to release memory.
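As a worked example of the reservation formula above, a sketch that derives a starting figure from the instance's own settings. The 20% allowance for server-process and OS overhead is an illustrative assumption, not an Oracle or VMware figure; check V$SGA_TARGET_ADVICE and V$PGA_TARGET_ADVICE before trusting the targets themselves:

```sql
-- Suggested VM memory reservation in MB: SGA + PGA target, plus an assumed
-- 20% allowance for process overhead (the 20% is illustrative only).
SELECT ROUND( (TO_NUMBER(sga.value) + TO_NUMBER(pga.value)) * 1.2
              / 1024 / 1024 ) AS suggested_reservation_mb
FROM   (SELECT value FROM v$parameter WHERE name = 'sga_max_size')         sga,
       (SELECT value FROM v$parameter WHERE name = 'pga_aggregate_target') pga;
```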
    25. (image-only slide)
    26. ESX CPU management
    27. ESX CPU management
        • If there are more virtual CPUs than ESX CPUs, vCPUs will sometimes wait for a physical CPU.
        • Time "stops" inside the VM when this occurs.
        • For multi-CPU VMs, it's (nearly) all or nothing.
        • A vCPU can be in one of three states: associated with an ESX CPU but idle; associated with an ESX CPU and executing instructions; or waiting for an ESX CPU to become available.
        • Shares and reservations determine which VM wins access to the ESX CPUs.
    28. Configuring VM CPU [screenshot callouts]: VMs compete for CPU in the range between reservation and limit; shares determine the relative CPU allocated when competing (a .vmx sketch follows).
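The CPU equivalents in the .vmx file, again a sketch from memory rather than a reference: sched.cpu.min is the reservation in MHz and sched.cpu.shares the relative priority, with values purely illustrative:

```
sched.cpu.min = "2000"
sched.cpu.shares = "high"
```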
    29. CPU utilization in the VM
        • "CPU Ready" is the amount of time a VM spends waiting on ESX for CPU.
        • Inside the VM, CPU statistics can be misleading.
    30. SMP for vCPUs
        • ESX usually has to schedule all vCPUs for a VM simultaneously; the more vCPUs, the harder this is.
        • Some CPU is also needed for ESX itself.
        • More is therefore not always better.
        (Thanks to Carl Bradshaw for letting me reprint this diagram from his Oracle on VMware whitepaper.)
    31. ESX CPU performance comparisons [charts]: VT enabled; vs. a 2-core 1.8 GHz physical machine.
    32. Programmatic performance [charts]. NB: not a benchmark, just some informal measurements!
    33. Programmatic performance (2) [charts]
    34. ESX CPU recommendations
        • Use up-to-date chipsets and ESX software.
        • Allocate as few vCPUs as possible to each VM.
        • Use reservations and shares to prioritise access to ESX CPU.
        • Performance of CPU-critical workloads may be disappointing on older hardware.
        • Monitor ESX Ready time to determine the "penalty" of competing with other virtual machines.
    35. ESX IO management
    36. Typical VMware disk configuration [diagram]
    37. IO resource allocation
        • Disk shares can be used to prioritize IO bandwidth.
        • This is poorly implemented prior to vSphere 4.1.
    38. Storage IO Control
        • Prior to vSphere 4.1, disk shares could be applied only at the VM level, and only within a single ESX host.
        • vSphere 4.1 Storage IO Control (SIOC): manages disk-share priorities for all VMs attached to the same datastore; is triggered by high ("congested") latency; can be enabled globally at the datastore level; and enables equitable distribution even when left at defaults.
    39. Storage IO Control [diagram]
    40. vSphere 4.1 SIOC [chart]
    41. SIOC won't make up for a poorly configured IO layout.
    42. Performant VMware disk configuration [diagram]
    43. Optimal configuration: each virtual disk directly mapped via RDM to a dedicated RAID 0+1 group; 41 spindles! See "Oracle Database Scalability in VMware ESX" at www.vmware.com/oracle.
    44. ESX IO recommendations
        • Follow normal best practice for physical disks.
        • Avoid sharing disk workloads.
        • Use dedicated datastores using VMFS.
        • Align virtual disks to physical disks?
        • Consider Raw Device Mapping (RDM).
        • Consider SIOC in vSphere 4.1.
        • If you can't optimize IO, avoid IO: tune, tune, tune SQL; prefer indexed paths; configure memory appropriately; and don't forget about temp IO from sorts and hash joins (see the query below).
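To gauge whether temp IO matters for a given instance, the cumulative counters can be checked directly. A minimal sketch; the statistic names are the standard V$SYSSTAT names in 10g/11g:

```sql
-- Cumulative IO against temporary segments (sort and hash-join spill to
-- disk) since instance startup; large values suggest tuning PGA first.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('physical reads direct temporary tablespace',
                'physical writes direct temporary tablespace');
```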
    45. Shameless plugs
    46. (image-only slide)
    47. (image-only slide)
    48. (image-only slide)
    49. Paravirtualization vs. hardware virtualization
    50. Paravirtualization vs. "hardware virtualization"
        • Virtualization is not emulation: wherever possible, the hypervisor runs the OS's native code directly against the underlying hardware.
        • Because a virtualized operating system runs outside the privileged x86 "ring 0", direct calls to the hardware need special handling.
        • The three main approaches are: full virtualization (VMware on older hardware); paravirtualization (Xen, Oracle VM); and hardware-assisted virtualization (Intel VT, AMD-V).
    51. Full virtualization
        • Hardware calls from the VM are handled by the hypervisor, either by catching the calls as they occur at run time or by rewriting the VM image at load time (binary translation).
        • Requires no special hardware and supports any guest OS, but performance is relatively poor.
        • Used by ESX on older chipsets.
        [Diagram: VM above the hypervisor, which occupies ring 0 on the hardware]
    52. Hardware-assisted virtualization
        • Intel VT and AMD-V chips add a "root mode".
        • The VM can issue instructions from non-root ring 0; the CPU diverts these to the hypervisor.
        • No changes to the OS are required; performance is good; requires modern chipsets.
        [Diagram: VM in non-root-mode ring 0; hypervisor in root mode on the hardware]
    53. Paravirtualization
        • The VM operating system is rewritten to translate device calls into "hypercalls".
        • Hypercalls are handled by a special VM (dom0 in Xen/OVM).
        • Good performance, but requires a modified VM OS.
        • Xen can use either paravirtualization or hardware assist.
        [Diagram: guest VM (domU) and management VM (dom0) in ring 0, above the hypervisor and hardware]
    54. RAC and ESX
    55. Paravirtualization, ESX and RAC
        • Prior to 11.2.0.2, Oracle relied on paravirtualized kernels to maintain time synchronization for RAC clusters.
        • From 11.2.0.2, Oracle uses the Cluster Time Synchronization Service (CTSS) to maintain clock sync, and this works on ESX (a check command follows).
        • Therefore, Oracle supports RAC on VMware ESX only from 11.2.0.2 onwards.
        • See My Oracle Support note 249212.1.
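To confirm that CTSS (rather than NTP) is responsible for time synchronization on an 11.2 cluster, Clusterware provides a check command; a sketch, run from the Grid Infrastructure home:

```
crsctl check ctss
```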
    56. References
        • Latest version of this presentation:
          http://www.slideshare.net/gharriso/optimize-oracle-on-vmware-5271530
        • My blog (www.guyharrison.net):
          http://guyharrison.squarespace.com/blog/2010/2/22/memory-management-for-oracle-databases-on-vmware-esx.html
          http://guyharrison.squarespace.com/blog/2010/4/9/esx-cpu-optimization-for-oracle-databases.html
          http://guyharrison.squarespace.com/blog/2010/7/12/stolen-cpu-on-xen-based-virtual-machines.html
        • VMware papers:
          http://www.vmware.com/files/pdf/perf-vsphere-memory_management.pdf
          http://www.vmware.com/files/pdf/Oracle_Databases_on_vSphere_Deployment_Tips.pdf
          http://www.vmware.com/files/pdf/techpaper/VMW-vSphere41-SIOC.pdf
