VMware vSphere: Taking a Trip Down Memory Lane


Published in: Data & Analytics


  1. www.metron-athene.com
     Taking a Trip Down “vSphere” Memory Lane
     Jamie Baker, Principal Consultant
     jamie.baker@metron-athene.com
  2. Agenda
     • Memory Management Concepts
     • Memory Reclamation / Overcommitment
     • Resource Pool – Limits and Enforcement
     • Performance Management Reporting
     • Troubleshooting and Best Practices
     • References
  3. Memory Management Concepts (24/07/2014)
     • Memory virtualization is the next critical component
     • Processes see virtual memory
     • Guest operating systems use page tables to map virtual memory addresses to physical memory addresses
     • The Memory Management Unit (MMU) translates virtual addresses to physical addresses, and the Translation Lookaside Buffer (TLB) cache helps the MMU speed up these translations
     • The page table is consulted if a TLB hit is not achieved
     • The TLB is updated with the virtual-to-physical address mapping when the page table walk is completed
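The translation path on this slide (TLB hit, otherwise a page table walk followed by a TLB fill) can be sketched as a small cache in front of a page table. This is purely illustrative, not VMware code; all addresses and mappings are hypothetical.

```python
# Minimal sketch of TLB behaviour: a small cache in front of a page
# table. Not VMware code; addresses and mappings are hypothetical.

PAGE_TABLE = {0x1000: 0x9000, 0x2000: 0xA000}  # virtual -> physical
tlb = {}                                       # translation cache

def translate(va):
    """Return the physical address for va, filling the TLB on a miss."""
    if va in tlb:               # TLB hit: no page table walk needed
        return tlb[va]
    pa = PAGE_TABLE[va]         # TLB miss: walk the page table
    tlb[va] = pa                # update the TLB with the new mapping
    return pa

print(hex(translate(0x1000)))  # first access: miss, then cached
print(hex(translate(0x1000)))  # second access: TLB hit
```

The second call returns from the `tlb` dict without touching `PAGE_TABLE`, mirroring why TLB hits are the fast path.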
  4. MMU Virtualization
     • Hosting multiple virtual machines on a single host requires:
       – Another level of virtualization: host physical memory
     • The Virtual Machine Monitor (VMM) maps “guest” physical addresses (PA) to host physical addresses (MA)
     • To support the guest operating system, the MMU must be virtualized by using:
       – Software technique: shadow page tables
       – Hardware technique: Intel EPT and AMD RVI
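The two mappings above compose: a guest virtual address resolves first to a guest physical address, then to a host machine address. A hypothetical sketch (illustrative addresses only, not VMware internals):

```python
# Minimal sketch of two-level address mapping, as in hardware MMU
# virtualization. Not VMware code; all addresses are hypothetical.

GUEST_PT = {0x1000: 0x5000}    # guest page table: VA -> guest PA
NESTED_PT = {0x5000: 0xC000}   # nested (EPT/RVI) table: PA -> host MA

def walk(va):
    """Walk both tables, as the hardware does on a TLB miss."""
    pa = GUEST_PT[va]          # first level: guest page table
    ma = NESTED_PT[pa]         # second level: nested page table
    return ma
```

The double lookup is why a miss is more expensive under hardware MMU virtualization, as the next slides note.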
  5. Software MMU – Shadow Page Tables
     • Are created for each primary page table
     • Consist of two mappings: VA -> PA and PA -> MA
     • Accelerate memory access
       – The VMM points the hardware MMU directly at the shadow page tables
       – Memory access runs at native speed
       – Ensures the VM cannot access host physical memory that is not associated with it
  6. Hardware MMU Virtualization
     • AMD RVI and Intel EPT permit two levels of address mapping
       – Guest page tables
       – Nested page tables
     • When a virtual address is accessed, the hardware walks both the guest page tables and the nested page tables
     • Eliminates the need for the VMM to synchronize shadow page tables with guest page tables
     • Can affect the performance of applications that stress the TLB
       – Increases the cost of a page walk
       – Can be mitigated by the use of large pages
  7. Memory Virtualization Overhead
     • Software MMU virtualization incurs CPU overhead:
       – When new processes are created (new address spaces are created)
       – When context switching occurs (address spaces are switched)
       – When running large numbers of processes (shadow page tables need updating)
       – When allocating or deallocating pages
     • Hardware MMU virtualization incurs CPU overhead:
       – When there is a TLB miss
       – Overall a performance win over shadow page tables
  8. Memory Reclamation Challenges
     • VM physical memory is not “freed”
       – Memory is moved to the “free” list
     • The hypervisor is not aware when the VM releases memory
       – It has no access to the VM's “free” list
       – The VM can accrue lots of host physical memory
     • Therefore, the hypervisor cannot reclaim released VM memory
  9. VM Memory Reclamation Techniques
     • The hypervisor relies on these techniques to “free” host physical memory
     • Transparent page sharing (default)
       – Redundant copies are reclaimed
     • Ballooning
       – Forces the guest OS to “free” up guest physical memory when host physical memory is low
       – The balloon driver is installed with VMware Tools
     • Memory compression
       – Reduces the number of memory pages the host needs to swap out
       – Decompression latency is much smaller than swap-in latency
       – Compressing memory pages has significantly less performance impact
     • Swap to host cache
       – Allows users to configure a special swap cache on SSD storage
       – Much faster access than the regular host-level swap area, significantly reducing access latency
     • Host-level (hypervisor) swapping
       – Used when TPS and ballooning are not enough
       – Swaps out guest physical memory to the swap file
       – Might severely penalize guest performance
  10. Memory Management Reporting
      [Chart: Production Cluster Memory Shared, Ballooned and Swapped – VIXEN (ESX); average swap space in use (MB), average memory used by memory control (MB) and average memory shared across VMs (MB)]
  11. Why Does the Hypervisor Reclaim Memory?
      • The hypervisor reclaims memory to support memory overcommitment
      • An ESX host's memory is overcommitted when the total amount of VM physical memory exceeds the total amount of host physical memory
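The overcommitment condition above is simple arithmetic; a sketch (the function name and the figures are illustrative, not a VMware API):

```python
def is_overcommitted(vm_memory_mb, host_memory_mb):
    """True when total configured VM memory exceeds host physical memory."""
    return sum(vm_memory_mb) > host_memory_mb

# Three VMs totalling 16 GB on a 12 GB host: overcommitted.
print(is_overcommitted([4096, 4096, 8192], 12288))  # True
```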
  12. When to Reclaim Host Memory
      • ESX/ESXi maintains four host free-memory states and associated thresholds:
        – High (6%), Soft (4%), Hard (2%), Low (1%)
      • If host free memory drops towards the stated thresholds, the following reclamation technique is used:
        – High: none
        – Soft: ballooning
        – Hard: swapping and ballooning
        – Low: swapping
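The state-to-technique mapping above can be sketched as follows; the state names and thresholds come from the slide, while the helper itself is hypothetical:

```python
def reclamation_action(free_pct):
    """Map host free memory (% of total) to the reclamation technique
    listed on the slide for each free-memory state."""
    if free_pct > 6:       # High state: no reclamation needed
        return "none"
    if free_pct > 4:       # Soft state
        return "ballooning"
    if free_pct > 2:       # Hard state
        return "swapping and ballooning"
    return "swapping"      # Low state
```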
  13. vSwp File Usage and Placement Guidelines
      • Used when memory is overcommitted
      • A vSwp file is created for every VM
      • Default placement is with the VM files
      • Can affect vMotion performance if the vSwp file is not located on shared storage
  14. VMkernel Swap
      [Chart: VM memory split between balloon, swap file and reservation, 0–100%]
      Example:
      • Assume maximum memory contention
      • By default, up to 65% of VM memory can be reclaimed by the balloon driver
      • The example reservation is 30%
      • The remaining 5% is in the VMkernel swap (.vSwp) file
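The 65% / 30% / 5% example above works out as follows. This helper is hypothetical, assuming the slide's maximum-contention split where everything not reserved or balloonable falls to the .vSwp file:

```python
def memory_split(vm_mb, reservation_mb, balloon_pct=0.65):
    """Under maximum contention: balloonable memory, reserved memory,
    and the remainder covered by the VMkernel swap (.vSwp) file.
    Hypothetical helper based on the slide's 65%/30%/5% example."""
    balloon = vm_mb * balloon_pct
    swap = max(vm_mb - reservation_mb - balloon, 0)
    return balloon, reservation_mb, swap

# A 1,000 MB VM with a 30% (300 MB) reservation:
# 650 MB balloonable, 300 MB reserved, 50 MB (5%) in the .vSwp file.
```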
  15. Resource Pool – Memory Reporting
      [Chart: ESX Host (Vixen) Priority Guests – RP Memory Limit vs. Memory Used Per Guest, 05/17/2010; pool limit, guest memory usage and pool memory usage, 0–25,000 MB]
  16. Use of Limits
      [Screenshots: Web 2 and Web 3 VMs]
  17. Enforcing Limits – Web2 VM
  18. Enforcing Limits – Web3 VM
  19. ESX Host (Web2) – VM Active Memory
  20. ESX Host (Web3) – VM Active Memory
      • Additional VM hosted: awacs-web3 (Yell VM data)
  21. Limits are enforced!
      [Chart: awacs-web3 and awacs-web2]
  22. Memory Limits – A Guide
      • Granted memory is overruled by a Resource Pool limit
      • Limits are enforced by reclaiming memory from the VM
      • Be aware of any limits – Resource Pool or VM
      • Monitor your VM Active and Host Consumed VM memory
      • Reduce the granted memory rather than enforce limits
      • Use reservations where necessary
  23. Monitoring VM and Host Memory Usage
      • Active
        – The amount of physical host memory currently used by the guest
        – Displayed as “Guest Memory Usage” in vCenter at guest level
      • Consumed
        – The amount of physical ESX memory allocated (granted) to the guest, accounting for savings from memory sharing with other guests
        – Includes memory used by the Service Console and VMkernel
        – Displayed as “Memory Usage” in vCenter at host level
        – Displayed as “Host Memory Usage” in vCenter at guest level
      • If consumed host memory > active memory
        – Host physical memory is not overcommitted
        – Active guest usage is low but a lot of host physical memory is assigned
        – Perfectly normal
      • If consumed host memory <= active memory
        – Active guest memory might not completely reside in host physical memory
        – This might point to potential performance degradation
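The consumed-versus-active rule of thumb above reduces to a single comparison; a hypothetical helper, purely for illustration:

```python
def interpret(consumed_mb, active_mb):
    """Apply the consumed vs. active rule of thumb from the slide."""
    if consumed_mb > active_mb:
        return "normal"       # host memory not overcommitted for this VM
    return "investigate"      # active memory may not all be resident

print(interpret(2048, 512))   # plenty of headroom: normal
print(interpret(512, 512))    # active >= consumed: investigate
```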
  24. Active and Consumed – Report
      [Chart: ORMNVAT01 VM – Host Consumed vs. Active VM Memory (MB), 22/11/2011; host memory consumed by the VM, Windows used memory, active memory and total physical memory, 0–4,500 MB]
  25. Memory Troubleshooting
      1. Active host-level swapping
         – Cause: excessive memory overcommitment
         – Resolution:
           • Reduce memory overcommitment (add physical memory / reduce VMs)
           • Enable the balloon driver in all VMs
           • Reduce memory reservations and use shares
      2. Guest operating system paging
         – Monitor the host's ballooning activity
         – If host ballooning > 0, look at the VM ballooning activity
         – If VM ballooning > 0, check for high paging activity within the guest OS
      3. When swapping occurs before ballooning
         – Many VMs are powered on at the same time
           • The VMs might access a large portion of their allocated memory
           • At the same time, the balloon drivers have not started yet
           • This causes the host to swap VMs
  26. Memory Performance Best Practices
      • Allocate enough memory to hold the working set of the applications running in the virtual machine, thus minimizing swapping
      • Never disable the balloon driver
      • Keep transparent page sharing enabled
      • Avoid overcommitting memory to the point that it results in heavy memory reclamation
  27. References
      • http://www.vmware.com/files/pdf/vsphere_pricing.pdf
      • http://www.vmware.com/technical-resources/performance/resources.html
      • http://www.metron-athene.com/training/webinars/index.html
  28. Taking a Trip Down “vSphere” Memory Lane
      Jamie Baker, Principal Consultant
      jamie.baker@metron-athene.com