Virtualizing Oracle Databases with VMware


  1. Virtualizing Oracle Databases with VMware
     Richard McDougall, Chief Performance Architect
     © 2009 VMware Inc. All rights reserved
  2. Agenda
     VMware Platform Introduction
     Why Virtualize Databases?
     Virtualization Technical Primer
     Performance Studies and Proof Points
     Deploying Databases in Virtual Environments
     •  Consolidation and Sizing
  3. VMware Virtualization Basics
  4. VMotion Technology
     VMotion moves running virtual machines from one host to another while maintaining continuous service availability
     •  Enables Resource Pools
     •  Enables High Availability
  5. Resource Controls
     Reservation
     •  Minimum service level guarantee (in MHz)
     •  Applies even when the system is overcommitted
     •  Must pass admission control
     Shares
     •  CPU entitlement is directly proportional to a VM's shares and depends on the total number of shares issued
     •  Abstract number; only the ratio matters
     Limit
     •  Absolute upper bound on CPU entitlement (in MHz)
     •  Applies even when the system is not overcommitted
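     As a concrete illustration, these controls can also be expressed per VM in its .vmx file. This is a minimal sketch, assuming the standard sched.cpu.* / sched.mem.* option names; all values are illustrative, not recommendations:

       # Illustrative .vmx resource-control settings (values are examples only)
       sched.cpu.min = "500"        # reservation: 500 MHz guaranteed
       sched.cpu.max = "2000"       # limit: never more than 2000 MHz
       sched.cpu.shares = "2000"    # relative weight vs. other VMs
       sched.mem.min = "1024"       # memory reservation in MB
       sched.mem.shares = "normal"  # or a numeric share value

     In practice the same settings are usually made through the vSphere client rather than by hand-editing the file.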
  6. Resource Control Example
     •  One VM runs at 100% of capacity
     •  Add a 2nd VM with the same number of shares: each gets 50%
     •  Add a 3rd VM with the same number of shares: each gets 33.3%
     •  Set the 3rd VM's limit to 25% of total capacity
     •  Add a 4th VM with a reservation of 75% of total capacity: FAILED ADMISSION CONTROL
     •  Set the 1st VM's reservation to 50% of total capacity: it is guaranteed 50%, and the remaining VMs share the rest (37.5% shown on the slide)
  7. Resource Pools
     Motivation
     •  Allocate aggregate resources for sets of VMs
     •  Isolation between pools, sharing within pools
     •  Flexible hierarchical organization
     •  Access control and delegation
     What is a resource pool?
     •  Abstract object with permissions
     •  Reservation, limit, and shares
     •  Parent pool, child pools, and VMs
     •  Can be used on a stand-alone host or in a cluster (group of hosts)
     Example: Pool A (L: not set, R: 600 MHz, S: 60 shares) and Pool B (L: 2000 MHz, R: not set, S: 40 shares) split the host 60%/40% across VM1-VM4
  8. Example Migration Scenario (4_4_0_0) with DRS
     [Diagram: vCenter with DRS migrates VMs between two HP ProLiant DL380 G6 hosts, moving VMs from a heavily loaded host to a lightly loaded one and turning an imbalanced cluster into a balanced cluster]
  9. DRS Scalability – Transactions per Minute (higher is better)
     [Chart: transactions per minute, DRS vs. no DRS, across run scenarios 2_2_2_2 through 5_3_0_0]
     •  An already-balanced cluster sees fewer gains
     •  Gains are higher (> 40%) with more initial imbalance
  10. DRS Scalability – Application Response Time (lower is better)
     [Chart: transaction response time (ms), DRS vs. no DRS, across run scenarios 2_2_2_2 through 5_3_0_0]
  11. VMware HA
     [Diagram: on host failure, VMs reboot on another VMware ESX host]
  12. VMware Fault Tolerance
     [Diagram: no reboot, seamless cutover of the VM to a second VMware ESX host]
  13. vApp: The Application of the Cloud
     An uplifting of a virtualized workload
     •  VM = virtualized hardware box
     •  vApp = virtualized software solution
     •  Takes the benefits of virtualization (encapsulation, isolation, and mobility) higher up the stack
     Properties:
     •  Comprised of one or more VMs (may be multi-tier applications, e.g. WebSphere, Tomcat, Exchange, SAP)
     •  Encapsulates requirements on the deployment environment
     •  Distributed as an OVF package
     Example policies: 1. product: eCommerce; 2. topology; 3. resource requirements: CPU, memory, disk, bandwidth; 4. only port 80 is used; 5. DR RPO: 1 hour; 6. VRM: encrypt w/ SHA-1; 7. decommission in 2 months
     Built by:
     •  ISVs / virtual appliance vendors
     •  IT administrators
     •  SIs/VARs
  14. The Progression of Virtualization to Cloud
     •  1998: VMware Workstation (desktop virtualization)
     •  2001: VMware ESX (server virtualization)
     •  2003: VMware Infrastructure (complete virtualization platform with resource pools)
     •  2009: VMware vSphere (from the desktop through the datacenter to the cloud)
  15. Datacenter of the Future – Private Cloud
     •  On-demand capacity
     •  Pooling and load balancing of server, storage, and network resources
     •  Built-in availability, security, and scalability
     [Diagram: resource pools forming a "compute factory" across multiple vSphere hosts]
  16. vSphere 4.0 – The Most Complete Virtualization Platform
     Application Services
     •  Availability: clustering, data protection, fault tolerance
     •  Security: firewall, anti-virus, intrusion prevention, intrusion detection
     •  Scalability: dynamic resource sizing
     Infrastructure Services
     •  vCompute: hardware assist, enhanced live migration compatibility
     •  vStorage: storage management and replication, storage virtual appliances
     •  vNetwork: network management
  17. Business-Critical Application Momentum
     [Chart: % of VMware customers running each app in production on VMware, for MS Exchange, MS SQL, MS SharePoint, Oracle Middleware, Oracle DB, IBM WebSphere, IBM DB2, and SAP, ranging from 24% to 56%]
     Source: VMware customer survey, September 2008, sample size 1038. Data: within the subset of VMware customers running a specific app, the % that have at least one instance of that app in production in a VM.
     In a recent Gartner poll, 73% of customers claimed to use x86 virtualization for mission-critical applications in production.
     Source: Gartner IOM Conference (June 2008), "Linux and Windows Server Virtualization Is Picking Up Steam" (ID Number: G00161702)
  18. Agenda
     VMware Platform Introduction
     Why Virtualize Databases?
     Virtualization Technical Primer
     Performance Studies and Proof Points
     Deploying Databases in Virtual Environments
     •  Picking a Hardware Platform
     •  Configuring Storage
     •  Configuring the Virtual Machine
     •  OS Choices and Tuning
     •  Database Configuration
     •  Performance Monitoring
  19. Provision Databases On-Demand with Pre-Configured vApps
     •  Standardize on optimal app and OS configurations (e.g., SQL Enterprise Edition, 4 vCPU, 4 GB)
     •  Minimize configuration drift and errors
     •  Support multi-tier apps
     •  Accelerate app development and test
     •  Faster service availability, from lab to production
  20. Databases: Why Use VMs Rather than DB Virtualization?
     Virtualization at the hypervisor level provides the best abstraction
     •  Each DBA has their own hardened, isolated, managed sandbox
     Strong isolation
     •  Security
     •  Performance/resources
     •  Configuration
     •  Fault isolation
     Scalable performance
     •  Low-overhead virtual database performance
     •  Efficiently stack databases per host
  21. Agenda
     VMware Platform Introduction
     Why Virtualize Databases?
     Virtualization Technical Primer
     Performance Studies and Proof Points
     Deploying Databases in Virtual Environments
     •  Picking a Hardware Platform
     •  Configuring Storage
     •  Configuring the Virtual Machine
     •  OS Choices and Tuning
     •  Database Configuration
     •  Performance Monitoring
  22. VMware ESX Architecture
     •  CPU is controlled by the scheduler and virtualized by the monitor
     •  The monitor supports BT (binary translation), HW (hardware assist), and PV (paravirtualization)
     •  Memory is allocated by the VMkernel and virtualized by the monitor
     •  Network and I/O devices are emulated and proxied through native device drivers
     [Diagram: guests with virtual NIC and virtual SCSI on top of the monitor; the VMkernel with scheduler, memory allocator, virtual switch, file system, and NIC/I/O drivers on physical hardware]
  23. Agenda
     VMware Platform Introduction
     Why Virtualize Databases?
     Virtualization Technical Primer
     Performance Studies and Proof Points
     Deploying Databases in Virtual Environments
     •  Picking a Hardware Platform
     •  Configuring Storage
     •  Configuring the Virtual Machine
     •  OS Choices and Tuning
     •  Database Configuration
     •  Performance Monitoring
  24. Evolution of Performance for Large Apps on ESX
     Ability to satisfy performance demands, from the general population of apps up to 100% of mission-critical apps:
     •  ESX 2.x: overhead 30-60%; 2 vCPUs; 3.6 GB VM RAM; 64 GB phys RAM; 16 pCPU cores; <10,000 IOPS; 380 Mb/s network; binary-translation monitor
     •  VI 3.0: overhead 20-40%; 2 vCPUs; 16 GB VM RAM; 64 GB phys RAM; 16 pCPU cores; 10,000 IOPS; 800 Mb/s; 64-bit OS support; gen-1 HW virtualization monitor (VT/SVM)
     •  VI 3.5: overhead 10-30%; 4 vCPUs; 64 GB VM RAM; 256 GB phys RAM; 64 pCPU cores; 100,000 IOPS; 9 Gb/s; 64-bit OS support
     •  vSphere 4.0: overhead 2-15%; 8 vCPUs; 255 GB VM RAM; 1 TB phys RAM; 64 pCPU cores; 350,000 IOPS; 28 Gb/s; 320 VMs per host; 512 vCPUs per host; gen-2 HW virtualization monitor (EPT/NPT)
  25. Can I Virtualize CPU-Intensive Applications?
     VMware ESX 3.x compared to native:
     •  SPECcpu results covered in the O. Agesen and K. Adams paper
     •  WebSphere results published jointly by IBM and VMware
     •  SPECjbb results from recent internal measurements
     Most CPU-intensive applications have very low overhead
  26. Debunking the Myth: High Throughput, Low Overhead I/O
     •  Maximum reported storage: 365K IOPS (vs. 100K on VI3)
     •  Maximum reported network: 16 Gb/s (measured on VI3)
  27. Can I Virtualize High Networking I/O Applications?
     Overall response time is lower when CPU utilization is less than 100%, due to multi-core offload
  28. Enterprise Workload Demands vs. Capabilities
     Workload         Requires                              vSphere 4 provides
     Oracle 11g       8 vCPUs for 95% of DBs                8 vCPUs per VM
                      64 GB for 95% of DBs                  256 GB per VM
                      60k IOPS max for OLTP @ 8 vCPUs       120k IOPS per VM
                      77 Mbits/sec for OLTP @ 8 vCPUs       9900 Mbits/sec per VM
     SQL Server       8 vCPUs for 95% of DBs                8 vCPUs per VM
                      64 GB @ 8 vCPUs                       256 GB per VM
                      25k IOPS max for OLTP @ 8 vCPUs       120k IOPS per VM
                      115 Mbits/sec for OLTP @ 8 vCPUs      9900 Mbits/sec per VM
     SAP SD           8 vCPUs for 90% of SAP installs       8 vCPUs per VM
                      24 GB @ 8 vCPUs                       256 GB per VM
                      1k IOPS @ 8 vCPUs                     120k IOPS per VM
                      115 Mbits/sec for OLTP @ 8 vCPUs      9900 Mbits/sec per VM
     Exchange         4 vCPUs per VM, multiple VMs          8 vCPUs per VM
                      16 GB @ 4 vCPUs                       256 GB per VM
                      1000 IOPS for 2000 users              120k IOPS per VM
                      8 Mbits/sec for 2000 users            9900 Mbits/sec per VM
     Apache SPECweb   2-4 vCPUs per VM, multiple VMs        8 vCPUs per VM
                      8 GB @ 4 vCPUs                        256 GB per VM
                      100 IOPS for 2000 users               120k IOPS per VM
                      3 Gbits/sec for 2000 users            9900 Mbits/sec per VM
  29. Measuring the Performance of DB Virtualization
     •  Throughput delivered
     •  Minimal overheads
  30. How large is your database instance? (one VM shown)
  31. I/O in Action: Oracle/TPC-C*
     •  ESX achieves 85% of native performance with an industry-standard OLTP workload on an 8-vCPU VM (58,000 IOPS)
     •  1.9x increase in throughput with each doubling of vCPUs
     [Chart: native vs. VM scaling ratio at 1, 2, 4, and 8 virtual/physical CPUs]
  32. Eight-vCPU Oracle System Characteristics
     Metric                             8-vCPU VM
     Business transactions per minute   250,000
     Disk IOPS                          60,000
     Disk bandwidth                     258 MB/s
     Network packets/sec                27,000
     Network throughput                 77 Mb/s
     * Our benchmark was a fair-use implementation of the TPC-C business model; our results are not TPC-C compliant results, and not comparable to official TPC-C results
  33. Oracle/TPC-C* Experimental Details
     •  Host was an 8-CPU system with Intel Xeon 5500 processors
     •  OLTP benchmark: fair-use implementation of the TPC-C workload
     •  Software stack: RHEL 5.1, Oracle 11g R1, internal build of ESX (ESX 4.0 RC)
     Were there many tweaks needed to get this result? Not really…
     •  ESX development build with these features: async I/O, pvscsi driver, virtual interrupt coalescing, topology-aware scheduling, and EPT (hardware-MMU-enabled processor)
     •  The only ESX "tunable" applied: static vmxnet TX coalescing (3% improvement in performance)
  34. VMware vSphere Enables You to Use All Those Cores…
     •  ESX scaling keeps up with growing core counts
     •  Virtualization provides a means to exploit the hardware's increasing parallelism
     •  Most applications don't scale beyond 4/8-way
  35. "Bonus" Memory During Consolidation: Page Sharing!
     Content-based
     •  A hint (hash of page content) is generated for 4K pages
     •  The hint is used to find a match
     •  If matched, a bit-by-bit comparison confirms the pages are identical
     COW (copy-on-write)
     •  Shared pages are marked read-only
     •  A write to the page breaks sharing
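     To see how much memory page sharing is actually reclaiming on a host, one hedged approach is esxtop's memory view; the PSHARE/MB line is the field that reports the shared/common/saving totals (field name per common esxtop usage):

       # On the ESX host (or via resxtop), press 'm' for the memory view;
       # the PSHARE/MB line shows shared, common, and saving in megabytes.
       esxtop
       # Batch-mode alternative for logging (10s samples, 60 iterations):
       esxtop -b -d 10 -n 60 > esxtop-mem.csv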
  36. Multi-VM Performance: DVD-Rental Workload
     Simulates a large multi-tier application with an RDBMS
     •  Simulates DVD store transactions
     •  Java client tier
     •  Microsoft SQL Server and Oracle Database
     SQL Server setup: Dell PE2950, 2 x Intel X5450 quad-core Xeon, 32 GB RAM
     Oracle setup: Sun x4600 M2 (16 cores), VMware ESX 3.5, EMC CLARiiON CX3-40, Oracle 10g R2, RHEL4 Update 4, 64-bit
  37. Consolidating Multiple Oracle VMs
     [Chart: aggregate TPM and CPU utilization vs. number of VMs, scaling to 16 cores and 256 GB RAM]
     An average of 1 GB of memory was saved per instance from page sharing
  38. Oracle Performance (Response Time)
     [Chart: average response time and CPU utilization vs. number of VMs]
     •  Oracle scales very well on ESX in consolidation scenarios
     •  Efficient, guaranteed resource allocation to each individual virtual machine
  39. Agenda
     VMware Platform Introduction
     Why Virtualize Databases?
     Virtualization Technical Primer
     Performance Studies and Proof Points
     Deploying Databases in Virtual Environments
     •  Consolidation and Sizing
     •  Picking a Hardware Platform
     •  Configuring Storage
     •  Configuring the Virtual Machine
     •  OS Choices and Tuning
     •  Database Configuration
     •  Performance Monitoring
  40. General Best Practices for Virtualizing DBs
     Characterize DBs into three rough groups:
     •  Green DBs – typically 70%. Ideal candidates for virtualization: well tuned, modest CPU consumption, fewer than 1000 IOPS, up to 4 cores.
     •  Yellow DBs – typically 25%. Likely candidates: may need some SQL tuning and monitoring to understand CPU and I/O requirements; 4-8 cores, >1000 IOPS; storage I/O planning and configuration required.
     •  Red DBs – typically 5%. Unlikely candidates until larger VMs are available: consume more than 8 physical cores, and there is not a lot of SQL tuning left to be done. (A quick way to measure a candidate's I/O and CPU rates is sketched below.)
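     As one hedged way to place a database host into these groups, sample its I/O rate and CPU utilization with standard Linux tools; the intervals and sample counts below are illustrative:

       # Sample disk IOPS and CPU on a candidate DB host (Linux)
       # r/s + w/s summed across the DB's devices approximates its IOPS.
       iostat -x 10 6        # extended device stats, 10s intervals, 6 samples
       mpstat -P ALL 10 6    # per-CPU utilization over the same window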
  41. Consolidation and Sizing
     [Chart: distribution of measured CPU utilization across a large population of systems]
     Consolidation targets are often <30% utilized
     •  Windows average utilization: 5-8%
     •  Linux/Unix average: 10-35%
  42. Sizing and Requirements
     Virtual machine sizing is different from physical sizing
     •  Don't just take the number of CPUs in the physical system as the vCPU requirement
     •  Many physical systems are sized for peak utilization, with ample headroom for future growth
     •  As a result, utilization is often very low on physical systems
     •  With virtual machines, it's not necessary to build in headroom
     •  For example, many databases running on 4-CPU systems can easily run in a 2-vCPU guest
     Moving older RISC/SPARC machines to virtual x86
     •  Even that large older-generation SPARC box may be a good candidate: 48 x 1.2 GHz SPARC cores = 1 x 8-core Nehalem VM
     •  Since most large SPARC machines are consolidated already, it's likely that your larger databases can run inside a VM
  43. Picking Hardware: Recent Hardware Has Lower Overhead
     [Chart: Intel architecture VMEXIT latencies in cycles, falling steadily across Prescott, Cedar Mill, Merom, Penryn, and Nehalem]
     Hardware virtualization support keeps improving from one CPU generation to the next
  44. Use Intel Nehalem or AMD Barcelona, or Later…
     Hardware memory management units (MMUs) improve efficiency
     •  AMD RVI is currently available
     •  Dramatic gains can be seen
     But some workloads see little or no value
     •  And a small few actually slow down
     [Chart: AMD RVI speedup for SQL Server, Citrix XenApp, and Apache compile workloads]
  45. Databases: Top Ten Tuning Recommendations
     1.  Optimize storage layout, # of disk spindles
     2.  Use a 64-bit database
     3.  Add enough memory to cache the DB and reduce I/O
     4.  Optimize storage layout, # of disk spindles
     5.  Use direct I/O, the high-performance uncached path in the guest operating system
     6.  Use asynchronous I/O to reduce system calls
     7.  Optimize storage layout, # of disk spindles
     8.  Use large MMU pages
     9.  Use the latest hardware, with AMD RVI or Intel EPT
     10. Optimize storage layout, # of disk spindles
  46. Databases: Workload Considerations
     OLTP
     •  Short transactions
     •  Limited number of standardized queries
     •  Small amounts of data accessed
     •  Uses data from only one source
     •  I/O profile: small synchronous reads/writes (2k-8k); heavy latency-sensitive log I/O
     •  Memory and I/O intensive
     DSS
     •  Long transactions
     •  Complex queries
     •  Large amounts of data accessed
     •  Combines data from different sources
     •  I/O profile: large, sequential I/Os (up to 1 MB); extreme bandwidth required; heavy read traffic against data volumes; little log traffic
     •  CPU, memory, and I/O intensive
     •  Indexing enables higher performance
  47. Databases: Storage Configuration
     Storage considerations
     •  VMFS or RDM
     •  Fibre Channel, NFS, or iSCSI
     •  Partition alignment (see the sketch after this list)
     •  Multiple storage paths
     •  OS/app, data, transaction log, and TempDB on separate physical spindles
     •  RAID 10 or RAID 5 for data, RAID 1 for logs
     •  Queue depth and controller cache settings
     •  TempDB optimization
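     One hedged way to check guest partition alignment on Linux: a partition's starting sector should fall on the array's stripe boundary. The 64 KB and 1 MiB offsets below are common conventions, not mandates:

       # Show partition start offsets in sectors (512 bytes each)
       fdisk -lu /dev/sdb
       # A start sector of 2048 = 1 MiB alignment; 128 = 64 KB alignment.
       # Older tools default to sector 63, which is misaligned for most arrays.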
  48. Disk Fundamentals
     Databases are mostly random I/O access patterns
     Accesses to disk are dominated by seek/rotate time
     •  10k RPM disks: 150 IOPS max, ~80 IOPS nominal
     •  15k RPM disks: 250 IOPS max, ~120 IOPS nominal
     Database storage performance is controlled by two primary factors
     •  Size and configuration of cache(s)
     •  Number of physical disks at the back end
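     To make the spindle math concrete, here is a back-of-the-envelope sketch using the nominal figures above; the target IOPS, the 70/30 read/write mix, and the RAID-10 write penalty of 2 are illustrative assumptions:

       # Rough spindle estimate: 12,000 host IOPS, 70/30 read/write,
       # RAID-10 (each write costs 2 back-end I/Os), 15k disks @ ~120 IOPS.
       required=12000
       reads=$(( required * 70 / 100 ))      # 8400 read IOPS
       writes=$(( required * 30 / 100 ))     # 3600 write IOPS
       backend=$(( reads + writes * 2 ))     # 15600 back-end IOPS
       echo "spindles needed: $(( (backend + 119) / 120 ))"   # ~130 disks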
  49. Disk Performance
     Higher sequential performance (bandwidth) on the outer tracks
  50. Databases: Storage Hierarchy
     •  In a recent study, we scaled up to 320,000 IOPS to an EMC array from a single ESX server (8K read/write mix)
     •  Cache as much as possible in caches: database cache, guest OS cache, controller cache
     •  Q: What's the impact on the number of disks if we improve the cache hit rate from 90% to 95%?
     •  A: Misses drop from 10 in 100 to 5 in 100, so back-end disk I/O is halved and the number of disks is reduced by 2x!
  51. Storage – VMFS or RDM
     RDM (raw device mapping)
     •  Provides direct access to a LUN from within the VM
     •  Allows portability between physical and virtual
     •  RAW means more LUNs, and more provisioning time
     VMFS
     •  Leverage templates and quick provisioning
     •  Fewer LUNs to watch and manage
     •  Scales better with Consolidated Backup
     •  Advanced features still work
     •  Preferred method
  52. Best Practices: VMFS or RDM
     Performance is similar
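     If an RDM is chosen, the mapping file can be created with vmkfstools; a minimal sketch, where the device ID and datastore path are placeholders:

       # Create a virtual-compatibility RDM mapping file (placeholders shown)
       vmkfstools -r /vmfs/devices/disks/naa.<device-id> \
           /vmfs/volumes/<datastore>/<vm>/oradata-rdm.vmdk
       # Use -z instead of -r for physical compatibility mode, then attach
       # the resulting .vmdk to the VM as an existing disk.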
  53. Databases: Typical I/O Architecture
     [Diagram: database cache issuing 2k/8k/16k reads and writes through the file system and FS cache; DB log writes of 512 bytes up to 1 MB]
  54. Know Your I/O: Use a Top-Down Latency-Analysis Technique
     •  A = application latency (measured in the application)
     •  R = Windows Perfmon physical disk "Disk secs/transfer"
     •  S = Windows physical disk service time
     •  G = guest latency (at the virtual SCSI layer)
     •  K = ESX kernel latency (VMkernel file system)
     •  D = device latency
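     On the ESX side, the G/K/D split maps onto counters that esxtop reports per device; a hedged sketch of how to read them (column names per common esxtop usage, thresholds are rules of thumb, not hard limits):

       # In esxtop, press 'd' (adapter), 'u' (device), or 'v' (per-VM disk)
       # and watch the latency columns:
       #   DAVG/cmd  - average device latency (D)
       #   KAVG/cmd  - time spent in the VMkernel (K)
       #   GAVG/cmd  = DAVG + KAVG, what the guest sees (G)
       # Sustained DAVG above ~20 ms or KAVG above ~2 ms usually
       # warrants investigation.
       esxtop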
  55. Checking for Disk Bottlenecks
     Disk latency issues are visible from Oracle statistics
     •  Enable statspack
     •  Review the top latency events (sample report below)
     Top 5 Timed Events                                     % Total
     Event                        Waits       Time (s)      Ela Time
     ---------------------------  ----------  -----------   --------
     db file sequential read           2,598        7,146      48.54
     db file scattered read           25,519        3,246      22.04
     library cache load lock             673        1,363       9.26
     CPU time                          2,154          934       7.83
     log file parallel write          19,157          837       5.68
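     For reference, a minimal hedged sequence for taking statspack snapshots and producing a report like the one above; the script paths are the Oracle defaults, and spcreate.sql is a one-time install step:

       # Run once to install statspack (prompts for tablespace etc.):
       #   sqlplus / as sysdba @?/rdbms/admin/spcreate.sql
       sqlplus perfstat/<password> <<'EOF'
       EXEC statspack.snap;
       -- ... run the workload between two snapshots, then:
       EXEC statspack.snap;
       @?/rdbms/admin/spreport.sql
       EOF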
  56. Oracle File System: Sync vs. DIO
  57. Oracle: DIO vs. RAW
  58. Direct I/O
     A guest-OS-level option for bypassing the guest cache
     •  Uncached access avoids multiple copies of data in memory
     •  Avoids read/modify/write cycles modulo the file-system block size
     •  Bypasses many file-system-level locks
     Enabling direct I/O for Oracle on Linux:
     # vi init.ora
     filesystemio_options="setall"
     Enabling direct I/O for MySQL (InnoDB) on Linux:
     # vi my.cnf
     innodb_flush_method=O_DIRECT
     Check:
     # iostat 3
     (Check that the I/O size matches the DB block size…)
  59. Asynchronous I/O
     An API for a single-threaded process to launch multiple outstanding I/Os
     •  Multi-threaded programs could just use multiple threads
     •  Oracle databases use this extensively
     •  See aio_read(), aio_write(), etc.
     Enabling AIO on Linux:
     # rpm -Uvh aio.rpm
     # vi init.ora
     filesystemio_options="setall"
     Check:
     # ps -aef | grep dbwr
     # strace -p <pid>
     io_submit()…   <- check for io_submit in the syscall trace
  60. Picking the Size of Each VM
     •  vCPUs from one VM stay on one socket*
     •  With two quad-core sockets, there are only two positions for a 4-way VM
     •  1- and 2-way VMs can be arranged many more ways on quad-core sockets
     •  Newer ESX schedulers use the placement options more efficiently (relaxed co-scheduling)
  61. Use Large Pages
     A guest-OS-level option to use large MMU pages
     •  Maps the large SGA region with fewer TLB entries
     •  Reduces MMU overheads
     •  ESX 3.5 uniquely supports large pages!
     Enabling large pages on Linux:
     # vi /etc/sysctl.conf     (add the following lines:)
     vm/nr_hugepages=2048
     vm/hugetlb_shm_group=55
     # cat /proc/meminfo | grep Huge
     HugePages_Total:  1024
     HugePages_Free:    940
     Hugepagesize:     2048 kB
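     Two hedged follow-ups for Oracle specifically: the SGA only uses huge pages if the oracle user is allowed to lock that much memory, and a falling HugePages_Free after instance startup confirms the pages are actually in use. The memlock value below is illustrative:

       # Allow the oracle user to lock enough memory for the SGA (KB)
       # vi /etc/security/limits.conf
       oracle  soft  memlock  4194304
       oracle  hard  memlock  4194304
       # After starting the instance, verify huge pages are being consumed:
       grep Huge /proc/meminfo   # HugePages_Free should drop below Total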
  62. Large Pages Increase Performance
     •  Increases TLB memory coverage: removes TLB misses, improves efficiency
     •  Improves performance of applications that are sensitive to TLB miss costs
     •  Configure the OS and application to leverage large pages; large pages will not be enabled by default
     [Chart: measured performance gain (%), on a 0-12% scale]
  63. Linux Versions
     Some older Linux versions use a 1 kHz timer to optimize for desktop-style applications
     •  There is no reason to use such a high timer rate for server-class applications
     •  The timer rate on 4-vCPU Linux guests is over 70,000 interrupts per second!
     Use RHEL 5.1 or later, or the latest tickless-timer kernels
     •  Install the 2.6.18-53.1.4 kernel or later
     •  Put divider=10 at the end of the kernel line in grub.conf and reboot (see the example below), or rely on the default behavior of a tickless kernel
     •  All the RHEL clones (CentOS, Oracle EL, etc.) work the same way
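     For reference, a hedged example of where divider=10 lands in grub.conf; the kernel version and root device shown are illustrative:

       # /boot/grub/grub.conf (kernel line only; paths/devices illustrative)
       kernel /vmlinuz-2.6.18-53.1.4.el5 ro root=/dev/VolGroup00/LogVol00 divider=10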
  64. Monitor and Control Service Levels with AppSpeed
     Example policies (SLA): 99.9% uptime, 100 ms latency, .01% error rate
     •  Automatically map services (end-user, web, app, and DB tiers) to infrastructure
     •  Monitor service levels and identify bottlenecks
     •  Size infrastructure dynamically to meet SLAs cost-effectively
  65. Performance Whitepapers
     •  VMware vCenter Update Manager Performance and Best Practices
     •  Microsoft Exchange Server 2007 Performance on VMware vSphere
     •  Virtualizing Performance-Critical Database Applications in VMware vSphere
     •  Performance Evaluation of Intel EPT Hardware Assist
     •  SAP Performance on VMware vSphere
     •  A Comparison of Storage Protocol Performance
     •  Microsoft SQL Server Performance
     •  Fault-Tolerance Performance
     •  Overview of Memory Management in VMware vSphere
     •  Scheduler Improvements in VMware vSphere
     •  Comparison of Storage Protocols with Microsoft Exchange 2007
     •  Networking Performance and Scalability in VMware vSphere
     •  Performance Analysis of the VMware VMFS File System
     •  Performance Impact of PVSCSI
     •  vSphere Performance Best Practices
  66. For more info: www.vmware.com/oracle
     Richard McDougall, Chief Performance Architect
     © 2009 VMware Inc. All rights reserved
