7. The Stats
Three half-rack Exadata clusters with High Capacity drives
Cluster #1
36 Dev/Test Databases
Cluster #2
11 Production Databases
Cluster #3
13 Dev/Test Databases
6 Standby Databases
Still more databases to come…
www.enkitec.com 7
8. Why Consolidate?
Primary drivers for consolidation center around cost savings:
• Reduced Oracle software licensing
• Fewer 3rd-party products (backup agents, ETL tools, etc.)
• More efficient use of system resources
• Soft costs:
  – Floor space
  – Power & cooling
  – Administration and staffing costs (training, etc.)
10. A Simple Consolidation Example
Let's say we have the following databases to migrate onto Exadata:
Cluster-level utilization
For example, the first row should read:
Database 'A' requires 4 CPUs and will run on nodes 1 and 2 (2 CPUs each)
11. A Simple Consolidation Example
Let's say we have the following databases to migrate onto Exadata:
Per-compute-node utilization
For example, the first row should read:
Database 'A' requires 4 CPUs and will run on nodes 1 and 2 (2 CPUs each)
16. Provisioning Worksheet
• Capacity planning
• Communication tool
• Hand-off

Utilization = Requirements / Capacity

A supplement to the existing Exadata installation tools:
• Site planning checklist
• Configuration Worksheet
• Exadata Configurator sheet
• CheckIP
• OneCommand
17. Capacity
Node count: 2 = quarter rack, 4 = half rack, 8 = full rack
CPU speed: SPECint_rate2006 (http://goo.gl/doBI5)
CPU_COUNT, threads, & cores: http://goo.gl/CunHN
Memory: 96 to 144GB per node (with the memory expansion, the frequency of the memory DIMMs drops from 1333 MHz to 800 MHz)
Disk space will also depend on:
• ASM redundancy
• DATA/RECO allocation (http://goo.gl/I3fjn)
Table compression factors:
• Query Low (4x)
• Query High (6x)
• Archive Low (7x)
• Archive High (12x)
Offload factor: Smart Scans!
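The compression factors translate directly into effective capacity. A quick sketch of the arithmetic, using the nominal ratios from the slide (real ratios vary with the data):

```python
# Nominal Hybrid Columnar Compression ratios from the slide
compression = {
    "Query Low": 4,
    "Query High": 6,
    "Archive Low": 7,
    "Archive High": 12,
}

def physical_tb(logical_tb, level):
    """Disk space actually consumed by logical_tb of table data at a given compression level."""
    return logical_tb / compression[level]

# e.g. 12TB of historical data compressed with Query High
print(round(physical_tb(12, "Query High"), 1))  # 2.0
```

In other words, compressing the big tables is effectively a capacity multiplier when sizing the DATA disk group.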
18. CPU Core Comparison

Source: the host being migrated
Destination: Sun Fire X4170 M2, X5670 @ 2.93GHz

chip efficiency factor = source SPEC rating / Exadata SPEC rating
                       = 16 / 26
                       = 0.6154
(the multiplier for converting source cores into equivalent database machine cores)

EXA cores requirement = source host cores * utilization * chip efficiency factor
                      = 32 * 0.7 * 0.6154
                      = 13.78
(utilization: how much of the source CPU cores is actually being used)

Then apply the offload factor (the amount of CPU resources that will be offloaded to the storage cells):
                      = 13.78 * 0.5
                      = 6.89
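The sizing arithmetic above can be sketched as a small function. The SPEC ratings, utilization, and offload factor here are the example values from the slide; for a real migration you would look up the SPECint_rate2006 figures for the actual source and destination chips:

```python
def exadata_cores_required(source_cores, utilization,
                           source_spec, exadata_spec,
                           offload_factor=1.0):
    """Estimate Exadata compute cores needed for a migrated database.

    utilization    -- fraction of the source host cores actually in use
    offload_factor -- fraction of CPU work remaining on the compute nodes
                      after Smart Scan offload to the storage cells
    """
    chip_efficiency = source_spec / exadata_spec        # e.g. 16 / 26
    cores = source_cores * utilization * chip_efficiency
    return cores * offload_factor

# Example from the slide: 32-core source at 70% utilization,
# with half the CPU work offloaded to the cells.
print(round(exadata_cores_required(32, 0.7, 16, 26, offload_factor=0.5), 2))  # 6.89
```

As the speaker notes stress, the offload factor is an informed guess based on the workload, not something you can calculate, so treat the result as a planning estimate, not a hard number.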
19. The Perfect Storm (Peoplesoft HR)

Month-end processing
+ Weekly time entry
+ SQL plan change
------------------------------------
Uh-oh!
20. CPU Allocation

Node 1: 4 instances, 47% CPU used, 49% mem used
Node 2: 5 instances, 75% CPU used, 66% mem used
Node 3: 4 instances, 47% CPU used, 71% mem used
Node 4: 3 instances, 18% CPU used, 54% mem used

DB Uniq Name   DB Name    Instances
BIPRDDAL       biprd      P P
DBFSPRD        DBFSPRD    P P P P
HCMPRDDAL      hcmprd     P P
MTAPRD11DAL    mtaprd11   P P
PAPRDDAL       paprd      P P
RMPRDDAL       rmprd      P P
dbm            dbm        F F F F
FSPRDDAL       fsprd      P P

P = Preferred, F = Failover
23. Instance Activity – HCMPRD2 (Node 2)

Problem: a single SQL statement overwhelming CPU resources.
HCMPRD caged at 12 CPUs.
SQL Profile installed to lock in the good plan.
25. Overlapping workloads of three databases across 3 nodes: BIPRD, HCMPRD, and MTAPRD
(activity charts for Node 1 through Node 4)
27. Notice what happens to CPU waits and the system load average when this report is run.
37. Smart Scan in action: the cells are scanning 1TB but only returning 144GB, for each of the highlighted row sources below.
38. The databases on other nodes see the contention as "System I/O".
Without I/O resource management, even critical processes are affected (CKPT, LGWR, …).
39. Inter-database IORM Plan (only kicks in when needed)

Side benefit (automatic when IORM is enabled): I/O requests from critical processes like CKPT, LGWR, and LMON get priority automatically. Without IORM, I/O requests from these important processes receive the same priority as any other process.
Welcome. I'll mainly talk about server consolidation of a mixed workload environment, specifically on a half-rack Exadata, and I'll also cover the methodology and tools, as well as lessons learned. The company, by the way, is a large real estate investment company that consolidated their Peoplesoft and OBIEE environments.
Just a brief introduction of myself..
Why do we consolidate? The primary driver is cost savings. When you consolidate, you reduce your total footprint on everything!
(The idea behind all of this:) Monitoring at the cluster level and at the node level is critical for managing resources in a consolidated environment. The scenario is 7 databases that will be spread out across the 4 nodes.
Let's say we have the following DBs to migrate onto Exadata. I call this table "the node layout", and it is read as: Database 'A' requires 4 CPUs and will run on nodes 1 and 2 (2 CPUs each). You have 4 nodes with 7 databases spread out. Now you want to be able to see the "cluster level utilization": you simply sum up all of the core requirements and divide by the total number of cores across the cluster.
BUT more important is seeing the per-compute-node utilization, because you may have one node that's 80% utilized while the rest of the nodes are in the 10% range.
Here's another view of the node layout, where we distribute the CPU cores of the instances based on their node assignments. Each block on the left side is one CPU core. That's 24 cores per node, which counts the threads as cores; that is based on the CPU_COUNT parameter and /proc/cpuinfo. You take the number of CPUs from CPU_COUNT when you do instance caging, and to be consistent with the monitoring in OEM and AWR. So at the cluster level the utilization is 29.2%, while per compute node this is the utilization.
Now what we don't want to happen: if we change the node layout and assign more instances to node 2 while keeping the same CPU core requirements across the databases, the cluster-level utilization stays the same, BUT per compute node you end up with node 2 at 80% utilization while the rest are pretty much idle. So we created a bunch of tools where we can easily create a provisioning plan, play around with scenarios, and audit the results; that's what Randy will be introducing. BTW, I like the part of Cary's interview where he mentioned that even with a 135-lane highway you will still have the same traffic problem as with a 35-lane highway if you saturate it with a bunch of cars. So a capacity issue on small hardware can also be an issue on big hardware, and also on Exadata. This slide is similar to monitoring the utilization of the whole highway as well as the per-lane utilization of that highway.
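The cluster-level versus per-node arithmetic described above can be sketched as follows. The layout here is a hypothetical example rather than the actual deployment; 24 cores per node counts threads as cores, per CPU_COUNT:

```python
# Hypothetical node layout: database -> {node: cores assigned on that node}
layout = {
    "A": {1: 2, 2: 2},   # database 'A' needs 4 cores, split across nodes 1 and 2
    "B": {2: 4},
    "C": {3: 4},
    "D": {3: 2, 4: 2},
}
CORES_PER_NODE = 24      # threads counted as cores (CPU_COUNT)
NODES = 4

# Cluster-level utilization: total cores required / total cores in the cluster
total_required = sum(sum(assignment.values()) for assignment in layout.values())
cluster_util = total_required / (CORES_PER_NODE * NODES)

# Per-compute-node utilization: cores required on each node / cores per node
node_util = {
    node: sum(a.get(node, 0) for a in layout.values()) / CORES_PER_NODE
    for node in range(1, NODES + 1)
}

print(f"cluster: {cluster_util:.1%}")
for node, util in sorted(node_util.items()):
    print(f"node {node}: {util:.1%}")
```

The point of computing both: a comfortable cluster-level number can hide one saturated node, which is exactly the 135-lane-highway problem above.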
The three legs of the consolidation process: gather requirements, provision resources, audit results. Utilization metrics from the audit should be fed back into the provisioning process for re-evaluation.
It's a capacity planning tool where we make sure that the 3 basic components (CPU, memory, I/O) do not exceed the available capacity. Oracle has created a bunch of tools to standardize the installation of Exadata, which helps avoid mistakes and configuration issues. BUT the problem is: once all of the infrastructure is in place, how do you get to the end state where all of the instances are up and running? This tool bridges that gap to get you to that end state. And since it's an Excel-based tool, it's pretty flexible and you can hand it off to your boss as documentation of the instance layout.
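The worksheet's core check, that per-component requirements stay under capacity, can be sketched like this. The capacity and database figures below are illustrative, not taken from the actual worksheet:

```python
# Per-node capacity (illustrative X2-2 figures: 24 threads, 96GB RAM)
capacity = {"cpu_cores": 24, "memory_gb": 96}

def check_node(requirements, capacity):
    """Return per-component utilization for the databases assigned to one node."""
    report = {}
    for component, available in capacity.items():
        used = sum(db.get(component, 0) for db in requirements)
        report[component] = used / available
    return report

# Hypothetical databases assigned to one compute node
dbs = [
    {"cpu_cores": 8, "memory_gb": 32},
    {"cpu_cores": 6, "memory_gb": 40},
]
report = check_node(dbs, capacity)
for component, util in report.items():
    flag = "  <-- over capacity!" if util > 1 else ""
    print(f"{component}: {util:.0%}{flag}")
```

An I/O column would work the same way once you have an I/O requirement figure per database; the worksheet simply repeats this check for every node.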
Now we move on to the capacity. There's a section on the provisioning worksheet where you input the capacity of the Exadata you currently have. For the node count you put 2, 4, or 8. Then we get the SPECint_rate equivalent of the Exadata processor so you'll be able to compare the speed of the Exadata CPU against your source servers. As I explained earlier, we are counting threads as cores, and I have an investigation of that available at this link. Each node has 96GB of memory. Disk space is dependent on ASM redundancy and DATA/RECO allocation. The table compression factor lets you gain more disk space as you compress the big tables. The OFFLOAD FACTOR is the amount of CPU resources that will be offloaded to the storage cells. This is art; it is NOT something you can calculate. It's not math, it's like black magic. We have done a bunch of Exadata engagements, so when we see a workload we can guess what we think the offload percentage is. That definitely affects the CPU, but it's not something you can scientifically calculate.
So there's a source platform and a destination platform, which is the Exadata, and you are transferring a bunch of databases from different platforms. You have to get the equivalent number of cores of the source system on Exadata. What we do: we find the type, speed, and number of CPU cores of the source platform, then use SPECint comparisons to find the equivalent number of Exadata cores needed. Of course, the CPU core capacity will depend on the Exadata that you have (quarter, half, full, multiple racks). Let me introduce you to some simple math: the chip efficiency factor. That will be your multiplier for converting source cores into equivalent Exadata cores, and we use it to get the "Exa cores requirement" (let me explain the formula). Now if it's a DW database you will probably be doing a lot of offloading, so that's where we factor in the offload factor. And I'm pretty sure that if you attended Tim's presentation or the tuning class, you will get a higher offload factor here ;)
The first event came about on a busy Friday, right in the middle of month-end processing. During month-end processing the 4 primary business databases become extremely busy and our configuration is put to the test.
Just a quick review of the instance/node layout. Notice that the HR database (HCMPRD) shares node 2 with BIPRD (and two other smaller ones).
Our first hint that there was a problem was the Oracle Load Map, which showed 66 active sessions on the HR database, all waiting for CPU! The complaints were: people could not enter their time (HR), and OBIEE was running painfully slow. When we tried to log in to the database server, it was so busy that we could hardly get logged in.
We went to the Top Activity page for HCMPRD and found that the problem was with one particular SQL statement. We knew that we probably had a SQL statement with a bad plan, but we needed to take the pressure off of the CPU before we could do anything.
Our first course of action was to implement instance caging to reduce the load on CPU resources and lessen the impact on the other databases sharing that node. When we confined the instance to 12 CPUs, notice what happened to the operating system run queue. Once we limited the instance to 12 CPUs, we went about the task of investigating what went wrong with the SQL statement. We found that the execution plan *had* in fact changed. We used a SQL Profile to lock in the good execution plan; now look at what happened to the active session count when we implemented the profile.
During the course of troubleshooting this issue, we discovered that the OBIEE application would fire off between 15 and 25 SQL queries with the click of a single button.
Unlike the other databases, OBIEE was a new application without any utilization history, so we didn't know what to expect from it. BIPRD still contends with HCMPRD, FSPRD, and MTAPRD when it runs inefficient queries with cartesian joins, which use a lot of PGA memory. The problem can be so extreme that we run out of memory and the system begins to swap heavily. Swapping causes high wait I/O and load average, and nothing will cripple a system faster than running out of memory. Swapping happens outside of the Oracle kernel, so the database doesn't know anything is wrong; all it *sees* are high waits for I/O. And since swapping happens outside of the database, instance caging does not help. This is a Tableau graph using metrics from Karl Arao's AWR Toolkit.
We had to be drastic in our solution, so we segregated OBIEE from the other databases and ran it as a standalone database. The advantage is that we isolated our "anti-social" database on a node by itself (node 3). Now when the workloads overlap, these databases won't have to compete for CPU and memory resources. This does *not* help us in terms of I/O, but we'll talk about that in our next story.
This is the performance page whenever that inefficient batch of SQLs ran.
Looking at the AWR data, we find spikes in PGA, wait I/O, and load average.
And this is the inefficient SQL. It's a SQL that runs for 2 minutes, does a tiny bit of smart scans, and consumes 1.6GB of PGA. That doesn't sound too terribly bad, but when you execute 60 of these at the same time they can quickly consume all the memory on the server (60 × 1.6GB ≈ 96GB, the entire memory of a compute node).
So we brought in Karen Morton and Martin Paynter to see if there was any way to tune the SQL. This was when we discovered that a single report could kick off as many as 26 independent SQL queries (another ah-hah moment). The tuning greatly reduced the amount of PGA memory needed for the report. Martin also helped us configure OBIEE so that it would manage the number of SQL statements sent to the database. This graph shows what the load profile looked like when we were finished.
IORM is an Exadata exclusive!
We got an email from a DBA saying that his database refresh, which usually takes 40 minutes, had taken 12 hours. We immediately investigated for I/O contention issues.
We saw that this BIUAT database is the only active database that is doing heavy I/O.
Then we confirmed that it is doing sustained smart scans
From v$sql we saw about 31 active sessions; most of which were doing smart scans
And looking at the SQL, it is scanning 4TB of data and returning 400GB of it. If you run this kind of SQL, you will saturate the Exadata storage cells.
You can see the details of the smart scans in the row source operations of the SQL: here each row source is scanning 1TB and returning 144GB.
And this is what the Top Activity page of the other databases looks like when they encounter an I/O contention issue. You'll see here that the database is having high System I/O waits on the critical background processes: CKPT, LGWR, DBWR, LMON.
This led us to a simple IORM plan that just caps the BIPRD database, the "anti-social" database, at level 1, with the OTHER group (the rest of the databases on the cluster) at level 2. This decision is based on analysis of the workload of the databases doing the smart scans. BIPRD will still get 100% if the other databases are idle, and will pull back to 30% when databases from the OTHER group need I/O bandwidth (and vice versa).
SYS@biprd2> show parameter db_uniq

NAME             TYPE    VALUE
---------------- ------- ------------------------------
db_unique_name   string  BIPRDDAL

# main commands
alter iormplan dbPlan=( -
  (name=BIPRDDAL, level=1, allocation=30), -
  (name=other, level=2, allocation=100));
alter iormplan active
list iormplan detail
list iormplan attributes objective
alter iormplan objective = auto

# list
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'

# implement
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\( \(name=BIPRDDAL, level=1, allocation=30\), \(name=other, level=2, allocation=100\)\);'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan active'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective = auto'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'

# revert
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan catPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan inactive'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'