Checklist: Database Performance Issues
Markus Flechtner
Trivadis – Our mission.
Checklist: Performance Issues
2 17.05.2022
Trivadis makes IT easier:
We provide significant support for our
customers in the smart use of data in
the digital age.
We reduce complexity for our
customers through outstanding
technological expertise.
We take over key tasks in the existing
and future IT of our customers.
Trivadis – What sets us apart.
We understand the business processes and economic challenges of our customers and
support them through IT consulting and in the development of comprehensive IT solutions.
Our proven products, developed by Trivadis, are based on in-depth expertise in the key
technologies offered by Microsoft, Oracle and Open Source.
That sets us apart from the competition.
A selection of awards we have received
Trivadis – Our key figures
Founded in 1994
15 Trivadis locations with
more than 650 employees
Sales of CHF 111 million (EUR 96
million)
Over 250 Service Level Agreements
More than 4000 training participants
Research and development budget: CHF
5.0 million
More than 1900 projects each year with
over 800 customers
Financially independent and sustainably
profitable
About me: Markus Flechtner
Principal Consultant, Trivadis, Duesseldorf/Germany, since April 2008
Working with Oracle since the 1990s
– Development (Forms, Reports, PL/SQL)
– Support
– Database Administration
Focus
– Oracle Real Application Clusters
– Database Upgrade and Migration Projects
Teacher
– O-RAC – Oracle Real Application Clusters
– O-NF-DBA – Oracle Database New Features for the DBA
– O-MT – Oracle Multitenant
– PG4ORA – PostgreSQL for Oracle DBAs
Blog:
https://markusdba.net/
@markusdba
Technology on its own won't help you.
You need to know how to use it properly.
Agenda
1. Specify the Problem
2. Performance Analysis Methodology
3. Tools: Diagnostic Pack & Statspack
4. Tools: Tuning Pack
5. Common Measures
6. More Information
Specify the problem
Specify the problem (1)
What is slow?
– "everything"
– A single query
– One or more parts of the application (e.g. a specific batch job)
When did it happen?
– Permanent, ongoing
– At specific times (specific days, day of week, hours, ..)
– At irregular times
– Can you reproduce it at will?
Is there a response time specification in an SLA that was or is currently being violated?
– If not, then there is no problem
What are the current and the expected response time?
Specify the problem (2)
How do you evaluate the performance problem?
– Poor end-user response time
– Long job duration
– Timeout
– Irregular response time
– SLA not fulfilled
– Database Call hanging
– Other ..
Was anything changed?
What other activities were/are occurring when the problem occurred?
Performance Analysis Methodology
Performance Analysis Methodology (1)
The methodology flow chart (Source: Trivadis training O-TUN, "Oracle Database Tuning, Performance Analysis Methodology") can be summarized as follows:
– Check database key performance indicators (CPU, IO rate, DB time, redo, transaction rate, logons, ...) and check server resources (memory, swap, CPU user/kernel); if it is not a database problem, check/tune other databases
– If the problem occurred in the past: analyse an AWR report and the ADDM findings (Diagnostic Pack) or analyse a Statspack report
– If the problem occurs now: analyse an ASH report (Diagnostic Pack) or perform SQL Trace
– Identify top wait events, check memory advisories, identify hot segments, identify top/involved sessions and top/important SQL
– Check optimizer statistics/configuration and identify recent changes
– Identify the bottleneck: inefficient SQL, slow IO, high CPU usage, memory, locking, parsing, high executions, many logons
– Detail analysis (bad indexing, hot segments, ...), then perform SQL tuning
Performance Analysis Methodology (2)
Historical Problem
– Run AWR for the period
– Run ADDM for the period
– Run AWR compare report to compare the period in question with a period with "good performance"
Current Problem
– Check if there is a "Real-Time-ADDM"
– Check ASH
– Check Real-Time-SQL-Monitoring
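For a current problem, an ASH report for the last few minutes can also be generated directly from SQL*Plus with DBMS_WORKLOAD_REPOSITORY. A sketch — the DBID and instance number are placeholder values, and parameter names should be checked against your database release:

```sql
-- ASH report in text format for the last 30 minutes
-- (DBID 3693619282 and instance 1 are placeholders)
SQL> SELECT output FROM TABLE
       (DBMS_WORKLOAD_REPOSITORY.ASH_REPORT_TEXT
          (l_dbid     => 3693619282,
           l_inst_num => 1,
           l_btime    => SYSDATE - 30/1440,
           l_etime    => SYSDATE));
```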
Enterprise Manager Cloud Control
OEM is very helpful for generating performance analysis reports (ADDM, AWR, ASH, ..)
SQL Developer
If you don't have OEM Cloud Control available, SQL Developer may help to generate
performance related reports
Use SQL*Plus
If you have neither OEM Cloud Control nor SQL Developer (nor TOAD) at hand, you can
use the PL/SQL packages in the database
SQL> SELECT output FROM TABLE
(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(3693619282,1,566,567));
SQL> SELECT output FROM TABLE (DBMS_WORKLOAD_REPOSITORY.AWR_DIFF_REPORT_TEXT
(3693619282,1,566,567,3693619282,1,568,569));
Tools: Statspack
Statspack Introduction
Statspack is a set of SQL, PL/SQL, and SQL*Plus scripts
– All scripts are located in $ORACLE_HOME/rdbms/admin
Statspack allows collection, storage, and viewing of performance data
Statspack separates the data collection from the report generation
The performance data is collected when a snapshot is taken
A snapshot is a set of statistics gathered at a single time and is identified by the snapshot id;
each time a new collection is taken, a new snap_id is generated
All instances in a RAC environment have to be configured separately
Statspack Installation
Statspack has to be installed by a DBA on a per database instance basis
– The installation script spcreate.sql creates the repository schema PERFSTAT with a number of tables and the
STATSPACK package
– By default the repository is placed in the SYSAUX tablespace
A batch installation is also possible
SQL> @?/rdbms/admin/spcreate
SQL> connect / as sysdba
SQL> define default_tablespace='sysaux'
SQL> define temporary_tablespace='temp'
SQL> define perfstat_password='<passwd>'
SQL> @?/rdbms/admin/spcreate
SQL> undefine perfstat_password
Statspack Levels
The amount of performance data gathered by the package is controlled by specifying a
snapshot level
Snapshot levels
– level 0: general performance statistics
– DEFAULT level 5: level 0 + SQL statements in library cache exceeding one of the
predefined thresholds
– level 6: level 5 + SQL plans and SQL plan usage of statements gathered in level 5
– level 7: level 6 + Segment level statistics exceeding one of the predefined thresholds
– level 10: level 7 + Parent and Child latches
Trivadis recommends Statspack level 7, as it allows reporting hot segments and includes
SQL plans
– The creation of a snapshot may take a few seconds and consume 50–100 million
logical reads
Statspack Collection
Can be performed manually by calling the procedure SNAP
In order to automate the snapshot collection a batch job is needed
– on busy systems such a job may hang - therefore it is important to monitor its runtime and stop it if it does not
finish within a short time period; this can be performed in a cron job or by two DBMS_SCHEDULER jobs
• the first job calls the procedure SNAP and raises an event if its max_run_duration exceeds a predefined time
interval (e.g. 5 minutes)
• such an event (JOB_OVER_MAX_DUR) is consumed by the second job that stops the first job
– old snapshot data should be deleted regularly (e.g. right before or after the collection)
SQL> EXEC perfstat.statspack.snap
SQL> EXEC perfstat.statspack.purge(I_PURGE_BEFORE_DATE=>sysdate-7);
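The two-job scheme described above can be sketched with DBMS_SCHEDULER. This is a sketch under assumptions: job names are placeholders, and the second job (which subscribes to the scheduler event queue and calls STOP_JOB when it consumes a JOB_OVER_MAX_DUR event) is omitted for brevity:

```sql
BEGIN
  -- Job 1: take a Statspack snapshot every 15 minutes;
  -- raise a scheduler event if a run exceeds 5 minutes
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PERFSTAT.SNAP_JOB',        -- placeholder name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN perfstat.statspack.snap; END;',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=15',
    enabled         => FALSE);
  DBMS_SCHEDULER.SET_ATTRIBUTE('PERFSTAT.SNAP_JOB',
    'max_run_duration', INTERVAL '5' MINUTE);
  DBMS_SCHEDULER.SET_ATTRIBUTE('PERFSTAT.SNAP_JOB',
    'raise_events', DBMS_SCHEDULER.JOB_OVER_MAX_DUR);
  DBMS_SCHEDULER.ENABLE('PERFSTAT.SNAP_JOB');
END;
/
```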
Statspack Session Collection
In addition to instance-level data the procedure SNAP can collect session data of one selected
session per snapshot
– session data includes session wait events, session time model statistics and session statistics
(V$SESSTAT)
– session data is shown in reports if the session ID and its serial# match between the selected
snapshot IDs
SQL> EXEC perfstat.statspack.snap(i_session_id => 21)
SQL> SELECT snap_id, snap_level, session_id, serial#
FROM perfstat.stats$snapshot ORDER BY snap_id;
SNAP_ID SNAP_LEVEL SESSION_ID SERIAL#
---------- ---------- ---------- ----------
3151 7 0 0
3152 7 21 3434
3153 7 21 3434
3154 7 0 0
Statspack Parameters
Altering default parameters and thresholds can be performed by
MODIFY_STATSPACK_PARAMETER
– changed parameters apply only for the current database ID
SQL> EXEC perfstat.statspack.modify_statspack_parameter -
( i_snap_level => 7, -
i_disk_reads_th => 10000, -
i_buffer_gets_th => 1000000, -
i_seg_phy_reads_th => 10000, -
i_seg_log_reads_th => 1000000 -
);
SQL> SELECT dbid, snap_level, disk_reads_th, buffer_gets_th
FROM perfstat.stats$statspack_parameter;
DBID SNAP_LEVEL DISK_READS_TH BUFFER_GETS_TH
---------- ---------- ------------- --------------
3693619282 7 10000 1000000
Statspack Baselines
Snapshot data worth keeping can be marked as baselines and will not be purged by the
purge procedure
– The procedure MAKE_BASELINE marks snapshot IDs as baselines, but it does not perform any consistency
checks on the snapshots requested to be baselined
– The procedure CLEAR_BASELINE removes the baseline marker
SQL> EXEC perfstat.statspack.make_baseline -
( i_begin_snap => 3151, -
i_end_snap => 3152 -
);
SQL> SELECT snap_id, snap_level, baseline
FROM perfstat.stats$snapshot WHERE snap_id > 3150;
SNAP_ID SNAP_LEVEL BASELINE
---------- ---------- --------
3151 7 Y
3152 7 Y
3153 7
Statspack Reports
Statspack allows the generation of performance reports
– instance reports (spreport.sql) - covering all aspects of instance performance during a time interval defined by two
snapshot IDs
– SQL reports (sprepsql.sql) - for a specific SQL statement identified by its HASH_VALUE during one time interval
A batch report generation is also possible
SQL> @?/rdbms/admin/spreport
...
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap:
SQL> connect / as sysdba
SQL> define begin_snap=3151
SQL> define end_snap=3153
SQL> define report_name=sp_3151_3153
SQL> @?/rdbms/admin/spreport
Statspack Reports
A report can only be generated if the specified time period does not span an instance shutdown
The time units used in reports are specified in the column headings of each timed column
– (s) - a second
– (cs) - a centisecond – a 100th of a second
– (ms) - a millisecond – a 1000th of a second
– (us) - a microsecond – a 1000000th of a second
Some aspects of the instance report can be configured by altering the script sprepins.sql
– num_rows_per_hash (default 4) - number of rows of text per SQL
– top_pct_sql (default 1.0%) - only SQLs exceeding this percentage of resources used are
shown on reports
– top_n_segstat (default 5) - number of hot segments to be displayed
Statspack Report Sections
 Summary Page
 Load Profile
 Instance Efficiency
 Top 5 Wait Events
 Host CPU, Instance CPU and
Memory
 Time Model Statistics
 Wait Events and Wait Event
Histograms
 Top SQL
 System Statistics
 OS Statistics
 Session Statistics (if exist)
 Session Wait Events
 Time Model Statistics
 Session Statistics
 IO Stats by Function
 Tablespace and File IO
 Buffer Cache and SGA Advisories
 PGA Memory Advisory
 Top Process Memory
 Enqueue Activity
 Latch Activity
 Mutex Sleeps
 Top Segments by (4 categories)
 Shared Pool, Java Pool, SGA Target
Advisories
 SGA Memory Summary
 Instance Parameters
Performance Analysis with Statspack Reports
The methodology when performing an analysis based on a Statspack report is very similar to the
methodology used for the analysis using Dynamic Performance Views
The detailed analysis steps and their corresponding report sections:
– “Check database KPI”: Load Profile and Time Model Statistics
– “Identify top wait events”: Top 5 Wait Events
– “Check memory advisories”: Memory Advisories
– “Identify hot segments”: Segments by…
– “Identify top SQL”: SQL ordered by …
– “Check server resources”: Host CPU, OS Statistics
– “Perform SQL tuning”: generate SQL reports for the top statements
Statspack Load Profile Section
 This section shows some important key performance indicators and allows quantifying
the workload
 Physical reads: values greater than 5000-10000 indicate a very high IO load
 Physical writes: values greater than 1000 indicate many data loads
 Logons: values greater than 1 may indicate connection pool/application interface problems
 Recursive Call %: high values may indicate a lot of PL/SQL
 Rollback per transaction %: values over 1% may indicate application errors
 In this section new important indicators are displayed (~ AWR)
 DB time(s) and DB CPU(s)
 W/A MB processed (SQL WorkArea MB processed)
| a huge DWH | |a huge multi- | | an OLTP |
| database | |appl. database| | database |
--------------- ---------------- --------------
Redo size: 11,656,147.85 502,627.91 96,893.15
Logical reads: 223,194.51 293,538.76 5,201.45
Block changes: 34,132.96 3,631.52 597.63
Physical reads: 36,669.91 1,892.25 48.58
Physical writes: 7,607.93 172.92 21.11
User calls: 377.54 3,459.70 194.43
Parses: 105.67 1,484.86 12.19
Hard parses: 2.19 32.76 4.39
Sorts: 67.11 1,011.47 4.90
Logons: 0.51 7.02 0.03
Executes: 12,282.65 7,001.83 119.98
Transactions: 34.22 106.53 12.35
Recursive Call%: 98.24 79.03 55.08
Rollback per
transaction %: 0.15 7.14 0.11
Rows per Sort: 8687.73 203.01 24.05
 Three different Load Profiles (only per second values)
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.83 Redo NoWait %: 100.00
Buffer Hit %: 90.80 In-memory Sort %: 99.96
Library Hit %: 100.20 Soft Parse %: 97.93
Execute to Parse %: 99.14 Latch Hit %: 99.34
Parse CPU to Parse Elapsd %: 37.19 % Non-Parse CPU: 99.85
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 88.55 92.26
% SQL with executions>1: 88.85 90.22
% Memory for SQL w/exec>1: 90.36 92.24
 All efficiency percentages should be “not far” from 100%
 Low “Parse CPU to Parse Elapsed” might indicate latch waits during parse operations
 Low “%SQL with executions>1” might indicate bad cursor sharing
Statspack Top 5 Timed Events Section
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
------------------------------- ----------- ----------- ------ ------
CPU time 3,083
db file scattered read 28,938,043 755 26 13.3
db file sequential read 107,055,270 505 5 8.9
read by other session 30,882,189 378 12 6.7
direct path read 17,681,836 209 12 6.0
----------------------------------------------------
 This section includes the total CPU time used by all instance processes (this
statistic may include wait time for CPU)
 Total Call Time is the sum of DB time and Background elapsed time
 Events are important if their percentage of Total Call Time is relevant
 The average wait for “read” events should not exceed 10ms
Statspack Top SQL Section
CPU CPU per Elapsd Old
Time (s) Executions Exec (s) %Total Time (s) Buffer Gets Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
19.17 1 19.17 46.5 134.47 5,597 3309535418
Module: SQL*Plus
select * from sh.sales s, sh.products p where p.prod_id=s.prod_i
d minus select * from sh.sales s, sh.products p where p.prod_id=
s.prod_id
...
Elapsed Elap per CPU Old
Time (s) Executions Exec (s) %Total Time (s) Physical Reads Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
134.47 1 134.47 107.9 19.17 56,791 3309535418
...
CPU Elapsd Old
Physical Rds Executions Rds per Exec %Total Time (s) Time (s) Hash Value
-------------- ------------ -------------- ------ -------- --------- ---------
56,791 1 56,791.0 102.6 19.17 134.47 3309535418
 Often a specific SQL statement appears in more than one category
 Old Hash Value corresponds to the column V$SQL.OLD_HASH_VALUE and is the
input parameter for Statspack SQL reports
Querying Statspack Repository
 … allows access to more than one interval
 “DB time (s/s)” means DB time in seconds per second and corresponds to “Average Active Sessions”
SELECT TO_CHAR(snap_date,'DD-MON HH24:MI:SS') snap_date,
ROUND((value-prev_value)/1000000,2) "DB time(s)",
ROUND((value-prev_value)/1000000/((snap_date-prev_date)*24*3600),2)
AS "DB time(s/s)" FROM
(SELECT sn.snap_time snap_date, s.value value,
LAG(sn.snap_time, 1, NULL) OVER (ORDER BY sn.snap_id) prev_date,
LAG(s.value, 1, NULL) OVER (ORDER BY sn.snap_id) prev_value
FROM v$statname n, stats$sys_time_model s, stats$snapshot sn
WHERE n.name='DB time' AND n.stat_id=s.stat_id
AND s.snap_id=sn.snap_id ORDER BY sn.snap_id);
SNAP_DATE DB time(s) DB time(s/s)
--------------- ---------- ------------
04-DEC 14:22:04 3456.63 .96
04-DEC 15:22:04 4028.95 1.12
04-DEC 16:22:04 4535.32 1.26
04-DEC 17:22:04 5208.32 1.45
Querying Statspack Repository
 Looking for periods with an average physical reads rate > 100
SELECT TO_CHAR(snap_date,'DD-MON HH24:MI:SS') snap_date,
(value-prev_value) "Physical Reads",
ROUND((value-prev_value)/((snap_date-prev_date)*24*3600),2) AS "Physical
Reads/s" FROM
(SELECT sn.snap_time snap_date, s.value value,
LAG(sn.snap_time, 1, NULL) OVER (ORDER BY sn.snap_id) prev_date,
LAG(s.value, 1, NULL) OVER (ORDER BY sn.snap_id) prev_value
FROM v$statname n, perfstat.stats$sysstat s,
perfstat.stats$snapshot sn
WHERE n.name='physical reads' AND n.statistic#=s.statistic# AND
s.snap_id=sn.snap_id ORDER BY sn.snap_id)
WHERE (value-prev_value)/((snap_date-prev_date)*24*3600) > 100;
SNAP_DATE Physical Reads Physical Reads/s
--------------- -------------- ----------------
04-DEC 14:22:04 1969884 547.19
04-DEC 18:22:04 1623744 451.04
Tools:
Automatic Workload Repository
AWR
Automatic Workload Repository (AWR)
AWR collects, processes, and maintains performance statistics for problem detection and self-
tuning purposes
AWR was initially based on Statspack, and there are still many similarities
– The snapshot concept
– Source tables and repository tables
– Reports
The differences are
– Automatic installation, snapshot creation and purging
– AWR repository is part of data dictionary
– AWR processes ASH data
– No explicit session data gathering
– AWR requires "Diagnostic Pack" license
Snapshots
A snapshot is a set of performance data for a time period
– Snapshots are stored in the SYSAUX tablespace by a special background process called
Manageability Monitor (MMON)
– By default, snapshots are performed every 60 minutes and are retained for 8 days (10g 7 days)
Manual snapshot creation is possible
– Default snap_level is ‘TYPICAL’
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT(flush_level=>'ALL');
SELECT snap_id, begin_interval_time, flush_elapsed, snap_level
FROM dba_hist_snapshot;
SNAP_ID BEGIN_INTERVAL_TIME FLUSH_ELAPSED SNAP_LEVEL
---------- ------------------------- ------------------- ----------
569 10-JUN-10 11.00.46.505 AM +00000 00:00:01.7 1
570 10-JUN-10 11.28.13.310 AM +00000 00:00:01.8 2
Configuration
AWR settings can be modified using Enterprise Manager or the package
DBMS_WORKLOAD_REPOSITORY
– Settings are bound to a specific database ID
– In case of a DBID change a new default record is automatically added
The next example shows a configuration change
– TOPNSQL defines the number of top SQLs stored per category
– The default of 30 SQLs is in some cases too small
SELECT * FROM dba_hist_wr_control;
DBID SNAP_INTERVAL RETENTION TOPNSQL
---------- -------------------- -------------------- ----------
3693619282 +00000 01:00:00.0 +00008 00:00:00.0 DEFAULT
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(TOPNSQL=>100);
DBID SNAP_INTERVAL RETENTION TOPNSQL
---------- -------------------- -------------------- ----------
3693619282 +00000 01:00:00.0 +00008 00:00:00.0 100
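The snapshot interval and retention can be changed with the same procedure; both parameters are specified in minutes. A sketch with hypothetical values (30-minute snapshots, 30 days of retention):

```sql
-- Hypothetical settings: interval 30 minutes, retention 30 days
-- (43200 minutes); both values are given in minutes
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS( -
       interval => 30, retention => 43200);
```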
Baselines
A baseline contains performance data from a specific time period
– it is defined as a range of snapshots that are excluded from the purging process
– baseline snapshots are retained indefinitely
– the data is preserved for future comparison with other periods
Baselines can be created directly
– static (fixed) baselines – correspond to a contiguous time period in the past
Baselines can be automatically created by baseline templates
– Templates define a future time period
– “single template” creates one baseline for a defined time period
– “repeating template” creates and drops baselines based on a repeating time schedule (e.g.
every Friday morning)
Baseline Creation
Creating a static baseline for the past three hours with an expiration of one year
– the default expiration is NULL (kept indefinitely)
Baselines can be renamed and dropped
BEGIN
DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
start_time=>sysdate-3/24,end_time=>sysdate,
baseline_name=>'Normal workload',expiration=>365);
END;
/
SELECT BASELINE_TYPE, START_SNAP_ID, END_SNAP_ID, EXPIRATION
FROM dba_hist_baseline WHERE baseline_name='Normal workload';
BASELINE_TYPE START_SNAP_ID END_SNAP_ID EXPIRATION
------------- ------------- ----------- ----------
STATIC 566 569 365
Baseline Metrics
Average, minimum and maximum values of all metrics stored in the baseline's snapshots can be
selected
SELECT metric_name,average,"MINIMUM","MAXIMUM" FROM
table(DBMS_WORKLOAD_REPOSITORY.SELECT_BASELINE_METRIC(
'Normal workload'));
METRIC_NAME AVERAGE MINIMUM MAXIMUM
------------------------------ ---------- ---------- ----------
Average Active Sessions .003979012 0 .100404983
Buffer Cache Hit Ratio 99.7082894 0 100
CPU Usage Per Sec .24042959 0 3.773565
Database Time Per Sec .397901245 0 10.0404983
Logical Reads Per Sec 20.0128217 0 188.166667
Logons Per Sec .058639403 0 .15
Physical Reads Per Sec .261634595 0 47.6302918
...
158 rows selected.
Baseline Templates
Creating a repeating template ‘Friday morning’
– Validity one year
– Expiration of created baselines 90 days
BEGIN
DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE(
day_of_week => 'FRIDAY',
hour_in_day => 8,
duration => 2,
start_time => sysdate,
end_time => sysdate+365,
baseline_name_prefix => 'fr_8_10_',
template_name => 'Friday morning',
expiration => 90);
END;
/
SELECT repeat_interval FROM DBA_HIST_BASELINE_TEMPLATE;
REPEAT_INTERVAL
-----------------------------------------------------------------
FREQ=WEEKLY;INTERVAL=1;BYDAY=FRI;BYHOUR=8;BYMINUTE=0;BYSECOND=0
AWR Repository Views
There are around 100 AWR views; the most interesting ones are listed here
– DBA_HIST_ACTIVE_SESS_HISTORY ASH samples
– DBA_HIST_BASELINE Existing baselines
– DBA_HIST_BASELINE_TEMPLATE Baseline templates
– DBA_HIST_COLORED_SQL Colored SQL statements
– DBA_HIST_FILESTATXS Datafile statistics
– DBA_HIST_MEMORY_TARGET_ADVICE Memory Target advisory
– DBA_HIST_PARAMETER Initialization parameters
– DBA_HIST_SEG_STAT Segment statistics
– DBA_HIST_SEG_STAT_OBJ Segment names
– DBA_HIST_SNAPSHOT Existing snapshots
– DBA_HIST_SQLSTAT SQL statistics
– DBA_HIST_SQLTEXT SQL text
– DBA_HIST_SYSMETRIC_HISTORY System metrics
– DBA_HIST_SYSMETRIC_SUMMARY MIN, MAX, AVG, STDDEV over a longer snapshot interval
– DBA_HIST_SYSSTAT System statistics
– DBA_HIST_SYS_TIME_MODEL Time model statistics
– DBA_HIST_TBSPC_SPACE_USAGE TS space usage
– DBA_HIST_TEMPSTATXS Tempfile statistics
– DBA_HIST_WR_CONTROL AWR configuration
Reports
AWR reports can be created in Enterprise Manager or by calling SQL*Plus scripts in
?/rdbms/admin
The following report types can be created
– Single period single instance
• Scripts awrrpt.sql (current instance), awrrpti.sql (selected instance)
– Single period multiple instance
• Scripts awrgrpt.sql (all instances), awrgrpti.sql (selected instances)
– Compare periods single instance
• Scripts awrddrpt.sql (current instance), awrddrpi.sql (selected instance)
– Compare periods multiple instance
• Scripts awrgdrpt.sql (all instances), awrgdrpi.sql (selected instances)
– SQL reports (single period single instance)
• awrsqrpt.sql, awrsqrpi.sql
Reports can be created in HTML or text format
Reports
Reports can also be created by directly calling the package DBMS_WORKLOAD_REPOSITORY
Examples
– Single period report for DBID 3693619282, instance 1, snapshot ID period 566-567
– Compare period report shows differences between periods 566-567 and 568-569
SQL> SELECT output FROM TABLE
(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(3693619282,1,566,567));
SQL> SELECT output FROM TABLE (DBMS_WORKLOAD_REPOSITORY.AWR_DIFF_REPORT_TEXT
(3693619282,1,566,567,3693619282,1,568,569));
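An AWR SQL report for a single statement can be produced the same way; a sketch where the SQL_ID value is a placeholder:

```sql
-- SQL report for one statement in snapshot period 566-567
-- ('abcd1234efgh5' is a placeholder SQL_ID)
SQL> SELECT output FROM TABLE
       (DBMS_WORKLOAD_REPOSITORY.AWR_SQL_REPORT_TEXT
          (3693619282, 1, 566, 567, 'abcd1234efgh5'));
```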
AWR Report Sections
 Summary Page
 Load Profile
 Instance Efficiency
 Top 5 Wait Events
 Host CPU, Instance CPU and
Memory
 OS Statistics
 Time Model Statistics
 Wait Events and Wait Event
Histograms
 Service Statistics
 Service Wait Class
 Top SQL
 System Statistics
 IO Stats by Function
 IO Stats by Filetype
 Tablespace and File IO
 Memory Advisories
 Enqueue Activity
 Latch Activity
 Mutex Sleeps
 Top Segments (14 categories)
 SGA Memory Summary
 Memory Resize Operations
 AQ Statistics
 Shared Server Statistics
 Instance Parameters
 Use of ADDM Reports alongside AWR
(12c)
(bold sections are not available on
Statspack reports)
AWR Report – 12c
Container/PDB information is visible
in just five sections:
– SQL Statistics
– Tablespace I/O Stats
– File I/O Stats
– Segment Statistics
– Init.ora parameters
New: at the end of the report,
a corresponding ADDM report
Compare Period Reports
Such reports begin with the comparison of the host configuration, SGA configuration
and the workload
Tools:
Active Session History
ASH
ASH Parameters
Initialization parameter “_ash_enable” controls if ASH is enabled
The sampling interval is controlled by “_ash_sampling_interval”
– Default is 1000 (ms)
The following query shows the current retention and the amount of available ASH data in memory
– The retention depends on the activity of sessions
– As only active sessions are sampled, the KPI "AAS" (Average Active Sessions) can be easily computed
SELECT min(sample_time), max(sample_time), count(*),
count(*)/(max(sample_id)-min(sample_id)+1) as AAS
FROM v$active_session_history;
MIN(SAMPLE_TIME) MAX(SAMPLE_TIME) COUNT(*) AAS
------------------ ------------------- -------- ----------
09-JUN-10 07.38.40 09-JUN-10 11.07.12 1868 .1492966
ASH Data
ASH data is stored in a circular buffer in SGA
– Around 150 bytes per sample row
– One permanently active session needs around 500 KB per hour
The contents of the buffer are flushed to disk during an AWR snapshot or when the buffer is 2/3 full
– every 10th sample is written to the table WRH$_ACTIVE_SESSION_HISTORY (view
DBA_HIST_ACTIVE_SESS_HISTORY)
– This is controlled by parameter “_ash_disk_filter_ratio” (default 10)
SELECT min(sample_time), max(sample_time),
(max(sample_id) - min(sample_id) + 1)/10 as samples,
count(*)/(max(sample_id)-min(sample_id)+1)*10 as AAS
FROM dba_hist_active_sess_history;
MIN(SAMPLE_TIME) MAX(SAMPLE_TIME) SAMPLES AAS
------------------ ------------------- -------- ----------
28-MAY-10 12.38.40 09-JUN-10 11.07.12 13075 .1663949
ASH Memory
Allocated ASH memory can be obtained from v$sgastat
– 11gR2 uses around 20% more memory than 10gR2 (new attributes)
ASH sampling is also available for Active Data Guard physical standby instances and Automatic
Storage Management (ASM) instances
– data is collected and displayed in V$ACTIVE_SESSION_HISTORY
– data is not written to WRH$_ACTIVE_SESSION_HISTORY
SELECT * FROM v$sgastat WHERE name='ASH buffers';
POOL NAME BYTES
------------ -------------------------- ----------
shared pool ASH buffers 4194304
ASH Contents
ASH data is multi-dimensional and can be used to find
– Top sessions
– Top SQLs / execution plans
– Top PL/SQL objects
– Top programs
– Top clients
– Top modules and actions
– Top services
– Top blockers
– Top wait events
– Top transaction
– Top resource manager consumer groups
– Top PGA memory consumers
– Top temporary tablespace consumers
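Most of these "top" dimensions boil down to grouping ASH samples and counting. A sketch for top SQL over the last hour (FETCH FIRST is 12c syntax; on older releases wrap the query and filter on ROWNUM):

```sql
-- Top SQL by number of ASH samples in the last hour;
-- at the default 1-second interval, 1 sample ~ 1 second of DB time
SELECT sql_id, COUNT(*) AS samples,
       ROUND(RATIO_TO_REPORT(COUNT(*)) OVER () * 100, 1) AS pct
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 1/24
AND    sql_id IS NOT NULL
GROUP  BY sql_id
ORDER  BY samples DESC
FETCH FIRST 10 ROWS ONLY;
```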
ASH Contents
Analysis of ASH contents is based on the number of samples
– It is assumed that 1 sample equals 1 second of DB time
• if the sampling interval is 1 second (default)
– Important metrics
• accumulated DB time and CPU time
• number of Read/Write- I/O, I/O requests and I/O bytes
– These metrics could be used to rank the activity not only by DB time but also by their CPU or I/O usage
Newer ASH samples include
– PGA allocated in bytes at sample time
– TEMP tablespace usage at sample time
• This feature could be very useful when trying to find out which session used a lot of TEMP space in the past
– Time Model information (e.g. connection mgmt, hard parse, SQL execution)
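The TEMP usage attribute makes it possible to reconstruct which session consumed temporary space at a given time. A sketch using the TEMP_SPACE_ALLOCATED column (available from 11.2 onwards):

```sql
-- Which sessions held the most TEMP space (per sample) in the last hour?
SELECT session_id, session_serial#,
       ROUND(MAX(temp_space_allocated)/1024/1024) AS max_temp_mb
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 1/24
GROUP  BY session_id, session_serial#
ORDER  BY max_temp_mb DESC;
```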
ASH Contents – 12c
New Releases will bring new features, new information, new attributes…
Since 12.1, ASH samples include
– Container/PDB information (CON_ID, CON_DBID)
– DB Replay and capture information (IS_CAPTURED, IS_REPLAYED, ...)
– In-Memory data (IN_INMEMORY_QUERY, ..., since 12.1.0.2)
– DBOP_NAME (database operation name, e.g. 'Database Pump Job'; NULL means SQL)
ASH Report – 12c
Container/PDB Information visible
in just two sections:
– Top Containers
– Top SQL
ASH Reports
ASH reports are an easy way to analyze ASH data
ASH reports can be generated using Enterprise Manager
or using the SQL*Plus scripts located in ?/rdbms/admin
– ashrpt.sql reports all activity within the specified period
– ashrpti.sql allows
• filtering for a specific session, SQL, wait class, service hash, module & action names, client
identifier, or PL/SQL entries
• entering multiple instances in order to create a RAC report
– both scripts interactively ask for parameters and call the function ASH_REPORT_TEXT or
ASH_REPORT_HTML in the package DBMS_WORKLOAD_REPOSITORY
ASH Report Sections
ASH reports perform ranking by the highest percentages of ASH samples and are divided into
the following sections
– Load Profile (Average Active Sessions, Avg. Active Session per CPU)
– Top Events (User events, background events and event parameters)
– Top Containers
– Top Service/Module
– Top Phases of Execution (time model)
– Top SQL with Top Events
– Top SQL using literals
– Top Parsing Module/Action
– Top PL/SQL Procedures
– Top Java Workloads
– Top Sessions (including Event, Program and number of distinct TX-IDs)
– Top Blocking Sessions
– Top Sessions running parallel operations
– Top Objects (Application, Cluster, User I/O and buffer busy waits only)
– Top Latches
– Activity Over Time
Section Activity Over Time
This section divides the analysis period into smaller time slots
– typically 10 slots; the slot width can be specified when using ashrpti.sql
• “Specify Slot Width in seconds to use in the 'Activity Over Time' section”
The top 3 events are reported for each of those slots
– ‘Slot Count’ is the number of samples in that slot
– ‘Event Count’ is the number of samples waiting for that event
– ‘% Event’ is ‘Event Count’ relative to all samples in the analysis period
Performance Analysis
60
Slot Event
Slot Time (Duration) Count Event Count % Event
-------------------- -------- ------------------------------ -------- -------
14:30:00 (1.0 min) 204 log file switch (checkpoint in 148 11.59
CPU + Wait for CPU 31 2.43
log file switch completion 10 0.78
14:31:00 (1.0 min) 622 log file switch (checkpoint in 361 28.27
CPU + Wait for CPU 168 13.16
log file parallel write 21 1.64
Checklist: Performance Issues
61 17.05.2022
Tools:
Automatic DB Diagnostic Monitor
ADDM
Automatic Database Diagnostic Monitor (ADDM)
ADDM diagnoses the root causes of performance problems
ADDM analysis is based on a pair of AWR snapshots (the period)
Analysis is performed each time an AWR snapshot is taken
– the analysis period is defined by the last two snapshots
– the results are saved in the database
The goal of ADDM is to reduce DB Time
– ADDM outputs quantified recommendations
– Recommendations are sorted by DB time savings
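Besides the automatic runs, an ADDM analysis can be started manually for an arbitrary snapshot pair; a sketch using DBMS_ADDM (the snapshot IDs 100/101 and the task name are placeholders):

```sql
-- Run ADDM manually for the snapshot pair 100/101 and print the report
VARIABLE tname VARCHAR2(60)
BEGIN
  :tname := 'my_addm_task';
  DBMS_ADDM.ANALYZE_DB(:tname, 100, 101);
END;
/
SET LONG 1000000 PAGESIZE 0
SELECT DBMS_ADDM.GET_REPORT(:tname) FROM dual;
```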
Performance Analysis
62
Considered Areas
ADDM considers the following problem areas
– CPU bottlenecks – is the CPU being consumed by this database or by other processes?
– Undersized Memory Structures
– I/O capacity issues
– High load SQL statements, PL/SQL execution and compilation
– High load Java usage
– RAC specific issues - global cache hot blocks and objects, interconnect latency issues?
– Sub-optimal use of the database by the application – poor connection management, excessive
parsing or application-level lock contention?
– Database configuration issues
– Concurrency issues - buffer busy problems?
– Hot objects
Performance Analysis
63
ADDM Results
ADDM analysis results are represented as a set of findings
Findings can belong to the following types
– Problem findings – quantified by their impact (portion of DB time) and possibly associated with a list of
recommendations
• Database configuration: changing initialization parameter settings
• Schema changes: hash partitioning a table or index, or using automatic segment-space management
(ASSM)
• Application changes: using the cache option for sequences or using bind variables
• Using other advisors: running SQL Tuning Advisor or the Segment Advisor
• Hardware changes: adding CPUs or changing the I/O subsystem configuration
– Symptom findings - information that may lead to problem findings
– Information findings - relevant for understanding the situation
– Warning findings - about problems that may affect the completeness or accuracy of the analysis
Performance Analysis
64
ADDM Setup
ADDM is enabled if
– CONTROL_MANAGEMENT_PACK_ACCESS is DIAGNOSTIC or DIAGNOSTIC+TUNING
– STATISTICS_LEVEL is not BASIC
The advisor parameter DBIO_EXPECTED influences the analysis of the I/O performance
– it describes the expected average time for a single block read operation in microseconds
– the default value is 10 milliseconds (10000 microseconds) and should only be adjusted if the underlying
hardware performs significantly differently
Performance Analysis
65
SQL> EXEC DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER( -
'ADDM','DBIO_EXPECTED', 5000);
SQL> SELECT parameter_name,parameter_value
FROM dba_advisor_def_parameters
WHERE advisor_name='ADDM' AND parameter_name='DBIO_EXPECTED';
PARAMETER_NAME PARAMETER_VALUE
-------------------- --------------------
DBIO_EXPECTED 5000
ADDM Reports
Reports can be created
– in Enterprise Manager
– by calling SQL*Plus scripts in ?/rdbms/admin
• @addmrpt.sql: reports on current instance
• @addmrpti.sql: prompts for a DBID and an instance number
– by calling the functions
• DBMS_ADDM.GET_REPORT or
• DBMS_ADVISOR.GET_TEXT_REPORT (TYPE=>’ADDM’)
A report contains the following sections
– Analysis Period & Target
– Activity During the Analysis Period (DB time and AAS)
– Summary of Findings (with activity percentage)
– Findings and Recommendations
– Additional Information
Performance Analysis
66
Running addmrpt.sql
Performance Analysis
67
SQL> @?/rdbms/admin/addmrpt
…
Activity During the Analysis Period
-----------------------------------
Total database time was 2447 seconds.
The average number of active sessions was .68.
Summary of Findings
-------------------
Description              Active Sessions       Recommendations
                         Percent of Activity
---------------------- --------------------- ---------------
1 Top SQL Statements .67 | 98.33 5
2 Log File Switches .4 | 58.4 2
3 "Other" Wait Class .03 | 4.95 0
4 "Concurrency" Wait Class .03 | 4.46 0
…
Finding 1: Top SQL Statements
Impact is .67 active sessions, 98.33% of total activity.
--------------------------------------------------------
SQL statements consuming significant database time were found.
Running GET_REPORT
SQL> SELECT task_name, how_created
FROM dba_addm_tasks WHERE created>sysdate-1/24;
TASK_NAME HOW_CREATED
------------------------------ ------------------------------
ADDM:33378954_1_56 AUTO
SQL> set long 1000000 pagesize 0 longchunksize 1000
SQL> SELECT dbms_addm.get_report('ADDM:33378954_1_56') FROM dual;
...
Summary of Findings
-------------------
Description                           Active Sessions     Recom-
                                      Percent of Activity mend.
------------------------------------- ------------------- -----
1 Top SQL Statements .6 | 95.78 5
2 "Other" Wait Class .19 | 30.21 0
3 I/O Throughput .05 | 7.29 2
4 Top Segments by "User I/O" .03 | 4.99 1
5 Unusual "Other" Wait Event .03 | 4.81 4
6 Shared Pool Latches .02 | 3.85 0
7 Buffer Cache Latches .02 | 3.21 1
8 Unusual "Other" Wait Event .02 | 2.43 1
Performance Analysis
68
ADDM Views
DBA_ADDM_FINDINGS
– contains a subset of the findings displayed in DBA_ADVISOR_FINDINGS
– can be queried to find out whether any findings exist
DBA_ADVISOR_RECOMMENDATIONS
– displays the results of completed diagnostic tasks with recommendations
Performance Analysis
69
SQL> SELECT task_name, finding_name, type FROM dba_addm_findings
WHERE finding_name!='normal, successful completion'
ORDER BY task_id;
TASK_NAME FINDING_NAME TYPE
------------------------- ------------------------------ ----------
ADDM:3333478954_1_56 Log File Switches PROBLEM
ADDM:3333478954_1_56 "Configuration" Wait Class SYMPTOM
ADDM:3333478954_1_56 Top SQL Statements PROBLEM
ADDM Views
DBA_ADVISOR_FINDING_NAMES
– contains all possible findings
Performance Analysis
70
SQL> SELECT finding_name FROM DBA_ADVISOR_FINDING_NAMES
WHERE advisor_name='ADDM' ORDER BY id;
FINDING_NAME
----------------------------------------
"Administrative" Wait Class
"Application" Wait Class
"Cluster" Wait Class
...
Undersized instance memory
Top SQL Statements
Top Segments by "User I/O" and "Cluster"
Buffer Busy - Hot Block
Buffer Busy - Hot Objects
83 rows selected.
Every 3 seconds, the MMON process obtains and checks performance statistics
If it detects any of the following issues, it triggers a real-time ADDM analysis
– High load
– I/O bound
– CPU bound
– Over-allocated memory
– Interconnect bound
– Session limit
– Process limit
– Hung session
– Deadlock detected
Performance Analysis
71
Real-Time ADDM
72
Real-Time ADDM in EM DB Express
Checklist: Performance Issues
73 17.05.2022
Common Measures
Common Measures
Gather New Statistics
– Database Statistics
– Data Dictionary Statistics
– Fixed Object Statistics
– System Statistics
Check Memory Advisors
– SGA-Size
– PGA-Size
– Shared Pool Size
– DB-Cache-Size
Checklist: Performance Issues
74 17.05.2022
SGA Target Advisory - Example
This example shows that an increase of the SGA from the current size of 500 MB to 750 MB
would result in
– a potential reduction of the DB time from 1087453 to 118424 seconds
– a potential reduction of the physical reads from 563 million to 68 million
Performance Analysis
75
SELECT SGA_SIZE, SGA_SIZE_FACTOR FACTOR, ESTD_DB_TIME,
ESTD_DB_TIME_FACTOR DB_TIME_FACTOR, ESTD_PHYSICAL_READS
FROM v$sga_target_advice ORDER BY SGA_SIZE;
SGA_SIZE FACTOR ESTD_DB_TIME DB_TIME_FACTOR ESTD_PHYSICAL_READS
---------- ---------- ------------ -------------- -------------------
250 .5 2810957 2.5849 1425938513
375 .75 1821484 1.675 926155240
500 1 1087453 1 563834920
625 1.25 401270 .369 217076444
750 1.5 118424 .1089 68844244
875 1.75 90259 .083 68844244
1000 2 87540 .0805 47418517
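The PGA advisor can be checked the same way; a sketch using V$PGA_TARGET_ADVICE (the estimated cache hit percentage and the over-allocation count indicate whether the current PGA_AGGREGATE_TARGET is sufficient):

```sql
SELECT ROUND(pga_target_for_estimate/1024/1024) target_mb,
       pga_target_factor factor,
       estd_pga_cache_hit_percentage cache_hit_pct,
       estd_overalloc_count overalloc
FROM v$pga_target_advice ORDER BY pga_target_for_estimate;
```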
Checklist: Performance Issues
76 17.05.2022
Analysis with Dynamic
Performance Views
Detail Analysis with Dynamic Performance Views
Analysis tasks that can be performed by querying the dynamic performance views
– Check database performance indicators
– Identify top sessions
– Identify top wait events
– Check memory advisories
– Identify hot segments
– Identify top and important SQL statements
– Check server resources
• only partially possible over V$-Views
– Check optimizer statistics/configuration
• see chapter about the Cost Based Optimizer
Performance Analysis
77
Performance Indicators - Cumulative
Select the cumulative values of DB time and DB CPU and compute the AAS, the number of
CPUs used and the wait time using the time model statistics
Performance Analysis
78
SELECT stat_name, value/1000000 seconds,
ROUND(value/1000000/((sysdate-startup_time)*24*60*60),2) avg_active
FROM v$sys_time_model, v$instance WHERE stat_name like 'DB%'
UNION ALL
SELECT 'Wait time' stat_name,
(dbt.value-dbc.value)/1000000 seconds, NULL
FROM
(SELECT value FROM v$sys_time_model WHERE stat_name='DB time') dbt,
(SELECT value FROM v$sys_time_model WHERE stat_name='DB CPU') dbc;
STAT_NAME SECONDS AVG_ACTIVE
-------------------- -------- -----------
DB time 35684.97 1.32 (Average Active Sessions)
DB CPU 6391.72 .24 (Average #CPUs used)
Wait time 29293.25
Performance Indicators - Metrics
Most important indicators can be found in V$SYSMETRIC
Performance Analysis
79
SELECT metric_name, value, metric_unit FROM v$sysmetric
WHERE group_id=(SELECT group_id FROM v$metricgroup WHERE
name='System Metrics Long Duration') AND metric_id IN
(2003,2004,2006,2016,2018,2030,2044,2046,2057,2075,2107,2108,2123);
METRIC_NAME VALUE METRIC_UNIT
------------------------- -------- ---------------------
CPU Usage Per Sec 469.5218 CentiSeconds Per Second
Database CPU Time Ratio 17.47001 % Cpu/DB_Time
Database Time Per Sec 2687.589 CentiSeconds Per Second
Database Wait Time Ratio 82.52999 % Wait/DB_Time
Host CPU Utilization (%) 94.07062 % Busy/(Idle+Busy)
Logical Reads Per Sec 127489.2 Reads Per Second
Logons Per Sec 3.427 Logons Per Second
Physical Reads Per Sec 253.24 Reads Per Second
Physical Writes Per Sec 163.4667 Writes Per Second
Redo Generated Per Sec 1251923 Bytes Per Second
User Transaction Per Sec 25.3145 Transactions Per Second
Performance Metric Groups
Performance Analysis
80
SELECT g.name, g.interval_size/100 int_sec,
max_interval hist#, count(*) metric#
FROM v$metricgroup g,v$metricname m WHERE g.group_id=m.group_id
GROUP BY g.name, g.interval_size, g.max_interval ORDER BY 1;
NAME INT_SEC HIST# METRIC#
---------------------------- ------ ------ -------
Event Class Metrics 60 60 6
Event Metrics 60 1 5
File Metrics Long Duration 600 6 6
I/O Stats by Function Metrics 60 60 10
Resource Manager Stats 60 60 9
Service Metrics 60 60 5
Service Metrics (Short) 5 24 5
Session Metrics Long Duration 60 60 1
Session Metrics Short Duration 15 1 10
System Metrics Long Duration 60 60 158 -- history 3600s
System Metrics Short Duration 15 12 47 -- history 180s
Tablespace Metrics Long Duration 60 0 2
...
Performance Statistics (metrics)
This example shows the short term history of the system metric
“Physical Reads Per Sec” (long duration)
Performance Analysis
81
SELECT TO_CHAR(begin_time,'HH24:MI:SS') time,ROUND(value,2) value
FROM v$sysmetric_history
WHERE group_id=(SELECT group_id FROM v$metricgroup
WHERE name='System Metrics Long Duration')
AND metric_name = 'Physical Reads Per Sec' ORDER BY begin_time;
TIME VALUE
-------- ----------
14:25:25 60.19
14:26:25 42.93
14:27:25 13.92
...
15:23:25 124.18
15:24:25 18.61
15:25:25 16.95
61 rows selected.
Performance Indicators - Metrics
Performance Analysis
82
– All statistics in the metric group ‘System Metrics Long Duration’ are refreshed
at an interval of 60 seconds
– The metric group ‘System Metrics Short Duration’ is refreshed every 15
seconds, but it does not contain all metrics included in the group ‘System
Metrics Long Duration’
– V$SYSMETRIC_HISTORY contains the last 60 intervals of ‘System Metrics Long
Duration’ metrics and 12 intervals of ‘System Metrics Short Duration’
– Warning: the METRIC_IDs may change in future releases
Performance Indicators - Interpretation
Performance Analysis
83
– If host CPU utilization is over 70%, focus on the CPU usage of all database
instances running on this server
– A low DB CPU time ratio (<40%) means a lot of waits
→ check the wait classes
– Total physical I/O: physical reads + physical writes > 5000/s
→ the system could be I/O bound; find the cause
– Logons per second > 1
→ check connection pooling / the application interface
– Redo per second > 1000000 bytes (1 MB/s = 3.6 GB/h = 86.4 GB/day)
→ check data loads
– A hard parse rate > 10/s may result in high latch contention
Identification of Top Sessions
Performance Analysis
84
SELECT session_id, ROUND(cpu/intsize_csec,2) cpus_used,
ROUND(physical_reads/intsize_csec*100) preads /*per second*/,
ROUND(pga_memory/1024/1024,2) pga_mb, program
FROM v$sessmetric m, v$session s
WHERE m.session_id=s.sid AND (m.cpu>10 OR m.physical_reads>10)
ORDER BY cpus_used;
SESSION_ID CPUS_USED PREADS PGA_MB PROGRAM
---------- ---------- ---------- ---------- -----------------------
9906 .04 3 1.65 perl@srv1
5638 .05 0 1.65 JDBC Thin Client
9428 .08 374 4.08 oe331.exe
8089 .29 1004 11.9 sqlplus@srv1
3707 .36 2333 95.13 oe331.exe
7398 .71 0 1.58 CTEXT.exe
– Top sessions can be identified by querying V$SESSMETRIC
– Data is refreshed every 15 seconds
– Other V$SESSION columns give more details about the session
Identification of Top Sessions
Performance Analysis
85
SELECT session_id, ROUND(cpu/intsize_csec,2) cpus_used,
ROUND(physical_reads/intsize_csec*100) ph_reads /*per second*/,
SUBSTR(sql_text,1,50) sql_text
FROM v$sessmetric m, v$session s, v$sqlstats sql
WHERE m.session_id=s.sid AND s.sql_id=sql.sql_id(+)
AND (m.cpu>10 OR m.physical_reads>10) ORDER BY ph_reads DESC;
SESSION_ID CPUS_USED PH_READS SQL_TEXT
---------- ---------- ---------- ---------------------------------
4966 .48 3309 BEGIN kutil2.export...
9428 .21 1993 SELECT COUNT (COMP_ID) FROM...
8585 .04 153 BEGIN kutil2.export...
4412 .11 98 SELECT DISTINCT COMP_ID FROM...
...
– V$SQLSTATS can be joined in order to retrieve the last SQL command executed by
the session
– Results are sorted by the number of physical reads per second
Identification of Top Sessions
Performance Analysis
86
– V$SESSMETRIC contains only selected session statistics:
CPU, physical reads, logical reads, PGA memory and the number of hard and soft
parses
– In order to rate sessions by other criteria (e.g. redo size), the session statistics have
to be saved temporarily (e.g. in a table) in order to compute the metric manually
– Warning: the STATISTIC# values may change in future releases
CREATE GLOBAL TEMPORARY TABLE my_sesstat
(sid NUMBER, value NUMBER, dt DATE);
INSERT into my_sesstat SELECT sid,value,sysdate
FROM v$sesstat where statistic#=134 /*redo size*/;
<WAIT A MINUTE>
INSERT into my_sesstat SELECT sid,value,sysdate
FROM v$sesstat where statistic#=134 /*redo size*/;
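The delta between the two snapshots can then be computed per session; a sketch based on the my_sesstat table above:

```sql
-- Redo bytes generated per session between the two snapshots
SELECT sid, MAX(value) - MIN(value) redo_bytes
FROM my_sesstat
GROUP BY sid
HAVING MAX(value) - MIN(value) > 0
ORDER BY redo_bytes DESC;
```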
Identification of Top Wait Events
Performance Analysis
87
SELECT SUBSTR(name,1,30) name, num_sess_waiting sess#,
ROUND(time_waited/100,2) time_s, wait_count waits,
ROUND(time_waited_fg/100,2) time_fg_s, wait_count_fg waits_fg
FROM v$event_name n, v$eventmetric m
WHERE n.event#=m.event# AND n.wait_class!='Idle' AND time_waited>0
ORDER BY time_waited DESC;
NAME SESS# TIME_S WAITS TIME_FG_S WAITS_FG
---------------------------- ------ ------ ------ --------- -------
latch: enqueue hash chains 2 50.25 4499 50.25 4499
log file parallel write 0 40.64 146552 0 0
latch: cache buffers chains 0 37.42 3635 37.4 3634
latch free 0 26.15 2651 26.15 2649
latch: cache buffers lru chain 3 10.11 1017 10.1 1016
...
– Top wait events of the last 60 seconds, refreshed every minute
– SESS#: number of sessions waiting at the end of the interval
– FG: foreground sessions
Identification of Top Wait Events
Performance Analysis
88
SELECT begin_time, wait_class, ROUND(dbtime_in_wait) "DBTIME%",
ROUND(m.time_waited/100,2) time_s, m.wait_count waits,
ROUND(m.time_waited_fg/100,2) time_fg_s, m.wait_count_fg waits_fg
FROM v$system_wait_class n, v$waitclassmetric_history m
WHERE n.wait_class#=m.wait_class# AND m.time_waited>100 AND
wait_class!='Idle' ORDER BY begin_time, m.time_waited;
BEGIN_TIME WAIT_CLASS DBTIME% TIME_S WAITS TIME_FG_S WAITS_FG
------------ ----------- ------- ------- ------- --------- --------
...
14-OCT 15:23 System I/O 8 45.45 178801 0 0
14-OCT 15:23 Concurrency 13 76.49 8227 76.33 8223
14-OCT 15:23 User I/O 21 123.23 425747 116.66 0
14-OCT 15:24 System I/O 8 48.9 171067 0 0
14-OCT 15:24 Concurrency 12 74.66 7794 74.54 7789
14-OCT 15:24 User I/O 21 124.18 424620 118.42 0
– Wait class history (one hour retention), refreshed every minute
– DBTIME%: percent of database time spent in waits of that class
Identification of Top Wait Events
Current waits per session
– Refreshed immediately
Performance Analysis
89
SELECT sid, event, state,
wait_time_micro wait_us,time_since_last_wait_micro no_wait_us
FROM v$session WHERE type!='BACKGROUND' AND status='ACTIVE';
SID EVENT STATE WAIT_US NO_WAIT_US
----- ------------------------------ ------------------- ------- ----------
13 db file scattered read WAITED KNOWN TIME 1924 15119
15 latch: cache buffers lru chain WAITED KNOWN TIME 10328 86844
17 latch: enqueue hash chains WAITED KNOWN TIME 5257 31407
22 latch: cache buffers chains WAITING 95 0
24 latch: enqueue hash chains WAITING 15471 0
125 SQL*Net message to client WAITED SHORT TIME 2 104
141 db file sequential read WAITING 520 0
146 db file sequential read WAITING 2273 0
148 db file sequential read WAITING 3683 0
149 direct path read WAITED KNOWN TIME 4035 4645
150 direct path write temp WAITING 363 0
Wait State and Wait Time (V$SESSION)
Possible wait states are
– WAITING - Session is currently waiting
– WAITED UNKNOWN TIME - Duration of the last wait is unknown (when the parameter
TIMED_STATISTICS is set to false)
– WAITED SHORT TIME - Last wait was less than a hundredth of a second
– WAITED KNOWN TIME - Duration of the last wait is specified in the WAIT_TIME column
Wait time columns
– WAIT_TIME - duration of the last wait in hundredths of a second (0 if the session is currently waiting)
– SECONDS_IN_WAIT - time waited for the current event or amount of time since the start of the
last wait
– 11g: WAIT_TIME_MICRO - amount of time waited (in microseconds) in the current or last wait
– 11g: TIME_SINCE_LAST_WAIT_MICRO - time elapsed since the end of the last wait (in
microseconds); 0 if the session is currently in a wait
Performance Analysis
90
Identification of Top Wait Events
All wait events of a particular session
Performance Analysis
91
SELECT sid, event, total_waits waits,
ROUND(time_waited/100,2) time_sec,
ROUND(average_wait*10,1) avg_ms
FROM v$session_event WHERE sid = 15
ORDER BY time_waited DESC;
SID EVENT WAITS TIME_SEC AVG_MS
------ ----------------------------- ------ -------- -------
15 db file sequential read 22700 61.59 2.7
15 latch: cache buffers chains 2115 20.52 9.7
15 latch: row cache objects 1344 13.03 9.7
15 cursor: pin S 302 3.32 11.0
15 latch: shared pool 137 .89 6.5
15 log file switch (checkpoint in 12 .65 54.2
...
Identification of Top Wait Events
All non-idle waits since the last instance startup
Performance Analysis
92
SELECT wait_class, event,
ROUND(time_waited/100,2) time_s, total_waits waits,
ROUND(average_wait*10,1) avg_ms
FROM v$system_event WHERE wait_class!='Idle'
ORDER BY time_waited DESC;
WAIT_CLASS EVENT TIME_S WAITS AVG_MS
----------- ------------------------------ -------- -------- --------
User I/O db file sequential read 3100.82 500132 6.2
Other latch: enqueue hash chains 2419.66 219071 11
System I/O log file parallel write 2000.32 7792650 .3
Concurrency latch: cache buffers chains 1933.61 195386 9.9
Concurrency latch: row cache objects 1255.14 125995 10
Other latch: cache buffers lru chain 486.85 51686 9.4
Concurrency cursor: pin S 315.55 28213 11.2
Identify Hot Segments
Hot segments are objects with a high number of physical reads, logical reads, buffer busy waits or
row lock waits
Can be identified by viewing the segment statistics
The statistics values should be related to
– the size of the segment (columns bytes or blocks in DBA_SEGMENTS)
– the uptime of the instance (values are cumulative)
Hot segments in a specific time interval are shown in Statspack and AWR reports
– unfortunately not related to the segment size
Performance Analysis
93
Identify Hot Segments
Using segment statistics to find top segments in the category “physical reads”
Performance Analysis
94
SELECT owner, object_name, object_type, value FROM
(SELECT * FROM v$segment_statistics WHERE statistic_name='physical reads'
ORDER BY value DESC)
WHERE rownum<5 ORDER BY value DESC;
OWNER OBJECT_NAME OBJECT_TYPE VAL
---------- -------------------- -------------------- ----------
SH CUSTOMERS TABLE 293008
SH COSTS_PROD_BIX INDEX PARTITION 95280
SH COSTS TABLE PARTITION 56112
SH SALES TABLE PARTITION 23680
Identify Hot Segments
Every block of the relatively small table CONTRACT is read from disk 1694 times per day!
Performance Analysis
95
SELECT seg.owner||'.'||seg.segment_name as name,
ROUND(bytes/1024/1024) size_mb,
ROUND(value/(sysdate-startup_time)) preads_per_day,
ROUND(value/blocks/(sysdate-startup_time)) preads_per_block_and_day
FROM dba_segments seg, v$instance, v$segment_statistics segst
WHERE seg.owner=segst.owner AND seg.segment_name=segst.object_name
AND statistic_name='physical reads'
AND NVL(seg.partition_name,'NULL')=NVL(segst.subobject_name,'NULL')
ORDER BY value DESC;
PREADS_PER_
NAME SIZE_MB PREADS_PER_DAY BLOCK_AND_DAY
----------------------------- ---------- -------------- ------------
ADM21.QUOTATION 592 88177349 1164
ADM21.CONTRACT 61 13223731 1694
ADM21.PK_DESCRIPTION 864 911662 8
ADM21.FK_QUOTATION 72 780679 85
ADM21.PARTNER 32 641029 157
Identify Top SQL
Top SQL, a.k.a. high-load SQL, are statements consuming a lot of CPU, I/O and memory
The library cache typically holds thousands of SQL statements
– The retention of statements in the library cache depends on many factors such as the size of the
cache, the number of distinct statements and their complexity
– The retention typically varies between a few minutes and a few hours
• depends on the workload
• may differ for each V$-view
Many V$-views show the contents of the library cache and summarize the consumption of
resources per statement or per cursor
Performance Analysis
96
Identify Top SQL
V$SQLSTATS provides statistics for unique combinations of SQL_ID and PLAN_HASH_VALUE
– SQL_ID: SQL identifier of the parent cursor in the library cache
– PLAN_HASH_VALUE: numeric representation of the SQL plan for this cursor
– SQL_FULLTEXT: text for the SQL statement exposed as CLOB
– CPU_TIME (in microseconds) for parsing, executing and fetching
– ELAPSED_TIME: (in microseconds) for parsing, executing and fetching
– DISK_READS: physical reads
– BUFFER_GETS: logical reads
– SHARABLE_MEM: shared memory (in bytes) currently occupied by all cursors
– EXECUTIONS: number of executions that took place since it was brought into the library cache
V$SQL contains more details about each cursor
– USERS_EXECUTING: number of sessions executing this statement
– FIRST_LOAD_TIME: time the cursor was brought into the library cache
– OPTIMIZER_COST: cost of this query given by the optimizer
Performance Analysis
97
Identify Top SQL
V$SQL_BIND_CAPTURE displays information on bind variables used in cursors
– NAME: of the bind variable
– VALUE_STRING: value of the bind represented as a string (one of the values used during a past execution of its
associated cursor)
– captured are bind variables used in the WHERE or HAVING clauses
V$SQL_OPTIMIZER_ENV displays the contents of the optimizer environment used to build the
execution plan of a SQL cursor
– NAME: of the parameter (e.g. “optimizer_features_enable”)
– VALUE: value of the parameter (e.g. “10.2.0.4”)
V$SQL_WORKAREA_ACTIVE shows work areas currently allocated
– OPERATION_TYPE (SORT, HASH JOIN, GROUP BY, …)
– ACTUAL_MEM_USED: PGA memory (in bytes) currently allocated
– TEMPSEG_SIZE: size (in bytes) of the temporary segment used
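As an illustration for V$SQL_BIND_CAPTURE, the captured bind values of one statement can be listed like this; a sketch (the SQL_ID is a placeholder):

```sql
SELECT child_number, name, position, datatype_string, value_string, last_captured
FROM v$sql_bind_capture
WHERE sql_id = '9sg53ps2g0ps2'  -- placeholder SQL_ID
ORDER BY child_number, position;
```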
Performance Analysis
98
Identify Top SQL
Statements ordered by their total elapsed_time (=> DB time)
Performance Analysis
99
SELECT sql_id, ROUND(cpu_time/1000000,2) "CPU", ROUND(elapsed_time/1000000,2)
"Elaps", ROUND(disk_reads) "PhReads", ROUND(buffer_gets) "LogReads",
executions "Executes", ROUND((sysdate-last_active_time)*1440) "Minut. Ago"
FROM v$sqlstats ORDER BY elapsed_time DESC;
SQL_ID CPU Elaps PhReads LogReads Executes Minut. Ago
------------- -------- -------- -------- -------- -------- ----------
9sg53ps2g0ps2 9500.75 40593.8 27924322 5.11E+08 178 112
3c2nr9k15xccf 5511.38 5514.84 19 4.48E+08 1.89E+08 1
dm333j6hxzm3g 5401.42 5402.68 0 4.45E+08 1.89E+08 1
26ywz70p2ct0u 4971.57 4983.9 69 2.01E+08 48236 0
ajgappu6rtcyv 4737.91 4744.84 1310 1.92E+08 4421 0
2bj1jsb0tt717 2630.46 2636.98 1 2.00E+08 66682196 0
gtxzycka2u5fy 1768.69 1904.29 140315 83788901 718 2
gydrp3fmpsm4f 442.63 891.61 40946 11958948 1 141
Identify Top SQL
Statements ordered by their average elapsed_time
Performance Analysis
100
SELECT sql_id, ROUND(cpu_time/executions/1000000,2) "CPU/ex",
ROUND(elapsed_time/executions/1000000,2) "Elaps/ex",
ROUND(disk_reads/executions) "PR/ex", ROUND(rows_processed/executions)
"Rows/ex", executions "Execs", optimizer_cost "Cost", users_executing
"Current"
FROM v$sql WHERE executions>0 ORDER BY elapsed_time/executions DESC;
SQL_ID CPU/ex Elaps/ex PR/ex Rows/ex Execs Cost Current
------------- ------ -------- ------ ------- ------ ------ -------
gydrp3fmpsm4f 442.63 891.61 40946 1 1 0 0
9sg53ps2g0ps2 53.37 228.3 157027 1 179 0 1
5z0r8s71drxc7 25.18 180.1 69306 300 5 62 0
bjzjsujrpg7s4 2.69 48.82 10466 2057 5 38 0
7w7a2atyp74cy 28.72 48.35 47302 985212 1 11227 0
bm1k2mqxrdncn 1.34 40.15 14099 2 2 20130 0
b0mdju9p7najn 8.68 37.99 132 1 1 0 0
8uqtp9m4su2f6 11.72 21.24 68785 9053 5 337751 1
Identify Top SQL
Current PGA memory and temporary tablespace usage per statement (in bytes)
– workarea_size is the allocated memory
Performance Analysis
101
SELECT sql_id, SUM(ACTUAL_MEM_USED) actual_mem_used,
SUM(WORK_AREA_SIZE) workarea_size,
SUM(TEMPSEG_SIZE) alloc_temp_size
FROM v$sql_workarea_active GROUP BY sql_id ORDER BY 2 DESC;
SQL_ID ACTUAL_MEM_USED WORKAREA_SIZE ALLOC_TEMP_SIZE
------------- -------------- ------------- ---------------
8j0n5xh1smbs3 529778688 693104640 9736028160
8wr3bvwqwasuf 88411136 98216960
36ksnj22f6vss 5901312 5898240 5583667200
bkzt5zg8r6ac4 5901312 5898240 5426380800
7cccyn1874ah3 692224 692224
48z2qrs9pdv17 145408 178176
67hpxuxsg867w 0 3648512
Check Server Resources
The average CPU utilization of the database server within the last 15 seconds can be selected
directly from V$SYSMETRIC
– This metric is reliable and matches the results shown by the system utility mpstat
Performance Analysis
102
SQL> !mpstat 15 1
04:15:13 PM CPU %user %nice %sys %iowait %idle intr/s
04:15:28 PM all 26.63 0.00 0.77 0.02 72.11 4513.47
Average: all 26.63 0.00 0.77 0.02 72.11 4513.47
SQL> SELECT value, metric_unit FROM v$sysmetric WHERE metric_name='Host CPU
Utilization (%)' AND group_id=(SELECT group_id FROM v$metricgroup WHERE
name='System Metrics Short Duration');
VALUE METRIC_UNIT
---------- ------------------------------
27.9147097 % Busy/(Idle+Busy)
Check Server Resources
V$OSSTAT shows information about the database server
Performance Analysis
103
SELECT stat_name,value FROM v$osstat ORDER BY stat_name;
STAT_NAME VALUE
------------------------------ --------------
BUSY_TIME 2405773468
IDLE_TIME 8285566777
IOWAIT_TIME 551614543
LOAD 3.029296875 /* current value */
NICE_TIME 8295
NUM_CPUS 16 /* threads */
NUM_CPU_CORES 8 /* cores */
NUM_CPU_SOCKETS 2 /* sockets */
PHYSICAL_MEMORY_BYTES 67585912832 /* RAM */
RSRC_MGR_CPU_WAIT_TIME 39073870
SYS_TIME 558782034
USER_TIME 1758082647
VM_IN_BYTES 4526080
VM_OUT_BYTES 20391936
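Since BUSY_TIME and IDLE_TIME in V$OSSTAT are cumulative values (in centiseconds), an average host CPU utilization over the covered period can be derived from them; a sketch:

```sql
SELECT ROUND(busy.value/(busy.value + idle.value)*100, 2) avg_cpu_busy_pct
FROM (SELECT value FROM v$osstat WHERE stat_name = 'BUSY_TIME') busy,
     (SELECT value FROM v$osstat WHERE stat_name = 'IDLE_TIME') idle;
```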
Checklist: Performance Issues
104 17.05.2022
More information
More information .. - MOS notes (1)
Diagnosing Performance Issues
– How to Investigate Slow or Hanging Database Performance Issues (Doc ID 1362329.1)
– Collecting Diagnostic Information For DB Performance Issues (Doc ID 1998964.1)
– Diagnostics For Database Performance Issues (Doc ID 781198.1)
– How to Use AWR Reports to Diagnose Database Performance Issues (Doc ID 1359094.1)
– How to use OS Commands to Diagnose Database Performance Issues? (Doc ID 1401716.1)
– How to Collect Diagnostics for Database Hanging Issues (Doc ID 452358.1)
– How to Get Historical Session Information in Standard Edition (Doc ID 2055993.1)
Avoiding Performance Issues
– Avoiding and Resolving Database Performance Related Issues After Upgrade (Doc ID 1528847.1)
– Best Practices: Proactively Avoiding Database and Query Performance Issues (Doc ID 1482811.1)
– Best Practices: Proactive Data Collection for Performance Issues (Doc ID 1477599.1)
Checklist: Performance Issues
105 17.05.2022
More information .. - MOS notes (2)
Statspack
– Systemwide Tuning Using STATSPACK Reports (Doc ID 228913.1)
– FAQ: Statspack Complete Reference (Doc ID 94224.1)
– Performance overhead when running statspack (Doc ID 396061.1)
– Gathering a StatsPack Snapshot (Doc ID 149121.1)
– Installing and Configuring StatsPack Package (Doc ID 149113.1)
– How To Automate Purging of Statspack Snapshots (Doc ID 464214.1)
– Installing and Using Standby Statspack (Doc ID 454848.1)
Checklist: Performance Issues
106 17.05.2022
More information .. - MOS notes (3)
Automatic Workload Repository (AWR)
– Performance Diagnosis with Automatic Workload Repository (AWR) (Doc ID 1674086.1)
– Automatic Workload Repository (AWR) Reports - Main Information Sources (Doc ID 1363422.1)
– Comparing The AWR Report With The Baseline Values (Doc ID 1258564.1)
– FAQ: Automatic Workload Repository (AWR) Reports (Doc ID 1599440.1)
– How to generate 'Automatic Workload Repository' (AWR), 'Automatic Database Diagnostic
Monitor' (ADDM), 'Active Session History' (ASH) reports (Doc ID 2349082.1)
Automatic Database Diagnostic Monitor (ADDM)
– How to Compare ADDM Reports (Doc ID 2168126.1)
– How to Generate and Check an ADDM report (Doc ID 1680075.1)
Active Session History (ASH)
– Analysis of Active Session History (ASH) Online and Offline (Doc ID 243132.1)
Checklist: Performance Issues
107 17.05.2022
More information ..
Christian Antognini "Troubleshooting Oracle Performance"
– Part II "Identification"
• Analysis of Reproducible Problems
• Real-Time Analysis of Irreproducible Problems
• Postmortem Analysis of Irreproducible Problems
Checklist: Performance Issues
108 17.05.2022
More information ..
Trivadis Training "O-TUN - Oracle Database Performance Troubleshooting and
Tuning"
– 3 days
Contents
– Terminology
– Statistics, Metrics, Waits, Locks, Latches, Performance Indicators
– Performance Analysis
– Procedure
– Introduction to the Cost-Based Optimizer
– Memory Tuning: SGA and PGA Management, Recommendations
– Optimizing Data Structures and Data Access
– I/O Analysis, Calibration, Storage Issues, Tuning Recommendations
– Oracle Performance and Resource Management
Checklist: Performance Issues
109 17.05.2022
Questions and Answers
Markus Flechtner
Principal Consultant
Phone +49 211 5866 64725
Markus.Flechtner@Trivadis.com
@markusdba http://markusdba.de
Download the slides from https://www.slideshare.net/markusflechtner
Please don’t forget the session evaluation – Thank you!
17.05.2022 Checklist: Performance Issues
110
Oracle - Checklist for performance issues

  • 1. BASLE BERN BRUGG DÜSSELDORF FRANKFURT A.M. FREIBURG I.BR. GENEVA HAMBURG COPENHAGEN LAUSANNE MUNICH STUTTGART VIENNA ZURICH Checklist: Database Performance Issues Markus Flechtner
  • 2. Trivadis – Our mission. Checklist: Performance Issues 2 17.05.2022 Trivadis makes IT easier: We provide significant support for our customers in the smart use of data in the digital age. We reduce complexity for our customers through outstanding technological expertise. We take over key tasks in the existing and future IT of our customers.
  • 3. Trivadis – What sets us apart. Checklist: Performance Issues 3 17.05.2022 We understand the business processes and economic challenges of our customers and support them through IT consulting and in the development of comprehensive IT solutions. Our proven products, developed by Trivadis, are based on in-depth expertise in the key technologies offered by Microsoft, Oracle and Open Source. That sets us apart from the competition. A selection of awards we have received OPEN SOURCE
  • 4. Trivadis – Our key figures Checklist: Performance Issues 4 17.05.2022 Founded in 1994 15 Trivadis locations with more than 650 employees Sales of CHF 111 million (EUR 96 million) Over 250 Service Level Agreements More than 4000 training participants Research and development budget: CHF 5.0 million More than 1900 projects each year with over 800 customers Financially independent and sustainably profitable
  • 5. About me – Markus Flechtner Principal Consultant, Trivadis, Duesseldorf/Germany, since April 2008 Working with Oracle since the 1990s – Development (Forms, Reports, PL/SQL) – Support – Database Administration Focus – Oracle Real Application Clusters – Database Upgrade and Migration Projects Teacher – O-RAC – Oracle Real Application Clusters – O-NF-DBA – Oracle Database New Features for the DBA – O-MT – Oracle Multitenant – PG4ORA – PostgreSQL for Oracle DBAs Blog: https://markusdba.net/ @markusdba 17.05.2022 Checklist: Performance Issues 5
  • 6. 17.05.2022 Checklist: Performance Issues 6 Technology on its own won't help you. You need to know how to use it properly.
  • 7. Agenda Checklist: Performance Issues 7 17.05.2022 1. Specify the Problem 2. Performance Analysis Methodology 3. Tools: Diagnostic Pack & Statspack 4. Tools: Tuning Pack 5. Common Measures 6. More Information
  • 8. Checklist: Performance Issues 8 17.05.2022 Specify the problem
  • 9. Specify the problem (1) What is slow? – "everything" – A single query – One or more parts of the application (e.g. a specific batch job) When did it happen? – Permanent, ongoing – At specific times (specific days, day of week, hours, ..) – At irregular times – Can you reproduce it at will? Is there a response time specification in an SLA which was or is being violated? – If not, then there is no problem  What are the current and the expected response times? Checklist: Performance Issues 9 17.05.2022
  • 10. Specify the problem (2) How do you evaluate the performance problem? – Poor end-user response time – Long job duration – Timeout – Irregular response time – SLA not fulfilled – Database call hanging – Other .. Was anything changed? What other activities were/are occurring when the problem occurred? Checklist: Performance Issues 10 17.05.2022
  • 11. Checklist: Performance Issues 11 17.05.2022 Performance Analysis Methodology
  • 12. Performance Analysis Methodology (1) – overview flowchart (Source: Trivadis training O-TUN). Depending on whether the problem occurred in the past (analyse a Statspack report, or an AWR report and ADDM findings with the Diagnostic Pack) or occurs now (analyse an ASH report, identify top/involved sessions), the method drills down: check database key performance indicators (CPU, IO rate, DB time, redo, transaction rate, logons, ...), check server resources (memory, swap, CPU user/kernel), identify top wait events, check memory advisories, identify hot segments, identify top/important SQL, check optimizer statistics/configuration, identify recent changes, perform SQL trace and SQL tuning – or conclude that it is not a database problem and check/tune other components. Typical bottleneck areas: inefficient SQL, bad indexing, high parsing, high executions, many logons, slow IO, high IO rate, high CPU usage, memory, locking, hot segments. Checklist: Performance Issues 12 17.05.2022
  • 13. Performance Analysis Methodology (2) Historical Problem – Run AWR for the period – Run ADDM for the period – Run AWR compare report to compare the period in question with a period with "good performance" Current Problem – Check if there is a "Real-Time-ADDM" – Check ASH – Check Real-Time-SQL-Monitoring Checklist: Performance Issues 13 17.05.2022
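For a current problem, a first look at ASH often gives the quickest orientation before generating full reports. The following query is an illustrative sketch (not from the slides) showing the top wait events of the last 15 minutes; it requires a Diagnostic Pack license and 12c syntax for FETCH FIRST:

```sql
-- Top events sampled in ASH during the last 15 minutes (sketch).
-- Sessions on CPU have a NULL event and are shown as 'ON CPU'.
SELECT NVL(event, 'ON CPU') AS event,
       COUNT(*)             AS samples,
       ROUND(100 * RATIO_TO_REPORT(COUNT(*)) OVER (), 1) AS pct
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 15/1440
GROUP  BY NVL(event, 'ON CPU')
ORDER  BY samples DESC
FETCH FIRST 10 ROWS ONLY;
```

With the default sampling interval of one second, each sample approximates one second of DB time, so the percentages roughly reflect where DB time is currently being spent.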
  • 14. Enterprise Manager Cloud Control OEM is very helpful for generating performance analysis reports (ADDM, AWR, ASH, ..) Checklist: Performance Issues 14 17.05.2022
  • 15. SQL Developer If you don't have OEM Cloud Control available, SQL Developer may help to generate performance related reports Checklist: Performance Issues 15 17.05.2022
  • 16. Use SQL*Plus If you have neither OEM Cloud Control nor SQL Developer (nor TOAD) at hand, you can use the PL/SQL packages in the database Checklist: Performance Issues 16 17.05.2022 SQL> SELECT output FROM TABLE (DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(3693619282,1,566,567)); SQL> SELECT output FROM TABLE (DBMS_WORKLOAD_REPOSITORY.AWR_DIFF_REPORT_TEXT (3693619282,1,566,567,3693619282,1,568,569));
  • 17. Checklist: Performance Issues 17 17.05.2022 Tools: Statspack
  • 18. 18 Statspack Introduction Statspack is a set of SQL, PL/SQL, and SQL*Plus scripts – All scripts are located in $ORACLE_HOME/rdbms/admin Statspack allows collection, storage, and viewing of performance data Statspack separates the data collection from the report generation The performance data is collected when a snapshot is taken A snapshot is a set of statistics gathered at a single time and is identified by the snapshot id; each time a new collection is taken, a new snap_id is generated All instances in a RAC environment have to be configured separately Performance Analysis 18
  • 19. 19 Statspack Installation Statspack has to be installed by a DBA on a per database instance basis – The installation script spcreate.sql creates the repository schema PERFSTAT with a number of tables and the STATSPACK package – By default the repository is placed in the SYSAUX tablespace A batch installation is also possible Performance Analysis 19 SQL> @?/rdbms/admin/spcreate SQL> connect / as sysdba SQL> define default_tablespace='sysaux' SQL> define temporary_tablespace='temp' SQL> define perfstat_password='<passwd>' SQL> @?/rdbms/admin/spcreate SQL> undefine perfstat_password
  • 20. 20 Statspack Levels The amount of performance data gathered by the package is controlled by specifying a snapshot level Snapshot levels – level 0: general performance statistics – level 5 (DEFAULT): level 0 + SQL statements in the library cache exceeding one of the predefined thresholds – level 6: level 5 + SQL plans and SQL plan usage of statements gathered in level 5 – level 7: level 6 + segment-level statistics exceeding one of the predefined thresholds – level 10: level 7 + parent and child latches Trivadis recommends Statspack level 7 as it allows reporting hot segments and includes SQL plans – The creation of a snapshot may take a few seconds and consume 50–100 million logical reads Performance Analysis 20
  • 21. 21 Statspack Collection Can be performed manually by calling the procedure SNAP In order to automate the snapshot collection a batch job is needed – on busy systems such a job may hang - therefore it is important to monitor its runtime and stop it if it does not finish within a short time period; this can be performed in a cron job or by two DBMS_SCHEDULER jobs • the first job calls the procedure SNAP and raises an event if its max_run_duration exceeds a predefined time interval (e.g. 5 minutes) • such an event (JOB_OVER_MAX_DUR) is consumed by the second job that stops the first job – old snapshot data should be deleted regularly (e.g. right before or after the collection) Performance Analysis 21 SQL> EXEC perfstat.statspack.snap SQL> EXEC perfstat.statspack.purge(I_PURGE_BEFORE_DATE=>sysdate-7);
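The automated collection described above can be sketched with DBMS_SCHEDULER. The job and schedule below are illustrative assumptions, not part of the slides: one job takes a snapshot every 30 minutes and has a max_run_duration of 5 minutes; when exceeded, the scheduler raises the JOB_OVER_MAX_DUR event, which a second, event-based job (not shown) would consume to stop the hanging run:

```sql
-- Sketch: scheduled Statspack snapshot with a runtime limit.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PERFSTAT.SP_SNAP_JOB',       -- illustrative name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN perfstat.statspack.snap; END;',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=30',
    enabled         => FALSE);
  -- raise JOB_OVER_MAX_DUR if a run exceeds 5 minutes
  DBMS_SCHEDULER.SET_ATTRIBUTE('PERFSTAT.SP_SNAP_JOB',
    'max_run_duration', INTERVAL '5' MINUTE);
  DBMS_SCHEDULER.ENABLE('PERFSTAT.SP_SNAP_JOB');
END;
/
```

A purge call (perfstat.statspack.purge) can be scheduled the same way, e.g. right before or after the collection.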
  • 22. 22 Statspack Session Collection In addition to instance-level data the procedure SNAP can collect session data of one selected session per snapshot – session data includes session wait events, session time model statistics and session statistics (V$SESSTAT) – session data is shown in reports if the session ID and its serial# match between the selected snapshot IDs Performance Analysis 22 SQL> EXEC perfstat.statspack.snap(i_session_id => 21) SQL> SELECT snap_id, snap_level, session_id, serial# FROM perfstat.stats$snapshot ORDER BY snap_id; SNAP_ID SNAP_LEVEL SESSION_ID SERIAL# ---------- ---------- ---------- ---------- 3151 7 0 0 3152 7 21 3434 3153 7 21 3434 3154 7 0 0
  • 23. 23 Statspack Parameters Altering default parameters and thresholds can be performed by MODIFY_STATSPACK_PARAMETER – changed parameters apply only for the current database ID Performance Analysis 23 SQL> EXEC perfstat.statspack.modify_statspack_parameter - ( i_snap_level => 7, - i_disk_reads_th => 10000, - i_buffer_gets_th => 1000000, - i_seg_phy_reads_th => 10000, - i_seg_log_reads_th => 1000000 - ); SQL> SELECT dbid, snap_level, disk_reads_th, buffer_gets_th FROM perfstat.stats$statspack_parameter; DBID SNAP_LEVEL DISK_READS_TH BUFFER_GETS_TH ---------- ---------- ------------- -------------- 3693619282 7 10000 1000000
  • 24. 24 Statspack Baselines Snapshot data worth keeping can be marked as baselines and will not be purged by the purge procedure – The procedure MAKE_BASELINE marks snapshot IDs as baselines but it does not perform any consistency checks on the snapshots requested to be baselined – The procedure CLEAR_BASELINE removes the baseline marker Performance Analysis 24 SQL> EXEC perfstat.statspack.make_baseline - ( i_begin_snap => 3151, - i_end_snap => 3152 - ); SQL> SELECT snap_id, snap_level, baseline FROM perfstat.stats$snapshot WHERE snap_id > 3150; SNAP_ID SNAP_LEVEL BASELINE ---------- ---------- -------- 3151 7 Y 3152 7 Y 3153 7
  • 25. 25 Statspack Reports Statspack allows the generation of performance reports – instance reports (spreport.sql) - covering all aspects of instance performance during a time interval defined by two snapshot IDs – SQL reports (sprepsql.sql) - for a specific SQL statement identified by its HASH_VALUE during one time interval A batch report generation is also possible Performance Analysis 25 SQL> @?/rdbms/admin/spreport ... Specify the Begin and End Snapshot Ids ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Enter value for begin_snap: SQL> connect / as sysdba SQL> define begin_snap=3151 SQL> define end_snap=3153 SQL> define report_name=sp_3151_3153 SQL> @?/rdbms/admin/spreport
  • 26. 26 Statspack Reports A report can only be generated if the specified time period does not span an instance shutdown The time units used in reports are specified in the column headings of each timed column – (s) - a second – (cs) - a centisecond – a 100th of a second – (ms) - a millisecond – a 1000th of a second – (us) - a microsecond – a 1000000th of a second Some aspects of the instance report can be configured by altering the script sprepins.sql – num_rows_per_hash (default 4) - number of rows of text per SQL – top_pct_sql (default 1.0%) - only SQLs exceeding this percentage of resources used are shown on reports – top_n_segstat (default 5) - number of hot segments to be displayed Performance Analysis 26
  • 27. 27 Statspack Report Sections Performance Analysis 27  Summary Page  Load Profile  Instance Efficiency  Top 5 Wait Events  Host CPU, Instance CPU and Memory  Time Model Statistics  Wait Events and Wait Event Histograms  Top SQL  System Statistics  OS Statistics  Session Statistics (if exist)  Session Wait Events  Time Model Statistics  Session Statistics  IO Stats by Function  Tablespace and File IO  Buffer Cache and SGA Advisories  PGA Memory Advisory  Top Process Memory  Enqueue Activity  Latch Activity  Mutex Sleeps  Top Segments by (4 categories)  Shared Pool, Java Pool, SGA Target Advisories  SGA Memory Summary  Instance Parameters
  • 28. 28 Performance Analysis with Statspack Reports The methodology when performing an analysis based on a Statspack report is very similar to the methodology used for the analysis using Dynamic Performance Views The detailed analysis steps and their corresponding report sections – “Check database KPI”: Load Profile and Time Model Statistics – “Identify top wait events”: Top 5 Wait Events – “Check memory advisories”: Memory Advisories – “Identify hot segments”: Segments by… – “Identify top SQL”: SQL ordered by … – “Check server resources”: Host CPU, OS Statistics – “Perform SQL tuning”: generate SQL reports for the top statements Performance Analysis 28
  • 29. 29 Statspack Load Profile Section Performance Analysis 29  This section shows some important key performance indicators and allows you to quantify the workload  Physical reads: values greater than 5000-10000 indicate a very high IO load  Physical writes: values greater than 1000 indicate many data loads  Logons: values greater than 1 may indicate connection pool/application interface problems  Recursive Call %: high values may indicate a lot of PL/SQL  Rollback per transaction %: values over 1% may indicate application errors  In this section new important indicators are displayed (~ AWR)  DB time(s) and DB CPU(s)  W/A MB processed (SQL WorkArea MB processed)
  • 30. 30 | a huge DWH | |a huge multi- | | an OLTP | | database | |appl. database| | database | --------------- ---------------- -------------- Redo size: 11,656,147.85 502,627.91 96,893.15 Logical reads: 223,194.51 293,538.76 5,201.45 Block changes: 34,132.96 3,631.52 597.63 Physical reads: 36,669.91 1,892.25 48.58 Physical writes: 7,607.93 172.92 21.11 User calls: 377.54 3,459.70 194.43 Parses: 105.67 1,484.86 12.19 Hard parses: 2.19 32.76 4.39 Sorts: 67.11 1,011.47 4.90 Logons: 0.51 7.02 0.03 Executes: 12,282.65 7,001.83 119.98 Transactions: 34.22 106.53 12.35 Recursive Call%: 98.24 79.03 55.08 Rollback per transaction %: 0.15 7.14 0.11 Rows per Sort: 8687.73 203.01 24.05  Three different Load Profiles (only per second values) Performance Analysis 30 Statspack Load Profile Section
  • 31. 31 Instance Efficiency Percentages (Target 100%) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Buffer Nowait %: 99.83 Redo NoWait %: 100.00 Buffer Hit %: 90.80 In-memory Sort %: 99.96 Library Hit %: 100.20 Soft Parse %: 97.93 Execute to Parse %: 99.14 Latch Hit %: 99.34 Parse CPU to Parse Elapsd %: 37.19 % Non-Parse CPU: 99.85 Shared Pool Statistics Begin End ------ ------ Memory Usage %: 88.55 92.26 % SQL with executions>1: 88.85 90.22 % Memory for SQL w/exec>1: 90.36 92.24  All efficiency percentages should be “not far” from 100%  Low “Parse CPU to Parse Elapsed” might indicate latch waits during parse operations  Low “%SQL with executions>1” might indicate bad cursor sharing Performance Analysis 31 Statspack Instance Efficiency Section
  • 32. 32 Statspack Top 5 Timed Events Section Performance Analysis 32 Top 5 Timed Events Avg %Total ~~~~~~~~~~~~~~~~~~ wait Call Event Waits Time (s) (ms) Time ------------------------------- ----------- ----------- ------ ------ CPU time 3,083 db file scattered read 28,938,043 755 26 13.3 db file sequential read 107,055,270 505 5 8.9 read by other session 30,882,189 378 12 6.7 direct path read 17,681,836 209 12 6.0 ----------------------------------------------------  This section includes the total CPU time used by all instance processes (this statistic may include wait time for CPU)  Total Call Time is the sum of DB time and Background elapsed time  Events are important if their percentage of Total Call Time is relevant  The average wait for “read” events should not exceed 10ms
  • 33. 33 Statspack Top SQL Section Performance Analysis 33 CPU CPU per Elapsd Old Time (s) Executions Exec (s) %Total Time (s) Buffer Gets Hash Value ---------- ------------ ---------- ------ ---------- --------------- ---------- 19.17 1 19.17 46.5 134.47 5,597 3309535418 Module: SQL*Plus select * from sh.sales s, sh.products p where p.prod_id=s.prod_i d minus select * from sh.sales s, sh.products p where p.prod_id= s.prod_id ... Elapsed Elap per CPU Old Time (s) Executions Exec (s) %Total Time (s) Physical Reads Hash Value ---------- ------------ ---------- ------ ---------- --------------- ---------- 134.47 1 134.47 107.9 19.17 56,791 3309535418 ... CPU Elapsd Old Physical Rds Executions Rds per Exec %Total Time (s) Time (s) Hash Value -------------- ------------ -------------- ------ -------- --------- --------- 56,791 1 56,791.0 102.6 19.17 134.47 3309535418  Often a specific SQL statement appears in more than one category  Old Hash Value corresponds to the column V$SQL.OLD_HASH_VALUE and is the input parameter for Statspack SQL reports
  • 34. 34 Querying Statspack Repository Performance Analysis 34  … allows access to more than one interval  “DB time (s/s)” means DB time in seconds per second and corresponds to “Average Active Sessions” SELECT TO_CHAR(snap_date,'DD-MON HH24:MI:SS') snap_date, ROUND((value-prev_value)/1000000,2) "DB time(s)", ROUND((value-prev_value)/1000000/((snap_date-prev_date)*24*3600),2) AS "DB time(s/s)" FROM (SELECT sn.snap_time snap_date, s.value value, LAG(sn.snap_time, 1, NULL) OVER (ORDER BY sn.snap_id) prev_date, LAG(s.value, 1, NULL) OVER (ORDER BY sn.snap_id) prev_value FROM v$statname n, stats$sys_time_model s, stats$snapshot sn WHERE n.name='DB time' AND n.stat_id=s.stat_id AND s.snap_id=sn.snap_id ORDER BY sn.snap_id); SNAP_DATE DB time(s) DB time(s/s) --------------- ---------- ------------ 04-DEC 14:22:04 3456.63 .96 04-DEC 15:22:04 4028.95 1.12 04-DEC 16:22:04 4535.32 1.26 04-DEC 17:22:04 5208.32 1.45
  • 35. 35 Querying Statspack Repository Performance Analysis 35  Looking for periods with an average physical reads rate > 100 SELECT TO_CHAR(snap_date,'DD-MON HH24:MI:SS') snap_date, (value-prev_value) "Physical Reads", ROUND((value-prev_value)/((snap_date-prev_date)*24*3600),2) AS "Physical Reads/s" FROM (SELECT sn.snap_time snap_date, s.value value, LAG(sn.snap_time, 1, NULL) OVER (ORDER BY sn.snap_id) prev_date, LAG(s.value, 1, NULL) OVER (ORDER BY sn.snap_id) prev_value FROM v$statname n, perfstat.stats$sysstat s, perfstat.stats$snapshot sn WHERE n.name='physical reads' AND n.statistic#=s.statistic# AND s.snap_id=sn.snap_id ORDER BY sn.snap_id) WHERE (value-prev_value)/((snap_date-prev_date)*24*3600) > 100; SNAP_DATE Physical Reads Physical Reads/s --------------- -------------- ---------------- 04-DEC 14:22:04 1969884 547.19 04-DEC 18:22:04 1623744 451.04
  • 36. Checklist: Performance Issues 36 17.05.2022 Tools: Automatic Workload Repository AWR
  • 37. 37 Automatic Workload Repository (AWR) AWR collects, processes, and maintains performance statistics for problem detection and self- tuning purposes AWR was initially based on Statspack, and there are still many similarities – The snapshot concept – Source tables and repository tables – Reports The differences are – Automatic installation, snapshot creation and purging – AWR repository is part of the data dictionary – AWR processes ASH data – No explicit session data gathering – AWR requires a "Diagnostic Pack" license Performance Analysis 37
  • 38. Snapshots A snapshot is a set of performance data for a time period – Snapshots are stored in the SYSAUX tablespace by a special background process called Manageability Monitor (MMON) – By default, snapshots are taken every 60 minutes and are retained for 8 days (10g: 7 days) Manual snapshot creation is possible – Default snap_level is ‘TYPICAL’ Performance Analysis 38 EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT(); EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT(flush_level=>'ALL'); SELECT snap_id, begin_interval_time, flush_elapsed, snap_level FROM dba_hist_snapshot; SNAP_ID BEGIN_INTERVAL_TIME FLUSH_ELAPSED SNAP_LEVEL ---------- ------------------------- ------------------- ---------- 569 10-JUN-10 11.00.46.505 AM +00000 00:00:01.7 1 570 10-JUN-10 11.28.13.310 AM +00000 00:00:01.8 2 38
  • 39. Configuration AWR settings can be modified using Enterprise Manager or the package DBMS_WORKLOAD_REPOSITORY – Settings are bound to a specific database ID – In case of a DBID change a new default record is automatically added The next example shows a configuration change – TOPNSQL defines the number of top SQLs stored per category – The default of 30 SQLs is in some cases too small Performance Analysis 39 SELECT * FROM dba_hist_wr_control; DBID SNAP_INTERVAL RETENTION TOPNSQL ---------- -------------------- -------------------- ---------- 3693619282 +00000 01:00:00.0 +00008 00:00:00.0 DEFAULT EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(TOPNSQL=>100); DBID SNAP_INTERVAL RETENTION TOPNSQL ---------- -------------------- -------------------- ---------- 3693619282 +00000 01:00:00.0 +00008 00:00:00.0 100 39
  • 40. Baselines A baseline contains performance data from a specific time period – it is defined as a range of snapshots that are excluded from the purging process – baseline snapshots are retained indefinitely – the data is preserved for future comparison with other periods Baselines can be created directly – static (fixed) baselines – correspond to a contiguous time period in the past Baselines can be automatically created by baseline templates – Templates define a future time period – “single template” creates one baseline for a defined time period – “repeating template” creates and drops baselines based on a repeating time schedule (e.g. every Friday morning) Performance Analysis 40 40
  • 41. Baseline Creation Creating a static baseline for the past three hours with an expiration of one year – default expiration is null (indefinite) Baselines can be renamed and dropped Performance Analysis 41 BEGIN DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE( start_time=>sysdate-3/24,end_time=>sysdate, baseline_name=>'Normal workload',expiration=>365); END; / SELECT BASELINE_TYPE, START_SNAP_ID, END_SNAP_ID, EXPIRATION FROM dba_hist_baseline WHERE baseline_name='Normal workload'; BASELINE_TYPE START_SNAP_ID END_SNAP_ID EXPIRATION ------------- ------------- ----------- ---------- STATIC 566 569 365 41
  • 42. Baseline Metrics Average, minimum and maximum values of all metrics stored in a baseline’s snapshots can be selected Performance Analysis 42 SELECT metric_name,average,"MINIMUM","MAXIMUM" FROM table(DBMS_WORKLOAD_REPOSITORY.SELECT_BASELINE_METRIC( 'Normal workload')); METRIC_NAME AVERAGE MINIMUM MAXIMUM ------------------------------ ---------- ---------- ---------- Average Active Sessions .003979012 0 .100404983 Buffer Cache Hit Ratio 99.7082894 0 100 CPU Usage Per Sec .24042959 0 3.773565 Database Time Per Sec .397901245 0 10.0404983 Logical Reads Per Sec 20.0128217 0 188.166667 Logons Per Sec .058639403 0 .15 Physical Reads Per Sec .261634595 0 47.6302918 ... 158 rows selected. 42
  • 43. Baseline Templates Creating a repeating template ‘Friday morning’ – Validity one year – Expiration of created baselines 90 days Performance Analysis 43 BEGIN DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE( day_of_week => 'FRIDAY', hour_in_day => 8, duration => 2, start_time => sysdate, end_time => sysdate+365, baseline_name_prefix => 'fr_8_10_', template_name => 'Friday morning', expiration => 90); END; / SELECT repeat_interval FROM DBA_HIST_BASELINE_TEMPLATE; REPEAT_INTERVAL ----------------------------------------------------------------- FREQ=WEEKLY;INTERVAL=1;BYDAY=FRI;BYHOUR=8;BYMINUTE=0;BYSECOND=0 43
  • 44. AWR Repository Views There exist around 100 AWR views; most interesting are listed here – DBA_HIST_ACTIVE_SESS_HISTORY ASH samples – DBA_HIST_BASELINE Existing baselines – DBA_HIST_BASELINE_TEMPLATE Baseline templates – DBA_HIST_COLORED_SQL Colored SQL statements – DBA_HIST_FILESTATXS Datafile statistics – DBA_HIST_MEMORY_TARGET_ADVICE Memory Target advisory – DBA_HIST_PARAMETER Initialization parameters – DBA_HIST_SEG_STAT Segment statistics – DBA_HIST_SEG_STAT_OBJ Segment names – DBA_HIST_SNAPSHOT Existing snapshots – DBA_HIST_SQLSTAT SQL statistics – DBA_HIST_SQLTEXT SQL text – DBA_HIST_SYSMETRIC_HISTORY System metrics – DBA_HIST_SYSMETRIC_SUMMARY MIN, MAX, AVG, STDDEV over a longer snapshot interval – DBA_HIST_SYSSTAT System statistics – DBA_HIST_SYS_TIME_MODEL Time model statistics – DBA_HIST_TBSPC_SPACE_USAGE TS space usage – DBA_HIST_TEMPSTATXS Tempfile statistics – DBA_HIST_WR_CONTROL AWR configuration Performance Analysis 44 44
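The AWR views listed above can be queried directly. As an illustrative sketch (not from the slides), the metric history in DBA_HIST_SYSMETRIC_SUMMARY yields the Average Active Sessions trend without computing deltas yourself, since 'Database Time Per Sec' is recorded in centiseconds per second (Diagnostic Pack license required):

```sql
-- Average Active Sessions per snapshot interval from AWR metric history.
SELECT TO_CHAR(begin_time, 'DD-MON HH24:MI') AS begin_time,
       ROUND(average / 100, 2) AS avg_active_sessions,  -- cs/s -> s/s
       ROUND(maxval  / 100, 2) AS max_active_sessions
FROM   dba_hist_sysmetric_summary
WHERE  metric_name = 'Database Time Per Sec'
ORDER  BY begin_time;
```

This is the AWR counterpart of the Statspack "DB time(s/s)" query shown earlier.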
  • 45. Reports AWR reports can be created in Enterprise Manager or by calling SQL*Plus scripts in ?/rdbms/admin Following report types can be created – Single period single instance • Scripts awrrpt.sql (current instance), awrrpti.sql (selected instance) – Single period multiple instance • Scripts awrgrpt.sql (all instances), awrgrpti.sql (selected instances) – Compare periods single instance • Scripts awrddrpt.sql (current instance), awrddrpi.sql (selected instance) – Compare periods multiple instance • Scripts awrgdrpt.sql (all instances), awrgdrpi.sql (selected instances) – SQL reports (single period single instance) • awrsqrpt.sql, awrsqrpi.sql Reports can be created in HTML or text format Performance Analysis 45 45
  • 46. Reports Reports can also be created by directly calling the package DBMS_WORKLOAD_REPOSITORY Examples – Single period report for DBID 3693619282, instance 1, snapshot ID period 566-567 – Compare period report shows differences between periods 566-567 and 568-569 SQL> SELECT output FROM TABLE (DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(3693619282,1,566,567)); SQL> SELECT output FROM TABLE (DBMS_WORKLOAD_REPOSITORY.AWR_DIFF_REPORT_TEXT (3693619282,1,566,567,3693619282,1,568,569)); 46 Performance Analysis 46
  • 47. 47 AWR Report Sections Performance Analysis 47  Summary Page  Load Profile  Instance Efficiency  Top 5 Wait Events  Host CPU, Instance CPU and Memory  OS Statistics  Time Model Statistics  Wait Events and Wait Event Histograms  Service Statistics  Service Wait Class  Top SQL  System Statistics  IO Stats by Function  IO Stats by Filetype  Tablespace and File IO  Memory Advisories  Enqueue Activity  Latch Activity  Mutex Sleeps  Top Segments (14 categories)  SGA Memory Summary  Memory Resize Operations  AQ Statistics  Shared Server Statistics  Instance Parameters  Use of ADDM Reports alongside AWR (12c) (bold sections are not available on Statspack reports)
  • 48. AWR Report – 12c Container/PDB information is visible in just five sections: – SQL Statistics – Tablespace I/O Stats – File I/O Stats – Segment Statistics – Instance Parameters New in 12c: a corresponding ADDM report at the end of the AWR report Performance Analysis 48 48
  • 49. Compare Period Reports Such reports begin with the comparison of the host configuration, SGA configuration and the workload Performance Analysis 49 49
  • 50. Checklist: Performance Issues 50 17.05.2022 Tools: Active Session History ASH
  • 51. ASH Parameters Initialization parameter “_ash_enable” controls whether ASH is enabled The sampling interval is controlled by “_ash_sampling_interval” – Default is 1000 (ms) Following query shows current retention and amount of available ASH data in memory – The retention depends on the activity of sessions – As only active sessions are sampled, the KPI “AAS” (Average Active Sessions) can be easily computed Performance Analysis 51 SELECT min(sample_time), max(sample_time), count(*), count(*)/(max(sample_id)-min(sample_id)+1) as AAS FROM v$active_session_history; MIN(SAMPLE_TIME) MAX(SAMPLE_TIME) COUNT(*) AAS ------------------ ------------------- -------- ---------- 09-JUN-10 07.38.40 09-JUN-10 11.07.12 1868 .1492966 51
  • 52. ASH Data ASH data is stored in a circular buffer in SGA – Around 150 bytes per sample row – One permanent active session needs around 500kB per hour The contents of the buffer are flushed to disk during an AWR snapshot or when the buffer is 2/3 full – every 10th sample is written to the table WRH$_ACTIVE_SESSION_HISTORY (view DBA_HIST_ACTIVE_SESS_HISTORY) – This is controlled by parameter “_ash_disk_filter_ratio” (default 10) Performance Analysis 52 SELECT min(sample_time), max(sample_time), (max(sample_id) - min(sample_id) + 1)/10 as samples, count(*)/(max(sample_id)-min(sample_id)+1)*10 as AAS FROM dba_hist_active_sess_history; MIN(SAMPLE_TIME) MAX(SAMPLE_TIME) SAMPLES AAS ------------------ ------------------- -------- ---------- 28-MAY-10 12.38.40 09-JUN-10 11.07.12 13075 .1663949 52
  • 53. ASH Memory Allocated ASH memory can be obtained from v$sgastat – 11gR2 uses around 20% more memory than 10gR2 (new attributes) ASH sampling is also available for Active Data Guard physical standby instances and Automatic Storage Management (ASM) instances – data is collected and displayed in V$ACTIVE_SESSION_HISTORY – data is not written to WRH$_ACTIVE_SESSION_HISTORY Performance Analysis 53 SELECT * FROM v$sgastat WHERE name='ASH buffers'; POOL NAME BYTES ------------ -------------------------- ---------- shared pool ASH buffers 4194304 53
  • 54. ASH Contents ASH data is multi-dimensional and can be used to find – Top sessions – Top SQLs / execution plans – Top PL/SQL objects – Top programs – Top clients – Top modules and actions – Top services – Top blockers – Top wait events – Top transaction – Top resource manager consumer groups – Top PGA memory consumers – Top temporary tablespace consumers Performance Analysis 54 54
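Each of these "top" dimensions boils down to grouping ASH samples by the respective column. As an illustrative sketch (not from the slides), the top SQL of the last hour, approximated by sample count, can be found like this (Diagnostic Pack license required; FETCH FIRST needs 12c):

```sql
-- Top SQL by ASH sample count over the last hour.
-- With 1-second sampling, one sample ~ one second of DB time.
SELECT sql_id,
       COUNT(*) AS samples,
       SUM(CASE WHEN session_state = 'ON CPU' THEN 1 ELSE 0 END) AS cpu_samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 1/24
AND    sql_id IS NOT NULL
GROUP  BY sql_id
ORDER  BY samples DESC
FETCH FIRST 10 ROWS ONLY;
```

Swapping sql_id for session_id, module, program, service_hash or blocking_session yields the other dimensions in the list above.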
  • 55. ASH Contents Analysis of ASH contents is based on the number of samples – It is assumed that 1 sample equals 1 second of DB time • if the sampling interval is 1 second (default) – Important metrics • accumulated DB time and CPU time • number of Read/Write- I/O, I/O requests and I/O bytes – These metrics could be used to rank the activity not only by DB time but also by their CPU or I/O usage Newer ASH samples include – PGA allocated in bytes at sample time – TEMP tablespace usage at sample time • This feature could be very useful when trying to find out which session used a lot of TEMP space in the past – Time Model information (e.g. connection mgmt, hard parse, SQL execution) Performance Analysis 55 55
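The TEMP-usage use case mentioned above can be sketched against the on-disk ASH history (TEMP_SPACE_ALLOCATED and PGA_ALLOCATED are available from 11.2 onwards; the one-day window is illustrative):

```sql
-- Which sessions held the most TEMP space at any sampled point
-- during the last day? Diagnostic Pack license required.
SELECT session_id, session_serial#,
       ROUND(MAX(temp_space_allocated)/1024/1024) AS max_temp_mb,
       ROUND(MAX(pga_allocated)/1024/1024)        AS max_pga_mb
FROM   dba_hist_active_sess_history
WHERE  sample_time > SYSDATE - 1
GROUP  BY session_id, session_serial#
ORDER  BY max_temp_mb DESC
FETCH FIRST 10 ROWS ONLY;
```

Keep in mind that only every 10th sample reaches the on-disk history, so short TEMP spikes may be missed.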
  • 56. ASH Contents – 12c New releases bring new features, new information, new attributes… Since 12.1, ASH samples include – Container/PDB information (CON_ID, CON_DBID) – DB Replay and capture information (IS_CAPTURED, IS_REPLAYED, …) – In-Memory data (IN_INMEMORY_QUERY, …, since 12.1.0.2) – DBOP_NAME (database operation name, e.g. ‘Database Pump Job’; NULL means SQL) Performance Analysis 56 56
  • 57. ASH Report – 12c Container/PDB Information visible in just two sections: – Top Containers – Top SQL Performance Analysis 57 57
  • 58. ASH Reports ASH reports are an easy way to analyze ASH data ASH reports can be generated using Enterprise Manager or using SQL*Plus scripts located in ?/rdbms/admin – ashrpt.sql reports all activity within the specified period – ashrpti.sql allows • to filter for a specific session, SQL, wait class, service hash, module & action names, client identifier, PL/SQL entries • to enter multiple instances in order to create a RAC report – both scripts interactively ask for parameters and call the function ASH_REPORT_TEXT or ASH_REPORT_HTML in package DBMS_WORKLOAD_REPOSITORY Performance Analysis 58 58
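The report functions can also be called directly, bypassing the interactive scripts. A sketch for a text report covering the last hour (the DBID is the example value used elsewhere in these slides; substitute your own, e.g. from V$DATABASE):

```sql
-- Text ASH report for the last hour of instance 1 (illustrative values).
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.ASH_REPORT_TEXT(
         l_dbid     => 3693619282,
         l_inst_num => 1,
         l_btime    => SYSDATE - 1/24,
         l_etime    => SYSDATE));
```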
  • 59. ASH Report Sections ASH reports perform ranking by the highest percentages of ASH samples and are divided into the following sections – Load Profile (Average Active Sessions, Avg. Active Session per CPU) – Top Events (User events, background events and event parameters) – Top Containers – Top Service/Module – Top Phases of Execution (time model) – Top SQL with Top Events – Top SQL using literals – Top Parsing Module/Action – Top PL/SQL Procedures – Top Java Workloads – Top Sessions (including Event, Program and number of distinct TX-IDs) – Top Blocking Sessions – Top Sessions running parallel operations – Top Objects (Application, Cluster, User I/O and buffer busy waits only) – Top Latches – Activity Over Time Performance Analysis 59 59
  • 60. Section Activity Over Time This section divides the analysis period into smaller time slots – typically 10 slots; the slot width can be specified when using ashrpti.sql • “Specify Slot Width in seconds to use in the 'Activity Over Time' section” The top 3 events are reported in each of those slots – 'Slot Count' is the number of samples in that slot – 'Event Count' is the number of samples waiting for that event – '% Event' is 'Event Count' over all samples in the analysis period Performance Analysis 60
Slot Time (Duration)  Slot Count Event                           Event Count  % Event
--------------------  ---------- ------------------------------  -----------  -------
14:30:00 (1.0 min)           204 log file switch (checkpoint in          148    11.59
                                 CPU + Wait for CPU                       31     2.43
                                 log file switch completion               10     0.78
14:31:00 (1.0 min)           622 log file switch (checkpoint in          361    28.27
                                 CPU + Wait for CPU                      168    13.16
                                 log file parallel write                  21     1.64
60
  • 61. Checklist: Performance Issues 61 17.05.2022 Tools: Automatic DB Diagnostic Monitor ADDM
  • 62. 62 Automatic Database Diagnostic Monitor (ADDM) ADDM diagnoses the root causes of performance problems ADDM analysis is based on a pair of AWR snapshots (the period) Analysis is performed each time an AWR snapshot is taken – the analysis period is defined by the last two snapshots – the results are saved in the database The goal of ADDM is to reduce DB Time – ADDM outputs quantified recommendations – Recommendations are sorted by DB time savings Performance Analysis 62
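  An ADDM run over an arbitrary pair of AWR snapshots can also be triggered manually; a sketch (the snapshot IDs 100/101 and the task name are examples):

```sql
-- Run ADDM over snapshots 100..101 and print the text report
VARIABLE tname VARCHAR2(60)
BEGIN
  :tname := 'manual_addm_demo';               -- example task name
  dbms_addm.analyze_db(task_name      => :tname,
                       begin_snapshot => 100, -- example snapshot IDs
                       end_snapshot   => 101);
END;
/
SET LONG 1000000 PAGESIZE 0
SELECT dbms_addm.get_report(:tname) FROM dual;
```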
  • 63. Considered Areas ADDM considers the following problem areas – CPU bottlenecks – is this database using the CPU, or are others? – Undersized memory structures – I/O capacity issues – High load SQL statements, PL/SQL execution and compilation – High load Java usage – RAC specific issues – global cache hot blocks and objects, interconnect latency issues? – Sub-optimal use of the database by the application – poor connection management, excessive parsing or application level lock contention? – Database configuration issues – Concurrency issues – buffer busy problems? – Hot objects Performance Analysis 63 63
  • 64. ADDM Results ADDM analysis results are represented as a set of findings Findings can belong to the following types – Problem findings – quantified by its impact (portion of DB time) and possibly associated with a list of recommendations • Database configuration: changing initialization parameter settings • Schema changes: hash partitioning a table or index, or using automatic segment-space management (ASSM) • Application changes: using the cache option for sequences or using bind variables • Using other advisors: running SQL Tuning Advisor or the Segment Advisor • Hardware changes: adding CPUs or changing the I/O subsystem configuration – Symptom findings - information that may lead to problem findings – Information findings - relevant for understanding the situation – Warning findings - about problems that may affect the completeness or accuracy of the analysis Performance Analysis 64 64
  • 65. ADDM Setup ADDM is enabled if – the parameter CONTROL_MANAGEMENT_PACK_ACCESS is DIAGNOSTIC or DIAGNOSTIC+TUNING – the parameter STATISTICS_LEVEL is not BASIC The advisor parameter DBIO_EXPECTED influences the analysis of the I/O performance – it describes the expected average time for a single block read operation in microseconds – the default value is 10 milliseconds and should only be adjusted if the underlying hardware performs significantly differently Performance Analysis 65 SQL> EXEC DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER( - 'ADDM','DBIO_EXPECTED', 5000); SQL> SELECT parameter_name,parameter_value FROM dba_advisor_def_parameters WHERE advisor_name='ADDM' AND parameter_name='DBIO_EXPECTED'; PARAMETER_NAME PARAMETER_VALUE -------------------- -------------------- DBIO_EXPECTED 5000 65
  • 66. ADDM Reports Reports can be created – in Enterprise Manager – by calling SQL*Plus scripts in ?/rdbms/admin • @addmrpt.sql: reports on the current instance • @addmrpti.sql: prompts for a DBID and an instance number – by calling the functions • DBMS_ADDM.GET_REPORT or • DBMS_ADVISOR.GET_TEXT_REPORT (TYPE=>'ADDM') A report contains the following sections – Analysis Period & Target – Activity During the Analysis Period (DB time and AAS) – Summary of Findings (with activity percentage) – Findings and Recommendations – Additional Information Performance Analysis 66 66
  • 67. Running addmrpt.sql Performance Analysis 67 SQL> @?/rdbms/admin/addmrpt … Activity During the Analysis Period ----------------------------------- Total database time was 2447 seconds. The average number of active sessions was .68. Summary of Findings ------------------- Description Active Sessions Recommendations Percent of Activity ---------------------- --------------------- --------------- 1 Top SQL Statements .67 | 98.33 5 2 Log File Switches .4 | 58.4 2 3 "Other" Wait Class .03 | 4.95 0 4 "Concurrency" Wait Class .03 | 4.46 0 … Finding 1: Top SQL Statements Impact is .67 active sessions, 98.33% of total activity. -------------------------------------------------------- SQL statements consuming significant database time were found. 67
  • 68. Running GET_REPORT SQL> SELECT task_name, how_created FROM dba_addm_tasks WHERE created>sysdate-1/24; TASK_NAME HOW_CREATED ------------------------------ ------------------------------ ADDM:33378954_1_56 AUTO SQL> set long 1000000 pagesize 0 longchunksize 1000 SQL> SELECT dbms_addm.get_report('ADDM:33378954_1_56') FROM dual; ... Summary of Findings ------------------- Description Active Sessions Recom Percent of Activity mend. ------------------------------------- ------------------- ----- 1 Top SQL Statements .6 | 95.78 5 2 "Other" Wait Class .19 | 30.21 0 3 I/O Throughput .05 | 7.29 2 4 Top Segments by "User I/O" .03 | 4.99 1 5 Unusual "Other" Wait Event .03 | 4.81 4 6 Shared Pool Latches .02 | 3.85 0 7 Buffer Cache Latches .02 | 3.21 1 8 Unusual "Other" Wait Event .02 | 2.43 1 68 Performance Analysis 68
  • 69. ADDM Views DBA_ADDM_FINDINGS – contains a subset of the findings displayed in DBA_ADVISOR_FINDINGS – can be queried in order to find out if there exist any findings DBA_ADVISOR_RECOMMENDATIONS – displays the results of completed diagnostic tasks with recommendations Performance Analysis 69 SQL> SELECT task_name, finding_name, type FROM dba_addm_findings WHERE finding_name!='normal, successful completion' ORDER BY task_id; TASK_NAME FINDING_NAME TYPE ------------------------- ------------------------------ ---------- ADDM:3333478954_1_56 Log File Switches PROBLEM ADDM:3333478954_1_56 "Configuration" Wait Class SYMPTOM ADDM:3333478954_1_56 Top SQL Statements PROBLEM 69
  • 70. ADDM Views DBA_ADVISOR_FINDING_NAMES – contains all possible findings Performance Analysis 70 SQL> SELECT finding_name FROM DBA_ADVISOR_FINDING_NAMES WHERE advisor_name='ADDM' ORDER BY id; FINDING_NAME ---------------------------------------- "Administrative" Wait Class "Application" Wait Class "Cluster" Wait Class ... Undersized instance memory Top SQL Statements Top Segments by "User I/O" and "Cluster" Buffer Busy - Hot Block Buffer Busy - Hot Objects 83 rows selected. 70
  • 71. Every 3 seconds, the MMON process obtains and checks performance statistics If it detects any of the following issues, it triggers a real-time ADDM analysis – High load – I/O bound – CPU bound – Over-allocated memory – Interconnect bound – Session limit – Process limit – Hung session – Deadlock detected Performance Analysis 71 Real-Time ADDM
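  Reports produced by Real-Time ADDM land in the AWR report repository; a sketch for listing and retrieving them (view and package as documented for 12c; the component filter value and the example REPORT_ID are assumptions):

```sql
-- List stored Real-Time ADDM / performance reports
SELECT report_id, instance_number, generation_time
FROM   dba_hist_reports
WHERE  component_name = 'perf'   -- assumed component name for Real-Time ADDM
ORDER  BY generation_time DESC;

-- Fetch one report as text (42 is an example REPORT_ID from the query above)
SELECT dbms_auto_report.report_repository_detail(rid => 42, type => 'text')
FROM   dual;
```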
  • 72. Real-Time ADDM in EM DB Express 72
  • 73. Checklist: Performance Issues 73 17.05.2022 Common Measures
  • 74. Common Measures Gather New Statistics – Database Statistics – Data Dictionary Statistics – Fixed Object Statistics – System Statistics Check Memory Advisors – SGA-Size – PGA-Size – Shared Pool Size – DB-Cache-Size Checklist: Performance Issues 74 17.05.2022
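  The four statistics types listed above map to DBMS_STATS calls; a sketch with defaults for all options, 'NOWORKLOAD' being just one possible mode for system statistics — schedule these with care on a loaded system:

```sql
BEGIN
  dbms_stats.gather_database_stats;              -- object statistics
  dbms_stats.gather_dictionary_stats;            -- data dictionary statistics
  dbms_stats.gather_fixed_objects_stats;         -- X$ fixed object statistics
  dbms_stats.gather_system_stats('NOWORKLOAD');  -- CPU/I/O system statistics
END;
/
```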
  • 75. 75 SGA Target Advisory - Example This example shows that an increase of the SGA from the current size of 500 MB to 750 MB would result in – a potential reduction of the DB time from 1087453 to 118424 seconds – a potential reduction of the physical reads from 563 million to 68 million Performance Analysis 75 SELECT SGA_SIZE, SGA_SIZE_FACTOR FACTOR, ESTD_DB_TIME, ESTD_DB_TIME_FACTOR DB_TIME_FACTOR, ESTD_PHYSICAL_READS FROM v$sga_target_advice ORDER BY SGA_SIZE; SGA_SIZE FACTOR ESTD_DB_TIME DB_TIME_FACTOR ESTD_PHYSICAL_READS ---------- ---------- ------------ -------------- ------------------- 250 .5 2810957 2.5849 1425938513 375 .75 1821484 1.675 926155240 500 1 1087453 1 563834920 625 1.25 401270 .369 217076444 750 1.5 118424 .1089 68844244 875 1.75 90259 .083 68844244 1000 2 87540 .0805 47418517
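  V$PGA_TARGET_ADVICE offers the analogous what-if view for the PGA; a sketch (column names as documented; an ESTD_OVERALLOC_COUNT greater than 0 indicates the estimated target would be too small):

```sql
SELECT ROUND(pga_target_for_estimate/1024/1024) AS pga_mb,
       pga_target_factor                        AS factor,
       estd_extra_bytes_rw,   -- projected extra bytes read/written
       estd_overalloc_count
FROM   v$pga_target_advice
ORDER  BY pga_target_for_estimate;
```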
  • 76. Checklist: Performance Issues 76 17.05.2022 Analysis with Dynamic Performance Views
  • 77. Detail Analysis with Dynamic Performance Views Analysis tasks that can be performed by querying the dynamic performance views – Check database performance indicators – Identify top sessions – Identify top wait events – Check memory advisories – Identify hot segments – Identify top and important SQL statements – Check server resources • only partially possible over V$-Views – Check optimizer statistics/configuration • see chapter about the Cost Based Optimizer Performance Analysis 77
  • 78. 78 Performance Indicators - Cumulative Select cumulative values of DB time, DB CPU time and compute AAS, #CPUs used and the wait time using Time Model Statistics Performance Analysis 78 SELECT stat_name, value/1000000 seconds, ROUND(value/1000000/((sysdate-startup_time)*24*60*60),2) avg_active FROM v$sys_time_model, v$instance WHERE stat_name like 'DB%' UNION ALL SELECT 'Wait time' stat_name, (dbt.value-dbc.value)/1000000 seconds, NULL FROM (SELECT value FROM v$sys_time_model WHERE stat_name='DB time') dbt, (SELECT value FROM v$sys_time_model WHERE stat_name='DB CPU') dbc; STAT_NAME SECONDS AVG_ACTIVE -------------------- -------- ----------- DB time 35684.97 1.32 (Average Active Sessions) DB CPU 6391.72 .24 (Average #CPUs used) Wait time 29293.25
  • 79. 79 Performance Indicators - Metrics Most important indicators can be found in V$SYSMETRIC Performance Analysis 79 SELECT metric_name, value, metric_unit FROM v$sysmetric WHERE group_id=(SELECT group_id FROM v$metricgroup WHERE name='System Metrics Long Duration') AND metric_id IN (2003,2004,2006,2016,2018,2030,2044,2046,2057,2075,2107,2108,2123); METRIC_NAME VALUE METRIC_UNIT ------------------------- -------- --------------------- CPU Usage Per Sec 469.5218 CentiSeconds Per Second Database CPU Time Ratio 17.47001 % Cpu/DB_Time Database Time Per Sec 2687.589 CentiSeconds Per Second Database Wait Time Ratio 82.52999 % Wait/DB_Time Host CPU Utilization (%) 94.07062 % Busy/(Idle+Busy) Logical Reads Per Sec 127489.2 Reads Per Second Logons Per Sec 3.427 Logons Per Second Physical Reads Per Sec 253.24 Reads Per Second Physical Writes Per Sec 163.4667 Writes Per Second Redo Generated Per Sec 1251923 Bytes Per Second User Transaction Per Sec 25.3145 Transactions Per Second
  • 80. 80 Performance Metric Groups Performance Analysis 80 SELECT g.name, g.interval_size/100 int_sec, max_interval hist#, count(*) metric# FROM v$metricgroup g,v$metricname m WHERE g.group_id=m.group_id GROUP BY g.name, g.interval_size, g.max_interval ORDER BY 1; NAME INT_SEC HIST# METRIC# ---------------------------- ------ ------ ------- Event Class Metrics 60 60 6 Event Metrics 60 1 5 File Metrics Long Duration 600 6 6 I/O Stats by Function Metrics 60 60 10 Resource Manager Stats 60 60 9 Service Metrics 60 60 5 Service Metrics (Short) 5 24 5 Session Metrics Long Duration 60 60 1 Session Metrics Short Duration 15 1 10 System Metrics Long Duration 60 60 158 -- history 3600s System Metrics Short Duration 15 12 47 -- history 180s Tablespace Metrics Long Duration 60 0 2 ...
  • 81. 81 Performance Statistics (metrics) This example shows the short term history of the system metric “Physical Reads Per Sec” (long duration) Performance Analysis 81 SELECT TO_CHAR(begin_time,'HH24:MI:SS') time,ROUND(value,2) value FROM v$sysmetric_history WHERE group_id=(SELECT group_id FROM v$metricgroup WHERE name='System Metrics Long Duration') AND metric_name = 'Physical Reads Per Sec' ORDER BY begin_time; TIME VALUE -------- ---------- 14:25:25 60.19 14:26:25 42.93 14:27:25 13.92 ... 15:23:25 124.18 15:24:25 18.61 15:25:25 16.95 61 rows selected.
  • 82. 82 Performance Indicators - Metrics Performance Analysis 82 – All statistics in the metric group 'System Metrics Long Duration' are refreshed with an interval of 60 seconds – The metric group 'System Metrics Short Duration' is refreshed every 15 seconds but does not contain all metrics included in the group 'System Metrics Long Duration' – V$SYSMETRIC_HISTORY contains the last 60 intervals of 'System Metrics Long Duration' metrics and 12 intervals of 'System Metrics Short Duration' – Warning: the METRIC_IDs may change in future releases
  • 83. 83 Performance Indicators - Interpretation Performance Analysis 83 – If host CPU utilization is over 70%, then focus on the CPU usage of all database instances running on this server – A low DB CPU time ratio (<40%) means a lot of waits => check the wait classes – Total physical I/O: physical reads + physical writes > 5000/s => the system could be I/O bound, find the cause – Logons per second > 1 => check connection pooling / the application interface – Redo per second > 1000000 bytes (1 MB/s = 3.6 GB/h = 86.4 GB/day) => check data loads – A hard parse rate > 10/s may result in high latch contention
  • 84. 84 Identification of Top Sessions Performance Analysis 84 SELECT session_id, ROUND(cpu/intsize_csec,2) cpus_used, ROUND(physical_reads/intsize_csec*100) preads /*per second*/, ROUND(pga_memory/1024/1024,2) pga_mb, program FROM v$sessmetric m, v$session s WHERE m.session_id=s.sid AND (m.cpu>10 OR m.physical_reads>10) ORDER BY cpus_used; SESSION_ID CPUS_USED PREADS PGA_MB PROGRAM ---------- ---------- ---------- ---------- ----------------------- 9906 .04 3 1.65 perl@srv1 5638 .05 0 1.65 JDBC Thin Client 9428 .08 374 4.08 oe331.exe 8089 .29 1004 11.9 sqlplus@srv1 3707 .36 2333 95.13 oe331.exe 7398 .71 0 1.58 CTEXT.exe – Top sessions can be identified by querying V$SESSMETRIC – Data is refreshed every 15 seconds – Other V$SESSION columns give more details about the session
  • 85. 85 Identification of Top Sessions Performance Analysis 85 SELECT session_id, ROUND(cpu/intsize_csec,2) cpus_used, ROUND(physical_reads/intsize_csec*100) ph_reads /*per second*/, SUBSTR(sql_text,1,50) sql_text FROM v$sessmetric m, v$session s, v$sqlstats sql WHERE m.session_id=s.sid AND s.sql_id=sql.sql_id(+) AND (m.cpu>10 OR m.physical_reads>10) ORDER BY ph_reads DESC; SESSION_ID CPUS_USED PH_READS SQL_TEXT ---------- ---------- ---------- --------------------------------- 4966 .48 3309 BEGIN kutil2.export... 9428 .21 1993 SELECT COUNT (COMP_ID) FROM... 8585 .04 153 BEGIN kutil2.export... 4412 .11 98 SELECT DISTINCT COMP_ID FROM... ...  V$SQLSTATS can be joined in order to retrieve the last SQL command executed by the session  Results are sorted by the number of physical reads per second
  • 86. 86 Identification of Top Sessions Performance Analysis 86 – V$SESSMETRIC contains only selected session statistics • CPU, physical reads, logical reads, PGA memory and the number of hard and soft parses – In order to rate sessions by other criteria (e.g. redo size), the session statistics have to be saved temporarily (e.g. in a table) so that the metric can be computed manually – Warning: statistic#s may change in future releases CREATE GLOBAL TEMPORARY TABLE my_sesstat (sid NUMBER, value NUMBER, dt DATE) ON COMMIT PRESERVE ROWS; INSERT into my_sesstat SELECT sid,value,sysdate FROM v$sesstat where statistic#=134 /*redo size*/; <WAIT A MINUTE> INSERT into my_sesstat SELECT sid,value,sysdate FROM v$sesstat where statistic#=134 /*redo size*/;
  • 87. 87 Identification of Top Wait Events Performance Analysis 87 SELECT SUBSTR(name,1,30) name, num_sess_waiting sess#, ROUND(time_waited/100,2) time_s, wait_count waits, ROUND(time_waited_fg/100,2) time_fg_s, wait_count_fg waits_fg FROM v$event_name n, v$eventmetric m WHERE n.event#=m.event# AND n.wait_class!='Idle' AND time_waited>0 ORDER BY time_waited DESC; NAME SESS# TIME_S WAITS TIME_FG_S WAITS_FG ---------------------------- ------ ------ ------ --------- ------- latch: enqueue hash chains 2 50.25 4499 50.25 4499 log file parallel write 0 40.64 146552 0 0 latch: cache buffers chains 0 37.42 3635 37.4 3634 latch free 0 26.15 2651 26.15 2649 latch: cache buffers lru chain 3 10.11 1017 10.1 1016 ...  Top wait events of last 60 seconds, refreshed every minute  SESS#: number of sessions waiting at the end of the interval  FG: foreground sessions
  • 88. 88 Identification of Top Wait Events Performance Analysis 88 SELECT begin_time, wait_class, ROUND(dbtime_in_wait) "DBTIME%", ROUND(m.time_waited/100,2) time_s, m.wait_count waits, ROUND(m.time_waited_fg/100,2) time_fg_s, m.wait_count_fg waits_fg FROM v$system_wait_class n, v$waitclassmetric_history m WHERE n.wait_class#=m.wait_class# AND m.time_waited>100 AND wait_class!='Idle' ORDER BY begin_time, m.time_waited; BEGIN_TIME WAIT_CLASS DBTIME% TIME_S WAITS TIME_FG_S WAITS_FG ------------ ----------- ------- ------- ------- --------- -------- ... 14-OCT 15:23 System I/O 8 45.45 178801 0 0 14-OCT 15:23 Concurrency 13 76.49 8227 76.33 8223 14-OCT 15:23 User I/O 21 123.23 425747 116.66 0 14-OCT 15:24 System I/O 8 48.9 171067 0 0 14-OCT 15:24 Concurrency 12 74.66 7794 74.54 7789 14-OCT 15:24 User I/O 21 124.18 424620 118.42 0  Wait class history (one hour retention), refreshed every minute  DBTIME% Percent of database time spent in the wait
  • 89. 89 Identification of Top Wait Events Current waits per session – Refreshed immediately Performance Analysis 89 SELECT sid, event, state, wait_time_micro wait_us,time_since_last_wait_micro no_wait_us FROM v$session WHERE type!='BACKGROUND' AND status='ACTIVE'; SID EVENT STATE WAIT_US NO_WAIT_US ----- ------------------------------ ------------------- ------- ---------- 13 db file scattered read WAITED KNOWN TIME 1924 15119 15 latch: cache buffers lru chain WAITED KNOWN TIME 10328 86844 17 latch: enqueue hash chains WAITED KNOWN TIME 5257 31407 22 latch: cache buffers chains WAITING 95 0 24 latch: enqueue hash chains WAITING 15471 0 125 SQL*Net message to client WAITED SHORT TIME 2 104 141 db file sequential read WAITING 520 0 146 db file sequential read WAITING 2273 0 148 db file sequential read WAITING 3683 0 149 direct path read WAITED KNOWN TIME 4035 4645 150 direct path write temp WAITING 363 0
  • 90. 90 Wait State and Wait Time (V$SESSION) Possible wait states are – WAITING - session is currently waiting – WAITED UNKNOWN TIME - duration of the last wait is unknown (when the parameter TIMED_STATISTICS is set to FALSE) – WAITED SHORT TIME - last wait was less than a hundredth of a second – WAITED KNOWN TIME - duration of the last wait is specified in the WAIT_TIME column Wait time columns – WAIT_TIME - duration of the last wait in hundredths of a second (0 if the session is currently waiting) – SECONDS_IN_WAIT - time waited for the current event or amount of time since the start of the last wait – 11g: WAIT_TIME_MICRO - amount of time waited (in microseconds) in the current or last wait – 11g: TIME_SINCE_LAST_WAIT_MICRO - time elapsed since the end of the last wait (in microseconds); 0 if the session is currently in a wait Performance Analysis 90
  • 91. 91 Identification of Top Wait Events All wait events of a particular session Performance Analysis 91 SELECT sid, event, total_waits waits, ROUND(time_waited/100,2) time_sec, ROUND(average_wait*10,1) avg_ms FROM v$session_event WHERE sid = 15 ORDER BY time_waited DESC; SID EVENT WAITS TIME_SEC AVG_MS ------ ----------------------------- ------ -------- ------- 15 db file sequential read 22700 61.59 2.7 15 latch: cache buffers chains 2115 20.52 9.7 15 latch: row cache objects 1344 13.03 9.7 15 cursor: pin S 302 3.32 11.0 15 latch: shared pool 137 .89 6.5 15 log file switch (checkpoint in 12 .65 54.2 ...
  • 92. 92 Identification of Top Wait Events All non-idle waits since the last instance startup Performance Analysis 92 SELECT wait_class, event, ROUND(time_waited/100,2) time_s, total_waits waits, ROUND(average_wait*10,1) avg_ms FROM v$system_event WHERE wait_class!='Idle' ORDER BY time_waited DESC; WAIT_CLASS EVENT TIME_S WAITS AVG_MS ----------- ------------------------------ -------- -------- -------- User I/O db file sequential read 3100.82 500132 6.2 Other latch: enqueue hash chains 2419.66 219071 11 System I/O log file parallel write 2000.32 7792650 .3 Concurrency latch: cache buffers chains 1933.61 195386 9.9 Concurrency latch: row cache objects 1255.14 125995 10 Other latch: cache buffers lru chain 486.85 51686 9.4 Concurrency cursor: pin S 315.55 28213 11.2
  • 93. 93 Identify Hot Segments Hot segments are objects with a high number of physical reads, logical reads, buffer busy waits or row lock waits They can be identified by viewing the segment statistics The statistics values should be related to – the size of the segment (columns BYTES or BLOCKS in DBA_SEGMENTS) – the uptime of the instance (values are cumulative) Hot segments in a specific time interval are shown in Statspack and AWR reports – unfortunately not related to the segment size Performance Analysis 93
  • 94. 94 Identify Hot Segments Using segment statistics to find top segments in the category “physical reads” Performance Analysis 94 SELECT owner, object_name, object_type, value FROM (SELECT * FROM v$segment_statistics WHERE statistic_name='physical reads' ORDER BY value DESC) WHERE rownum<5 ORDER BY value DESC; OWNER OBJECT_NAME OBJECT_TYPE VAL ---------- -------------------- -------------------- ---------- SH CUSTOMERS TABLE 293008 SH COSTS_PROD_BIX INDEX PARTITION 95280 SH COSTS TABLE PARTITION 56112 SH SALES TABLE PARTITION 23680
  • 95. 95 Identify Hot Segments Every block of the relatively small table CONTRACT is read from the disk 1694 times per day! Performance Analysis 95 SELECT seg.owner||'.'||seg.segment_name as name, ROUND(bytes/1024/1024) size_mb, ROUND(value/(sysdate-startup_time)) preads_per_day, ROUND(value/blocks/(sysdate-startup_time)) preads_per_block_and_day FROM dba_segments seg, v$instance, v$segment_statistics segst WHERE seg.owner=segst.owner AND seg.segment_name=segst.object_name AND statistic_name='physical reads' AND NVL(seg.partition_name,'NULL')=NVL(segst.subobject_name,'NULL') ORDER BY value DESC; PREADS_PER_ NAME SIZE_MB PREADS_PER_DAY BLOCK_AND_DAY ----------------------------- ---------- -------------- ------------ ADM21.QUOTATION 592 88177349 1164 ADM21.CONTRACT 61 13223731 1694 ADM21.PK_DESCRIPTION 864 911662 8 ADM21.FK_QUOTATION 72 780679 85 ADM21.PARTNER 32 641029 157
  • 96. 96 Identify Top SQL Top SQL, a.k.a. high load SQL, are statements consuming a lot of CPU, I/O and memory The library cache typically holds thousands of SQL statements – Retention of statements in the library cache depends on many factors such as the size of the cache, the number of distinct statements and their complexity – The retention typically varies between a few minutes and a few hours • depends on the workload • may differ for each V$-view Many V$-views show the contents of the library cache and summarize the consumption of resources per statement or per cursor Performance Analysis 96
  • 97. 97 Identify Top SQL V$SQLSTATS provides statistics for unique combinations of SQL_ID and PLAN_HASH_VALUE – SQL_ID: SQL identifier of the parent cursor in the library cache – PLAN_HASH_VALUE: numeric representation of the SQL plan for this cursor – SQL_FULLTEXT: text for the SQL statement exposed as CLOB – CPU_TIME (in microseconds) for parsing, executing and fetching – ELAPSED_TIME: (in microseconds) for parsing, executing and fetching – DISK_READS: physical reads – BUFFER_GETS: logical reads – SHARABLE_MEM: shared memory (in bytes) currently occupied by all cursors – EXECUTIONS: number of executions that took place since it was brought into the library cache V$SQL contains more details about each cursor – USERS_EXECUTING: number of sessions executing this statement – FIRST_LOAD_TIME: time the cursor was brought into the library cache – OPTIMIZER_COST: cost of this query given by the optimizer Performance Analysis 97
  • 98. 98 Identify Top SQL V$SQL_BIND_CAPTURE displays information on bind variables used in cursors – NAME: of the bind variable – VALUE_STRING: value of the bind represented as a string (one of the values used during a past execution of its associated cursor) – captured are bind variables used in the WHERE or HAVING clauses V$SQL_OPTIMIZER_ENV displays the contents of the optimizer environment used to build the execution plan of a SQL cursor – NAME: of the parameter (e.g. “optimizer_features_enable”) – VALUE: value of the parameter (e.g. “10.2.0.4”) V$SQL_WORKAREA_ACTIVE shows work areas currently allocated – OPERATION_TYPE (SORT, HASH JOIN, GROUP BY, …) – ACTUAL_MEM_USED: PGA memory (in bytes) currently allocated – TEMPSEG_SIZE: size (in bytes) of the temporary segment used Performance Analysis 98
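  A sketch of putting V$SQL_BIND_CAPTURE to use; the SQL_ID is an example taken from the Top SQL listing in this section — substitute your own:

```sql
-- Show captured bind values for one cursor (SQL_ID is an example)
SELECT child_number, name, position, datatype_string, value_string, last_captured
FROM   v$sql_bind_capture
WHERE  sql_id = '9sg53ps2g0ps2'
ORDER  BY child_number, position;
```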
  • 99. 99 Identify Top SQL Statements ordered by their total elapsed_time (=> DB time) Performance Analysis 99 SELECT sql_id, ROUND(cpu_time/1000000,2) "CPU", ROUND(elapsed_time/1000000,2) "Elaps", ROUND(disk_reads) "PhReads", ROUND(buffer_gets) "LogReads", executions "Executes", ROUND((sysdate-last_active_time)*1440) "Minut. Ago" FROM v$sqlstats ORDER BY elapsed_time DESC; SQL_ID CPU Elaps PhReads LogReads Executes Minut. Ago ------------- -------- -------- -------- -------- -------- ---------- 9sg53ps2g0ps2 9500.75 40593.8 27924322 5.11E+08 178 112 3c2nr9k15xccf 5511.38 5514.84 19 4.48E+08 1.89E+08 1 dm333j6hxzm3g 5401.42 5402.68 0 4.45E+08 1.89E+08 1 26ywz70p2ct0u 4971.57 4983.9 69 2.01E+08 48236 0 ajgappu6rtcyv 4737.91 4744.84 1310 1.92E+08 4421 0 2bj1jsb0tt717 2630.46 2636.98 1 2.00E+08 66682196 0 gtxzycka2u5fy 1768.69 1904.29 140315 83788901 718 2 gydrp3fmpsm4f 442.63 891.61 40946 11958948 1 141
  • 100. 100 Identify Top SQL Statements ordered by their average elapsed_time Performance Analysis 100 SELECT sql_id, ROUND(cpu_time/executions/1000000,2) "CPU/ex", ROUND(elapsed_time/executions/1000000,2) "Elaps/ex", ROUND(disk_reads/executions) "PR/ex", ROUND(rows_processed/executions) "Rows/ex", executions "Execs", optimizer_cost "Cost", users_executing "Current" FROM v$sql WHERE executions>0 ORDER BY elapsed_time/executions DESC; SQL_ID CPU/ex Elaps/ex PR/ex Rows/ex Execs Cost Current ------------- ------ -------- ------ ------- ------ ------ ------- gydrp3fmpsm4f 442.63 891.61 40946 1 1 0 0 9sg53ps2g0ps2 53.37 228.3 157027 1 179 0 1 5z0r8s71drxc7 25.18 180.1 69306 300 5 62 0 bjzjsujrpg7s4 2.69 48.82 10466 2057 5 38 0 7w7a2atyp74cy 28.72 48.35 47302 985212 1 11227 0 bm1k2mqxrdncn 1.34 40.15 14099 2 2 20130 0 b0mdju9p7najn 8.68 37.99 132 1 1 0 0 8uqtp9m4su2f6 11.72 21.24 68785 9053 5 337751 1
  • 101. 101 Identify Top SQL Current PGA memory and temporary tablespace usage per statement (in bytes) – workarea_size is the allocated memory Performance Analysis 101 SELECT sql_id, SUM(ACTUAL_MEM_USED) actual_mem_used, SUM(WORK_AREA_SIZE) workarea_size, SUM(TEMPSEG_SIZE) alloc_temp_size FROM v$sql_workarea_active GROUP BY sql_id ORDER BY 2 DESC; SQL_ID ACTUAL_MEM_USED WORKAREA_SIZE ALLOC_TEMP_SIZE ------------- -------------- ------------- --------------- 8j0n5xh1smbs3 529778688 693104640 9736028160 8wr3bvwqwasuf 88411136 98216960 36ksnj22f6vss 5901312 5898240 5583667200 bkzt5zg8r6ac4 5901312 5898240 5426380800 7cccyn1874ah3 692224 692224 48z2qrs9pdv17 145408 178176 67hpxuxsg867w 0 3648512
  • 102. 102 Check Server Resources The average CPU utilization of the database server within the last 15 seconds can be selected directly from V$SYSMETRIC – This metric is reliable and matches the results shown by the system utility mpstat Performance Analysis 102 SQL> !mpstat 15 1 04:15:13 PM CPU %user %nice %sys %iowait %idle intr/s 04:15:28 PM all 26.63 0.00 0.77 0.02 72.11 4513.47 Average: all 26.63 0.00 0.77 0.02 72.11 4513.47 SQL> SELECT value, metric_unit FROM v$sysmetric WHERE metric_name='Host CPU Utilization (%)' AND group_id=(SELECT group_id FROM v$metricgroup WHERE name='System Metrics Short Duration'); VALUE METRIC_UNIT ---------- ------------------------------ 27.9147097 % Busy/(Idle+Busy)
  • 103. 103 Check Server Resources V$OSSTAT shows information about the database server Performance Analysis 103 SELECT stat_name,value FROM v$osstat ORDER BY stat_name; STAT_NAME VALUE ------------------------------ -------------- BUSY_TIME 2405773468 IDLE_TIME 8285566777 IOWAIT_TIME 551614543 LOAD 3.029296875 /* current value */ NICE_TIME 8295 NUM_CPUS 16 /* threads */ NUM_CPU_CORES 8 /* cores */ NUM_CPU_SOCKETS 2 /* sockets */ PHYSICAL_MEMORY_BYTES 67585912832 /* RAM */ RSRC_MGR_CPU_WAIT_TIME 39073870 SYS_TIME 558782034 USER_TIME 1758082647 VM_IN_BYTES 4526080 VM_OUT_BYTES 20391936
  • 104. Checklist: Performance Issues 104 17.05.2022 More information
  • 105. More information .. - MOS notes (1) Diagnosing Performance Issues – How to Investigate Slow or Hanging Database Performance Issues (Doc ID 1362329.1) – Collecting Diagnostic Information For DB Performance Issues (Doc ID 1998964.1) – Diagnostics For Database Performance Issues (Doc ID 781198.1) – How to Use AWR Reports to Diagnose Database Performance Issues (Doc ID 1359094.1) – How to use OS Commands to Diagnose Database Performance Issues? (Doc ID 1401716.1) – How to Collect Diagnostics for Database Hanging Issues (Doc ID 452358.1) – How to Get Historical Session Information in Standard Edition (Doc ID 2055993.1) Avoiding Performance Issues – Avoiding and Resolving Database Performance Related Issues After Upgrade (Doc ID 1528847.1) – Best Practices: Proactively Avoiding Database and Query Performance Issues (Doc ID 1482811.1) – Best Practices: Proactive Data Collection for Performance Issues (Doc ID 1477599.1) Checklist: Performance Issues 105 17.05.2022
  • 106. More information .. - MOS notes (2) Statspack – Systemwide Tuning Using STATSPACK Reports (Doc ID 228913.1) – FAQ- Statspack Complete Reference (Doc ID 94224.1) – Performance overhead when running statspack (Doc ID 396061.1) – Gathering a StatsPack Snapshot (Doc ID 149121.1) – Installing and Configuring StatsPack Package (Doc ID 149113.1) – How To Automate Purging of Statspack Snapshots (Doc ID 464214.1) – Installing and Using Standby Statspack (Doc ID 454848.1) Checklist: Performance Issues 106 17.05.2022
  • 107. More information .. - MOS notes (3) Automatic Workload Repository (AWR) – Performance Diagnosis with Automatic Workload Repository (AWR) (Doc ID 1674086.1) – Automatic Workload Repository (AWR) Reports - Main Information Sources (Doc ID 1363422.1) – Comparing The AWR Report With The Baseline Values (Doc ID 1258564.1) – FAQ: Automatic Workload Repository (AWR) Reports (Doc ID 1599440.1) – How to generate 'Automatic Workload Repository' (AWR), 'Automatic Database Diagnostic Monitor' (ADDM), 'Active Session History' (ASH) reports (Doc ID 2349082.1) Automatic Database Diagnostic Monitor (ADDM) – How to Compare ADDM Reports (Doc ID 2168126.1) – How to Generate and Check an ADDM report (Doc ID 1680075.1) Active Session History (ASH) – Analysis of Active Session History (ASH) Online and Offline (Doc ID 243132.1) Checklist: Performance Issues 107 17.05.2022
  • 108. More information .. Christian Antognini "Troubleshooting Oracle Performance" – Part II "Identification" • Analysis of Reproducible Problems • Real-Time Analysis of Irreproducible Problems • Postmortem Analysis of Irreproducible Problems Checklist: Performance Issues 108 17.05.2022
  • 109. More information .. Trivadis Training "O-TUN - Oracle Database Performance Troubleshooting and Tuning" – 3 days Contents – Terminology – Statistics, Metrics, Waits, Locks, Latches, Performance Indicators – Performance Analysis – Procedure – Introduction to the Cost-Based Optimizer – Memory Tuning: SGA and PGA Management, Recommendations – Optimizing Data Structures and Data Access – I/O Analysis, Calibration, Storage Issues, Tuning Recommendations – Oracle Performance and Resource Management Checklist: Performance Issues 109 17.05.2022
  • 110. Questions and Answers Markus Flechtner Principal Consultant Phone +49 211 5866 64725 Markus.Flechtner@Trivadis.com @markusdba http://markusdba.de Download the slides from https://www.slideshare.net/markusflechtner Please don't forget the session evaluation – Thank you! 17.05.2022 Checklist: Performance Issues 110