1. Oracle SQL Tuning
for Day-to-Day
Data Warehouse Support
Nikos Karagiannidis
oradwstories.blogspot.com
2. About Me
• “Oracle Learner”
– “I learn or re-learn something new about SQL Tuning
and Oracle every day!” (a Tom Kyte quote)
• Oracle blogger
– oradwstories.blogspot.com
• DW Architect
– working with Enterprise-level Oracle DWs since 2003
• Ph.D.
– in Computer Science/Databases/Query Processing &
Optimization for DWs
• OCP
– Oracle Database 11g Administrator Certified Professional
3. *Disclaimer*
• SQL Tuning is hard and there is no silver bullet
(i.e., a “magical solution”)!
• The only way to learn is by practicing SQL
Tuning on real-world cases; and practicing a
lot!
• Judge on your own which of the following tips
are useful and which are not
4. Note
• In the presentation we reference a set of
SQL*Plus scripts. All the scripts are available at:
https://github.com/nkarag/sqltuning4DWsupport
• The scripts that query ETL metadata are based on
Oracle Warehouse Builder 11gR2.
• The examples shown are executed on an Oracle
Database 11g (11.2.0.3.0) running on an X2-2
Exadata machine.
6. Problem in Question (2)
• We have a running SQL statement that has a
performance problem
• We know that there is no locking/blocking
issue causing the delay,
• and no other database-level problem
• How do we fix it?
7. Your GOAL
(as a DW Support Engineer)
Just find a way (any way!) to make the statement
finish in a reasonable time, so as to unblock the
ETL flow and meet your SLA to the business.
DON’T try to fix the problem permanently (unless
it is something obvious and trivial).
Leave this task for the developers on the next
business day.
8. Two Methods: You Choose!
• Method A (Black Box approach)
– You DON’T understand what is wrong in the
execution plan
– You just try to make the query run
• Method B (White Box approach)
– You understand what is wrong in the execution
plan
– You fix it (typically with hints)
9. Two Methods

Black Box Approach (ordered by Most Popular DESC):
1. Run it “manually” from TOAD/SQL Developer and see if it finishes …
2. gather_table_stats and try again …
3. Check recommendations of SQL Tuning Advisor
4. Find an older execution plan and try this one

White Box Approach:
1. Find which operations in the execution plan cause the problem
2. Change the problematic operations with Hints
10. You Choose!

Black Box: Easier to apply, BUT … if it does not
work early on, you might end up wasting a lot of
time trying out solutions blindfolded!

White Box: More difficult to apply, BUT … if you
spend some time finding out what is wrong, then
you will fix the problem for sure.

At the end of the day only the result matters!
11. But before you choose a method for SQL
Tuning …
• Find out whether the delay is due to an
exceptionally large delta from the source
• It might be the case that you just can’t do
anything other than wait for it to finish
12. Exceptionally large delta from the source?
• Check execution history (owb_execs.sql) to see the
usual number of input rows
• The source table is in the USING clause of the MERGE
• The mapping must finish in order for OWB to record the
number of rows
• Use SELECT count(*) FROM (<using subquery>) to find the
input number of rows
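For instance, you can wrap the exact subquery from the MERGE’s USING clause in a count. A minimal sketch; the staging table and columns below are hypothetical placeholders, not from the actual ETL:

```sql
-- Count the actual input rows of today's delta.
-- Paste the <using subquery> of the MERGE inside the parentheses verbatim;
-- STAGE_DW.CUSTOMER_DELTA and its columns are hypothetical examples.
SELECT COUNT(*) AS input_rows
FROM (
  SELECT s.customer_id, s.customer_name
  FROM   STAGE_DW.CUSTOMER_DELTA s
  WHERE  s.load_date = TRUNC(SYSDATE)
);
```

Compare the result against the usual row counts reported by owb_execs.sql: an input several times larger than normal explains the delay on its own.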
13. The Black Box Approach
Black Box Approach
(ordered by
Most Popular DESC)
Run it “manually” from TOAD/SQL
Developer and see if it finishes …
gather_table_stats and try again …
Check recommendations of SQL Tuning
Advisor
Find an older execution plan and try
this one
14. Run it “manually” from TOAD/SQL
Developer and see if it finishes …
…
• Why is the plan in TOAD different?
• Different “optimizer environment”
– I.e., the DB user from which the ETL flows run has different
values in some init parameters than your DB account
– Most typical example (for an Exadata):
init parameter optimizer_index_cost_adj (it “shows” how
expensive an index range scan is)
– E.g., ETL_USER optimizer_index_cost_adj = 100
MY_USER optimizer_index_cost_adj = 10000
– To run the query as ETL_USER, use hint:
• /*+ OPT_PARAM('optimizer_index_cost_adj' 100) */
– To run the query as MY_USER, use hint:
• /*+ OPT_PARAM('optimizer_index_cost_adj' 10000) */
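To see exactly where the two optimizer environments differ, you can compare the parameter values of the two sessions in V$SES_OPTIMIZER_ENV. A sketch; both sessions must be connected, and you substitute the real SIDs for the two bind variables:

```sql
-- Compare optimizer parameters between two connected sessions:
-- :etl_sid is the SID of the ETL session, :my_sid is your own session's SID.
SELECT e.name,
       e.value AS etl_value,
       m.value AS my_value
FROM   v$ses_optimizer_env e
JOIN   v$ses_optimizer_env m ON m.name = e.name
WHERE  e.sid = :etl_sid
AND    m.sid = :my_sid
AND    e.value <> m.value
ORDER  BY e.name;
```

Any parameter that comes back (optimizer_index_cost_adj being the typical suspect) is a candidate for an OPT_PARAM hint when reproducing the ETL plan manually.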
15. gather_table_stats and try again …
• Check NUM_ROWS versus count(*)
select owner, table_name, num_rows
from dba_tables
where
owner = 'TARGET_DW'
and table_name = 'CUSTOMER_DIM'
/*OWNER TABLE_NAME NUM_ROWS
TARGET_DW CUSTOMER_DIM 8,962,009*/
select count(*) from TARGET_DW.CUSTOMER_DIM
/*COUNT(*)
9,161,368*/
• A problem exists ONLY when the difference is in orders of
magnitude, not by a factor of two or three!
– Actual: 1000, Statistics: 3000 → Not a Problem
– Actual: 1e3, Statistics: 1e6 → A Problem!
• exec dbms_stats.gather_table_stats('OWNER', 'TABNAME')
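Instead of comparing NUM_ROWS against count(*) table by table, you can also let Oracle flag the stale statistics itself. A sketch using the standard dictionary view (table monitoring, on by default since 10g, is required for the STALE_STATS column to be populated):

```sql
-- Tables in the schema whose statistics Oracle considers stale
SELECT owner, table_name, num_rows, last_analyzed, stale_stats
FROM   dba_tab_statistics
WHERE  owner = 'TARGET_DW'
AND    stale_stats = 'YES';

-- Then regather for the offending table, e.g.:
EXEC dbms_stats.gather_table_stats('TARGET_DW', 'CUSTOMER_DIM');
```

Keep in mind the slide’s rule: only an orders-of-magnitude discrepancy is worth an emergency regather during support hours.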
16. Check recommendations of SQL Tuning
Advisor
• Scripts:
– sqltune_exec.sql
• Tune a specific sql_id by creating a tuning task and
calling DBMS_SQLTUNE.EXECUTE_TUNING_TASK
(Note: you must login to the same instance as the one
running the sql_id because the script assumes the
sql_id is loaded in the library cache)
– sqltune_report.sql
• Report of the results of a sql tuning task (including
recommendations with respective sql statements)
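The two scripts essentially wrap the standard DBMS_SQLTUNE calls. A minimal sketch of what they do (the task name is an arbitrary choice, and the ten-minute time limit is an assumption):

```sql
-- Create and run a tuning task for a statement currently in the library cache
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_id     => '&&sql_id',
              task_name  => 'tune_&&sql_id',
              time_limit => 600);   -- seconds the advisor may spend
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/

-- Read the recommendations (what sqltune_report.sql shows)
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_&&sql_id') FROM dual;
```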
18. Check recommendations of SQL Tuning
Advisor (example)

An SQL Profile is to a query what statistics are
to a table! It does NOT fix the plan!
19. Find an older execution plan and try this one
• Check execution history of the specific node and
find some good executions in the past
• Check execution plan history of the specific sql_id
and find a different plan in the past,
corresponding to a good execution
• Try to run the query with the old plan
See also “Detecting a change in the execution plan of a query” at:
http://oradwstories.blogspot.gr/2015/02/detecting-change-in-execution-plan-of.html
21. Check execution history for this task (2)
(owb_execs.sql)
• DURATION_MINS_P80 =
80% of the executions (in the last 15 days) are below this time
• Percentile 80 is a better approximation of the Characteristic Time of the
mapping than the average
• Verify that indeed there is a delay!
22. Check execution plan history (1)
(fs_awr.sql)
• You need the sql id and just query DBA_HIST_SQLSTAT
• The Execution Plan is represented by the Plan Hash Value (phv)
• Compare current phv (see e-mail) with the phv of a good day in the past
23. Check execution plan history (2)
(fs_awr.sql)
• You need the sql_id and just query DBA_HIST_SQLSTAT
• The Execution Plan is represented by the Plan Hash Value (PHV)
• Compare the current PHV (fsess_owb.sql) with the PHV of a good day in the past
• Can you see the current PHV in the plan history?
If yes, there was no plan change; if not, then this is a new plan (a plan change)!
24. Check execution plan history (3)
(fs_awr.sql)
Note: You might NOT be able to find the query
in AWR!
• Execution plan history is stored in Oracle AWR
(dba_hist_sqlstat)
• A snapshot is taken every hour, of selected
queries only (not all queries)
• Default retention is 8 days
25. Try to run the query with the old plan
1. Note the sql_id and phv of the old plan
2. Run xplan_awr_all.sql script and Copy hints
from the Outline Data section
3. Embed hints in the target query
4. Check that the old plan is generated
5. Run the query from TOAD and skip the node
in the flow
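Step 2 boils down to DBMS_XPLAN.DISPLAY_AWR; the ADVANCED format includes the Outline Data section with the complete hint set for the old plan. A sketch of what xplan_awr_all.sql essentially does:

```sql
-- Show an old plan from AWR, including its Outline Data (full hint set)
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_AWR(
         sql_id          => '&&sql_id',
         plan_hash_value => &&phv,
         format          => 'ADVANCED'));
```

The hints you paste into the query in step 3 come from the /*+ BEGIN_OUTLINE_DATA … END_OUTLINE_DATA */ block of this output.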
29. Run xplan_awr_all.sql script and copy hints from the
Outline Data section (2)
This syntax is called a “Global Hint”, e.g.:
/*+ FULL(@qblock_name, table@qblock_name) */
30. 3. Embed hints in the target query

Just find an empty spot with no other hints and
paste the hints. Don’t remove the other hints
(especially the APPEND hint!)
34. The White Box Approach
White Box Approach
1. Find which operations in the
execution plan cause the problem
2. Change the problematic operations
with Hints
Check also:
“How to effectively tune a query that does not even finish
(SQL Monitoring and ASH in action)” at:
http://oradwstories.blogspot.gr/2015/01/how-to-effectively-tune-query-that-does.html
37. DON’T Tune by Pattern!
• Tune by Pattern
– A visual inspection of the whole plan, which leads to a
conclusion of the form: “this is a bad plan because it
has too many nested loops”
• FIND the real problem!
– i.e., the operation that causes the delay
• And fix that!
• Use Active Session History
(V$ACTIVE_SESSION_HISTORY)
See: http://www.oracle.com/technetwork/database/manageability/ppt-active-session-history-129612.pdf
38. Find the most waited Event (ash_events.sql)
Find the most time-consuming operation (ash_ops.sql)
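The essence of the two scripts can be sketched as plain ASH queries (substitute the sql_id; ASH samples active sessions once per second, so sample counts approximate seconds of DB time):

```sql
-- Most waited events for the statement (what ash_events.sql shows)
SELECT NVL(event, 'ON CPU') AS event,
       COUNT(*)             AS samples
FROM   v$active_session_history
WHERE  sql_id = '&&sql_id'
GROUP  BY NVL(event, 'ON CPU')
ORDER  BY samples DESC;

-- Most time-consuming plan operations (what ash_ops.sql shows)
SELECT sql_plan_line_id,
       sql_plan_operation,
       COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sql_id = '&&sql_id'
GROUP  BY sql_plan_line_id, sql_plan_operation
ORDER  BY samples DESC;
```

The plan line with the most samples is the operation to attack, which is exactly the White Box starting point.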
42. The problem IS NOT in the Full Table
Scan, but …
• How many times is ASSET_DIM scanned?
• ANSWER:
As many times as there are rows in the driving
table/row-source of the NESTED LOOPS parent
operation!
43. Nested Loops vs Hash Joins

select * from TSMALL, TLARGE WHERE tsmall.id = tlarge.id

• Nested Loops: the FTS has run 99 times, once for each row in
the driving table. We have read 99x10,000 rows just to return
99 rows! These are a lot of wasted rows!
• Hash Join: the FTS has run only once!
(Note the number of output rows per operation and the join filter.)
44. The Problem is (90% of the time) …
• Repetition-related (i.e., the driving operation is causing this operation
to be executed more times than necessary)
– Operations where this problem typically appears:
NESTED LOOPS, FILTER (from a non-unnested subquery: IN, NOT IN, EXISTS, NOT EXISTS)
– Typical causes:
wrong join order, non-unnested subqueries, OR conditions
• Filtering-related (i.e., because of the lack of a more efficient access path,
we read more rows than we actually need, and thus waste I/O on
accessing data that is not returned to the user)
– Operations where this problem typically appears:
FULL TABLE SCAN, PARTITION RANGE ALL (full partition scan), INDEX FULL SCAN,
where most of the rows are rejected by filter predicates!
– Typical causes:
lack of an appropriate index, no partition pruning
More details at:
http://savvinov.com/2013/01/28/efficiency-based-sql-tuning/
45. The Fix is (90% of the time) …
• Fix:
– Correct the join order
• /*+ ORDERED */
• /*+ LEADING() */
– Remove Subquery by rewriting the query or UNNEST subquery
• /*+ UNNEST */
– Choose the right join method
• /*+ USE_HASH() */
• /*+ USE_NL() */
– Choose the right access method
• /*+ FULL() PARALLEL() */
• /*+ INDEX() */
• Use a Partition filter that does partition pruning
46. How do we fix the join order?
• Put the most restrictive table/row-source first
/*+ LEADING(
PRM_CONSTR_STATUS PRM_CONSTR_V PERCLI CUST
PRM_CONSTR_TYPE PRODUCT CONSTR_CONN_REASON CL PI PC TRANSTAT
PI_CLI ROOT_ASSET ASSET TV) */
• Often, only the first table is enough
• Or, use /*+ ORDERED */ if the right order is in the
FROM clause
(from more restrictive to less restrictive)
***General Strategy***:
eliminate as much data as possible, as early as possible!
47. So the fix is …
elapsed time before fix: 5 hours
elapsed time after fix: 23 secs!
SELECT /*+
LEADING(PRM_CONSTR_STATUS) USE_HASH(ASSET ROOT_ASSET) */
…
FROM PERIF.CONSTR_FCT_NMR_V PRM_CONSTR_V,
PRESENT_PERIF.CLI_DIM PERCLI,
TARGET_DW.CUSTOMER_DIM CUST,
PRESENT_PERIF.CONSTR_ORDER_TYPE_DIM PRM_CONSTR_TYPE,
PRESENT_PERIF.ORDER_STATUS_DIM PRM_CONSTR_STATUS,
PRESENT_PERIF.PROVISION_ITEM_TYPE_DIM PRODUCT,
PRESENT_PERIF.CONSTR_REASON_DIM CONSTR_CONN_REASON,
TARGET_DW.CLI_DIM cl,
TARGET_DW.PRODUCT_INSTANCE_DIM pi,
TARGET_DW.PRODUCT_CATALOG_DIM pc,
ORDERS_DW.TRANSFER_STATUS_REASON_DIM TRANSTAT,
TARGET_DW.CLI_DIM PI_CLI,
ORDERS_SOC_DW.ASSET_DIM ROOT_ASSET,
ORDERS_SOC_DW.ASSET_DIM ASSET,
TARGET_DW.PRIMARY_CLI_TV TV
48. Eliminate as much data as possible as
early as possible!
(from more restrictive to less restrictive)
• Start with the table with the most selective
filter
• Choose the next table in the join order:
– the table where rows will be eliminated the most
(and NOT multiplied!)
– go from the child to the parent, not the opposite
• More details at:
http://www.slideshare.net/khailey/kscope-2013-vst
49. An example (star query)
SELECT ...
FROM
PRESENT_PERIF.CONSTR_WCRM_FCT,
TARGET_DW.PROVIDER_DIM DWH2_WCRM_DEKTIS_PROVIDER_DIM,
TARGET_DW.PROVIDER_DIM DWH2_WCRM_DOTIS_PROVIDER_DIM,
PRESENT_PERIF.ORDER_ORDER_STATUS_DIM DWH2_WCRM_CONST_STATUS_DIM,
PRESENT_PERIF.CONSTR_ORDER_TYPE_DIM DWH2_WCRM_ORDER_TYPE_DIM,
PRESENT_PERIF.CONSTR_ORDER_TYPE_DIM DWH2_WCRM_ORIG_ORDER_TYPE_DIM,
PRESENT_PERIF.SOURCE_SYSTEMS_DIM
WHERE
( DWH2_WCRM_DEKTIS_PROVIDER_DIM.PROVIDER_SK=PRESENT_PERIF.CONSTR_WCRM_FCT.PROVIDER_SK )
AND ( DWH2_WCRM_DOTIS_PROVIDER_DIM.PROVIDER_SK=PRESENT_PERIF.CONSTR_WCRM_FCT.PROVIDER_SK_OWN )
AND ( PRESENT_PERIF.CONSTR_WCRM_FCT.SOURCESYSTEM_SK=PRESENT_PERIF.SOURCE_SYSTEMS_DIM.SOURCESYSTEM_SK )
AND ( PRESENT_PERIF.CONSTR_WCRM_FCT.ORDER_STATUS_SK=DWH2_WCRM_CONST_STATUS_DIM.STATUS_SK )
AND ( PRESENT_PERIF.CONSTR_WCRM_FCT.CONSTR_ORDER_SK_ORIG=DWH2_WCRM_ORIG_ORDER_TYPE_DIM.CONSTR_ORDER_SK )
AND ( PRESENT_PERIF.CONSTR_WCRM_FCT.CONSTR_ORDER_SK=DWH2_WCRM_ORDER_TYPE_DIM.CONSTR_ORDER_SK )
AND ( PRESENT_PERIF.CONSTR_WCRM_FCT.BUSINESS_SOURCE = 'WNP' )
AND
(
case
when ( PRESENT_PERIF.CONSTR_WCRM_FCT.NP_ID_NUMBER ) like 'P%' then 'Portability'
when ( PRESENT_PERIF.CONSTR_WCRM_FCT.NP_ID_NUMBER ) like 'D%' then 'Disconnection'
when ( PRESENT_PERIF.CONSTR_WCRM_FCT.NP_ID_NUMBER ) like 'U%' then 'Update'
else 'Undefined'
end IN ( 'Portability' )
AND
PRESENT_PERIF.CONSTR_WCRM_FCT.CONSTR_DATE_TRUNC >= '01-01-2015 00:00:00'
AND
DWH2_WCRM_CONST_STATUS_DIM.STATUS_CUSTGROUP_DESCR IN ( 'Ολοκληρωμένη' )
)
AND DWH2_WCRM_ORDER_TYPE_DIM.CONSTR_ORDER_CODE in ('PRM_301', 'PRM_303')
GROUP BY ...
50. Find the “right” join order

FR: Filter Ratio = (select count(*) from table where <condition>) / (select count(*) from table)

DWH2_WCRM_ORDER_TYPE_DIM           FR: 0.003
CONSTR_WCRM_FCT                    FR: 0.02 (31M -> 800K)
DWH2_WCRM_CONST_STATUS_DIM         FR: 0.07
DWH2_WCRM_DEKTIS_PROVIDER_DIM
DWH2_WCRM_DOTIS_PROVIDER_DIM
DWH2_WCRM_ORIG_ORDER_TYPE_DIM
PRESENT_PERIF.SOURCE_SYSTEMS_DIM

/*+ LEADING(DWH2_WCRM_ORDER_TYPE_DIM,
CONSTR_WCRM_FCT,
DWH2_WCRM_CONST_STATUS_DIM) */

The order of the rest does not matter! The number of rows will remain the same.
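The filter ratios above can be measured directly. A sketch for one dimension of the example, using the filter from the star query’s WHERE clause (run one such query per filtered table and order the join from the smallest ratio upward):

```sql
-- Filter Ratio = filtered rows / total rows; the smaller the ratio,
-- the more selective the filter, so the earlier the table should join.
SELECT filtered.cnt / total.cnt AS filter_ratio
FROM  (SELECT COUNT(*) AS cnt
       FROM   PRESENT_PERIF.CONSTR_ORDER_TYPE_DIM
       WHERE  CONSTR_ORDER_CODE IN ('PRM_301', 'PRM_303')) filtered,
      (SELECT COUNT(*) AS cnt
       FROM   PRESENT_PERIF.CONSTR_ORDER_TYPE_DIM) total;
```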
51. Subqueries (IN, NOT IN, EXISTS, NOT EXISTS) and
the FILTER operation

select *
from tsmall
where
id in (select /*+ UNNEST */ id from tlarge)

select *
from tsmall
where
id in (select /*+ NO_UNNEST */ id from tlarge)

• With NO_UNNEST, the Fast Full Index Scan has run 99 times, once
for each row in the driving table: we have read 99x10,000 “rows”
just to return 99 rows. These are a lot of wasted rows!
52. Use the /*+ UNNEST */ hint
or rewrite the query

select *
from tsmall
where
id between 10 and 100
or
id not in (select /*+ UNNEST */ id from tlarge)

• Here the UNNEST hint is ignored by the optimizer (the subquery
sits under an OR), so rewrite:

select *
from tsmall
where
id between 10 and 100
UNION ALL
select *
from tsmall
where
id not in (select id from tlarge)

More details at: https://jonathanlewis.wordpress.com/2007/01/24/join-ordering-1/
53. Partition Pruning Blockers
• When you see operation:
PARTITION RANGE ALL
• No partition pruning takes place
• Blocker Conditions:
– Inequality (!= or <>)
– NOT IN
– IS NOT NULL
– restrictions based on expressions and functions
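The expression/function blocker can be sketched as a before/after pair. A hypothetical table range-partitioned by a CONSTR_DATE column is assumed here; the table and column names are placeholders:

```sql
-- Blocks pruning: the partition key is wrapped in a function,
-- so the plan shows PARTITION RANGE ALL
SELECT COUNT(*)
FROM   constr_wcrm_fct
WHERE  TRUNC(constr_date) = DATE '2015-01-01';

-- Allows pruning: a plain range predicate on the bare partition key,
-- so the plan shows PARTITION RANGE SINGLE (or ITERATOR)
SELECT COUNT(*)
FROM   constr_wcrm_fct
WHERE  constr_date >= DATE '2015-01-01'
AND    constr_date <  DATE '2015-01-02';
```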
55. NOT IN (…) and NULLs

colX NOT IN (value1, value2, value3)
means:
colX != value1
AND colX != value2
AND colX != value3

• But, if one of the values involved is a NULL, then that component
evaluates to neither true nor false, it evaluates to null; so the whole
expression evaluates to null and the query returns no rows!
• Rewrite like this (if possible):
where
colX is not null
and colX not in (
select colY from ... where ... and colY is not null
)
• Or rewrite as an ANTI-JOIN:

select *
from tlarge
where
id between 10 and 100
and
id not in (select id from tsmall)

becomes:

select *
from tlarge left outer join tsmall on
(tlarge.id = tsmall.id)
where
tlarge.id between 10 and 100
and tsmall.id IS NULL

More details at: https://jonathanlewis.wordpress.com/2007/02/25/not-in/
56. Oracle First Decides on the Join Order and then on the
Join Method
NOTE: The table in USE_HASH (or USE_NL)
must be the second table in the join order
• LEADING(X Y) USE_HASH(X) ← ignored (X is the first table)
• LEADING(X Y) USE_HASH(Y) ← applies
• USE_HASH(X Y) = USE_HASH(X) USE_HASH(Y)
• For more details:
http://oradwstories.blogspot.gr/2015/03/join-hints-and-join-ordering-or-why-is.html
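So, to force “hash join TSMALL to TLARGE with TSMALL driving” on the demo tables used earlier, the two hints are combined like this (a sketch):

```sql
-- LEADING fixes the join order; USE_HASH names the SECOND table
-- in that order (the one being joined to), here TLARGE.
SELECT /*+ LEADING(tsmall tlarge) USE_HASH(tlarge) */ *
FROM   tsmall, tlarge
WHERE  tsmall.id = tlarge.id;
```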
57. /*+ PARALLEL(t degree) */
• PARALLEL DOES NOT work if the table is
accessed by index!
• You need to specify FULL too
– /*+ FULL(t) PARALLEL(t degree) */
58. REMOVE ORs!!!
Rewrite via UNION ALL
SELECT ...
FROM ETL_DW.CMPN_INTERACTIONS_FCT_EVENT_V FCT,
KPI_DW.CRITICAL_EVENT_FCT ORDFCT,
KPI_DW.ACTIVATION_STATUS_DIM STAT,
CMPN_DW.CMPN_INTACT_CAT_DIM CATEG,
CMPN_DW.CMPN_INTER_TYPE_DIM TYPE,
TARGET_DW.UCM_CUSTOMER_DIM UCM,
TARGET_DW.SEGMENT_OTE_BASIC_DIM SEGM,
TARGET_DW.BILLING_ACCOUNT_DIM BILL,
TARGET_DW.CLI_DIM CLI,
TARGET_DW.CUSTOMER_DIM CUST
WHERE ( (FCT.CLI_SK = ORDFCT.CLI_SK AND FCT.CLI_SK != 0)
OR ( ( FCT.UCM_CUSTOMER_SK = ORDFCT.UCM_CUSTOMER_SK
AND FCT.UCM_CUSTOMER_SK != 0)))
AND …
Elapsed Time: more than 3 hours …(we killed it and never found out)
59. Rewrite via UNION ALL
SELECT ...
FROM ETL_DW.CMPN_INTERACTIONS_FCT_EVENT_V FCT,
KPI_DW.CRITICAL_EVENT_FCT ORDFCT,
KPI_DW.ACTIVATION_STATUS_DIM STAT,
CMPN_DW.CMPN_INTACT_CAT_DIM CATEG,
CMPN_DW.CMPN_INTER_TYPE_DIM TYPE,
TARGET_DW.UCM_CUSTOMER_DIM UCM,
TARGET_DW.SEGMENT_OTE_BASIC_DIM SEGM,
TARGET_DW.BILLING_ACCOUNT_DIM BILL,
TARGET_DW.CLI_DIM CLI,
TARGET_DW.CUSTOMER_DIM CUST
WHERE ( (FCT.CLI_SK = ORDFCT.CLI_SK AND FCT.CLI_SK != 0)
)
AND …
UNION ALL
SELECT ...
FROM ETL_DW.CMPN_INTERACTIONS_FCT_EVENT_V FCT,
KPI_DW.CRITICAL_EVENT_FCT ORDFCT,
KPI_DW.ACTIVATION_STATUS_DIM STAT,
CMPN_DW.CMPN_INTACT_CAT_DIM CATEG,
CMPN_DW.CMPN_INTER_TYPE_DIM TYPE,
TARGET_DW.UCM_CUSTOMER_DIM UCM,
TARGET_DW.SEGMENT_OTE_BASIC_DIM SEGM,
TARGET_DW.BILLING_ACCOUNT_DIM BILL,
TARGET_DW.CLI_DIM CLI,
TARGET_DW.CUSTOMER_DIM CUST
WHERE (( FCT.UCM_CUSTOMER_SK = ORDFCT.UCM_CUSTOMER_SK
AND FCT.UCM_CUSTOMER_SK != 0))
AND …
Elapsed Time: 1.5 mins!
60. For Very Large INSERTS/MERGE:
use APPEND and PARALLEL DML!!
alter session enable parallel dml;
insert /*+ APPEND PARALLEL(t 32) */ into tab t
select ...
merge /*+ APPEND PARALLEL(t 32) */ into tab t
using ...
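You can confirm that parallel DML is actually enabled for your session before running the statement; V$SESSION exposes the status. A sketch:

```sql
-- After "alter session enable parallel dml", PDML_STATUS must be ENABLED;
-- otherwise the insert part of the statement runs serially in the QC.
SELECT pdml_status, pddl_status, pq_status
FROM   v$session
WHERE  sid = SYS_CONTEXT('USERENV', 'SID');
```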
61. For Very Large INSERTS/MERGE:
How do you know you are doing parallel DML?
How do you know you are doing an APPEND?

• A conventional insert (i.e., NOAPPEND): the plan shows
LOAD TABLE CONVENTIONAL
• An APPEND, BUT no PARALLEL DML: the plan shows LOAD AS SELECT,
but the PX COORDINATOR is doing the insert alone!
• A PARALLEL INSERT: the PX COORDINATOR is above the insert
operation
62. How much time to Rollback?
(rollback_t.sql)
• Just give the sql_id as input
• With parallel DML there is one transaction per
parallel slave, plus the QC
63. Other Tools:
See the SQL executing live:
Real-Time SQL Monitoring
-- simple text report
SELECT DBMS_SQLTUNE.report_sql_monitor(
sql_id => '&&sql_id',
type => 'TEXT',
report_level => 'ALL') AS report
FROM dual;
-- active html report
SELECT
DBMS_SQLTUNE.REPORT_SQL_MONITOR(
sql_id => '&&sql_id',
report_level=>'ALL',
type => 'ACTIVE') as report
FROM dual;
To force monitoring use: /*+ MONITOR */
66. Final Notes
• In order to SQL Tune effectively, YOU MUST:
– Learn to read execution plans
– Try to identify the problematic operation(s) and attack that problem
– Familiarize yourself with hints
(Oracle SQL Reference Manual:
http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements006.htm#SQLRF51108)
– Practice hard, practice hard, practice hard, practice hard, practice
hard, practice hard, practice hard, practice hard, practice hard,
practice hard, practice hard, practice hard, practice hard, practice
hard, practice hard, practice hard, practice hard, practice hard,
practice hard,
– AND …
• Practice Hard !-)
67. Scripts
https://github.com/nkarag/sqltuning4DWsupport

Script             Description
owb_execs.sql      See OWB node execution history
sqltune_exec.sql   Create an SQL Tuning Advisor task
sqltune_report.sql Create an SQL Tuning Advisor report
fs_awr.sql         Get the execution plan history of a query from AWR
xplan_awr_all.sql  Show a specific execution plan of the past (from AWR)
xplan_rac.sql      Show the execution plan of a query in the library cache (format: 'ALL ALLSTATS LAST')
xplan_rac_all.sql  Show the execution plan of a query in the library cache (format: 'ADVANCED ALLSTATS LAST')
rollback_t.sql     Estimate rollback time for an sql_id / session