6. Appendix B Miscellaneous features ........................................ 216
   1. Tablespace enhancements ............................................... 216
   2. DDL wait .............................................................. 216
   3. Non-published statistics .............................................. 217
   4. Stale percentage ...................................................... 217
   5. Gathering statistics on changed partitions only ....................... 217
   6. Dynamic sampling ...................................................... 217
   7. LOB enhancements ...................................................... 218
   8. SIMPLE_INTEGER ........................................................ 218
   9. Flush buffer_cache feature from Oracle 10g ............................ 219
   10. Create and rebuild index online ...................................... 220
   11. Online table redefinition ............................................ 221
   12. Materialized view catalog views ...................................... 221
   13. NOT NULL with default ................................................ 221
   14. Object enhancements .................................................. 221
Appendix C References ........................................................ 222
6 of 222
1 Introduction
The aim of this document is to explain the new features of Oracle 10g and 11g, illustrated with examples – it should be particularly useful for developers migrating from 9i to 11g.
It attempts to highlight each new feature alongside how the same task was done in Oracle 9i.
Oracle 10g / 11g new features are highlighted in RED; how the same was done in Oracle 9i is highlighted in BLUE. (Note: for certain features I have not given the equivalent 9i version.)
Note: I have installed 11g on my laptop, so the timings quoted here may vary on other machines.
Oracle 9i

...
LOOP
  v_item_num := v_item_num + 5;
  IF MOD(v_item_num, 25) = 0 THEN
    EXIT;
  END IF;
END LOOP;
...

Oracle 11g

...
LOOP
  v_item_num := v_item_num + 5;
  CONTINUE WHEN MOD(v_item_num, 25) <> 0;
  DBMS_OUTPUT.PUT_LINE('Item number = ' || v_item_num);
END LOOP;
...
A quick look at the key new features:
● Adaptive cursor sharing (11g)
● Conditional compilation (10g)
● Continue (11g)
● Compile time warnings (10g)
● Commit_write (10g)
● PL/SQL Inlining (11g)
● Invisible indexes (11g)
● Model (10g)
● Pivot / Unpivot (11g)
● Enhancements in Triggers (11g)
● Virtual columns (11g)
Following is the script to create the tables used in most of the examples.
Drop sequence dummy_seq
/
create sequence dummy_seq start with 1
/
create table emp
as
select
dummy_seq.nextval empno,
object_name empname,
object_id sal,
CASE WHEN ROWNUM BETWEEN 1 and 28000 then 'CLERK'
WHEN ROWNUM BETWEEN 28001 and 30000 then 'SALESMAN'
WHEN ROWNUM BETWEEN 30001 and 30150 then 'PRESIDENT'
WHEN ROWNUM BETWEEN 30151 and 35000 then 'MANAGER'
ELSE 'ANALYST' end Job,
round(
dbms_random.value(1000,100000)) comm,
CASE WHEN ROWNUM BETWEEN 1 and 10000 then 320
WHEN ROWNUM BETWEEN 10001 and 13051 then 120
WHEN ROWNUM BETWEEN 13052 and 26001 then 380
WHEN ROWNUM BETWEEN 26002 and 27002 then 630
ELSE 550 end deptno
from all_objects
/
DELETE FROM emp tnm WHERE tnm.rowid IN
  (SELECT rowid FROM
     (SELECT ROWID, ROW_NUMBER() OVER (PARTITION BY empname ORDER BY empno, empname) duplicate FROM emp) qry
   WHERE qry.duplicate > 1)
/
(The above delete removes duplicate rows so that later examples select from blocks with gaps)
create unique index emp_idx on emp (empno )
/
exec dbms_stats.gather_table_stats(ownname => 'SYSTEM', tabname => 'emp', cascade => TRUE)
CREATE TABLE DEPT AS
SELECT distinct deptno,
       CASE WHEN deptno = 380 then 'ACCOUNTING'
            WHEN deptno = 120 then 'RESEARCH'
            WHEN deptno = 550 then 'SALES'
            WHEN deptno = 320 then 'OPERATIONS'
            ELSE 'IT' end dname, 'SINGAPORE' Loc
FROM emp
/
create unique index dept_idx on dept (deptno )
/
CREATE TABLE BONUS
AS select empname, job, sal, comm from emp
/
create index bonus_idx1 on bonus (empname, job )
/
create table salgrade as
select job,
       min(sal) losal, max(sal) hisal from bonus
group by job
/
create index sal_idx1 on salgrade (job );
2 Autotrace / DBMS_XPLAN / Tkprof
2.1 Autotrace
Autotrace is a handy SQL*Plus utility that shows the “actual” execution statistics and the explain plan of a query.
SET AUTOTRACE ON – Enables autotrace; displays the query results, the EXPLAIN PLAN and the STATISTICS. The query is actually executed.
SET AUTOTRACE ON EXPLAIN – Displays the results and the EXPLAIN PLAN only.
SET AUTOTRACE ON STATISTICS – Displays the results and the STATISTICS only.
SET AUTOTRACE TRACEONLY – Executes the query but does not display the results; displays the EXPLAIN PLAN and STATISTICS.
SET AUTOTRACE TRACEONLY EXPLAIN – Displays only the EXPLAIN PLAN; does not execute the query.
SET AUTOTRACE TRACEONLY STATISTICS – Executes the query and displays only the statistics.
SET AUTOTRACE OFF – Switches off autotrace.
Explain plan vs autotrace:
EXPLAIN PLAN FOR <statement> – Shows what the database expects to do if the query is executed; it is an estimate.
Autotrace (SET AUTOTRACE ...) – Shows what the database actually did when the query was fired: actual time taken, number of rows hit, consistent gets, etc.
Autotrace should always be the first tool: just after writing a query it is good to run it with autotrace on a full-volume environment to see how much time is taken and how many resources and rows are involved in executing the query. Note that while testing we tend to supply literals, whereas the actual program might use bind variables, and the performance of a query with literals is not necessarily the same as with bind variables.
So, if your query uses bind variables it is good to trace it as below, so that the explain plan you see matches the one the real process will get.
set autotrace on
VARIABLE var_name <datatype>
EXEC :var_name := <value>
SELECT * FROM <table_name> WHERE <column> = :var_name;
Refer Appendix A for further information.
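For instance, against the emp table created earlier, the template above could be filled in like this (the bind name l_deptno and the value 630 are just illustrative):

```sql
set autotrace traceonly
VARIABLE l_deptno NUMBER
EXEC :l_deptno := 630
-- the plan shown is the one built for a bind variable, not for a literal
SELECT * FROM emp WHERE deptno = :l_deptno;
```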
ORACLE 9i

SQL> set autotrace traceonly
SQL> SELECT qry.empno, qry.ename, qry.job, qry.mgr,
     qry.hiredate, qry.sal, qry.comm, qry.deptno, qry.row_desc
     from (SELECT A.*, COUNT(1) OVER (ORDER BY ROWNUM ASC) ROW_ASC,
           COUNT(1) OVER (ORDER BY ROWNUM DESC) ROW_DESC FROM scott.EMP A) QRY
     where qry.row_asc between 1 and 11;
11 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2175649969
----------------------------------------------------------------------------
| Id | Operation             | Name | Rows | Bytes | Cost |
----------------------------------------------------------------------------
|  0 | SELECT STATEMENT      |      |   14 |  1582 |    8 |
|* 1 |  VIEW                 |      |   14 |  1582 |    8 |
|  2 |   WINDOW SORT         |      |   14 |   518 |    8 |
|  3 |    WINDOW SORT        |      |   14 |   518 |    8 |
|  4 |     COUNT             |      |      |       |      |
|  5 |      TABLE ACCESS FULL| EMP  |   14 |   518 |    2 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("QRY"."ROW_ASC">=1 AND "QRY"."ROW_ASC"<=11)

Note
-----
- cpu costing is off (consider enabling it)

Statistics
----------------------------------------------------------
124 recursive calls
0 db block gets
25 consistent gets
0 physical reads
0 redo size
1360 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
11 rows processed

ORACLE 11g

SQL> set autotrace traceonly
SQL> SELECT qry.empno, qry.ename, qry.job, qry.mgr,
     qry.hiredate, qry.sal, qry.comm, qry.deptno, qry.row_desc
     from (SELECT A.*, COUNT(1) OVER (ORDER BY ROWNUM ASC) ROW_ASC,
           COUNT(1) OVER (ORDER BY ROWNUM DESC) ROW_DESC FROM EMP A) QRY
     where qry.row_asc between 1 and 11;
11 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2175649969
------------------------------------------------------------------------------
| Id | Operation             | Name | Rows | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|  0 | SELECT STATEMENT      |      |   14 |  1582 |    5  (40)| 00:00:01 |
|* 1 |  VIEW                 |      |   14 |  1582 |    5  (40)| 00:00:01 |
|  2 |   WINDOW SORT         |      |   14 |   518 |    5  (40)| 00:00:01 |
|  3 |    WINDOW SORT        |      |   14 |   518 |    5  (40)| 00:00:01 |
|  4 |     COUNT             |      |      |       |           |          |
|  5 |      TABLE ACCESS FULL| EMP  |   14 |   518 |    3   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("QRY"."ROW_ASC">=1 AND "QRY"."ROW_ASC"<=11)

Statistics
----------------------------------------------------------
419 recursive calls
0 db block gets
79 consistent gets
8 physical reads
0 redo size
1360 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
8 sorts (memory)
0 sorts (disk)
11 rows processed
2.2 DBMS_XPLAN
DBMS_XPLAN is used to query the execution plan, and has been considerably improved in Oracle 10g and 11g. Internally it queries the PLAN_TABLE. The best parts of this package are its filtering options and the warning notes it adds to the plan output.
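As a minimal sketch of the filtering mentioned above: DBMS_XPLAN.DISPLAY accepts a statement id and a format string, so several plans can be kept in PLAN_TABLE and displayed selectively (the statement id 'Q1' is illustrative):

```sql
EXPLAIN PLAN SET STATEMENT_ID = 'Q1' FOR
  SELECT * FROM emp WHERE deptno = 630;
-- display only the plan tagged Q1, in the default TYPICAL format
SELECT * FROM TABLE(dbms_xplan.display(NULL, 'Q1', 'TYPICAL'));
```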
ORACLE 9i

DELETE FROM PLAN_TABLE
/
SQL> EXPLAIN PLAN for
  2  SELECT qry.empno, qry.ename, qry.job, qry.mgr,
     qry.hiredate, qry.sal, qry.comm, qry.deptno, qry.row_desc
     from (SELECT A.*, COUNT(1) OVER (ORDER BY ROWNUM ASC) ROW_ASC,
           COUNT(1) OVER (ORDER BY ROWNUM DESC) ROW_DESC FROM scott.EMP A) QRY
     where qry.row_asc between 1 and 11;
Explained.

SQL> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 2175649969
--------------------------------------------------------------
| Id | Operation             | Name | Rows | Bytes | Cost |
--------------------------------------------------------------
|  0 | SELECT STATEMENT      |      |   14 |  1582 |    8 |
|* 1 |  VIEW                 |      |   14 |  1582 |    8 |
|  2 |   WINDOW SORT         |      |   14 |   518 |    8 |
|  3 |    WINDOW SORT        |      |   14 |   518 |    8 |
|  4 |     COUNT             |      |      |       |      |
|  5 |      TABLE ACCESS FULL| EMP  |   14 |   518 |    2 |

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("QRY"."ROW_ASC">=1 AND "QRY"."ROW_ASC"<=11)
21 rows selected.

ORACLE 11g

DELETE FROM PLAN_TABLE
/
EXPLAIN PLAN for
SELECT qry.empno, qry.ename, qry.job, qry.mgr,
qry.hiredate, qry.sal, qry.comm, qry.deptno, qry.row_desc
from (SELECT A.*, COUNT(1) OVER (ORDER BY ROWNUM ASC) ROW_ASC,
      COUNT(1) OVER (ORDER BY ROWNUM DESC) ROW_DESC FROM scott.EMP A) QRY
where qry.row_asc between 1 and 11;

SQL> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 2175649969
------------------------------------------------------------------------------
| Id | Operation             | Name | Rows | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|  0 | SELECT STATEMENT      |      |   14 |  1582 |    5  (40)| 00:00:01 |
|* 1 |  VIEW                 |      |   14 |  1582 |    5  (40)| 00:00:01 |
|  2 |   WINDOW SORT         |      |   14 |   518 |    5  (40)| 00:00:01 |
|  3 |    WINDOW SORT        |      |   14 |   518 |    5  (40)| 00:00:01 |
|  4 |     COUNT             |      |      |       |           |          |
|  5 |      TABLE ACCESS FULL| EMP  |   14 |   518 |    3   (0)| 00:00:01 |

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("QRY"."ROW_ASC">=1 AND "QRY"."ROW_ASC"<=11)
17 rows selected.
DELETE FROM PLAN_TABLE
SELECT /* XPLAN_CURSOR */ qry.empno, qry.ename, qry.job, qry.mgr, qry.hiredate,qry.sal,qry.comm,qry.deptno,qry.row_desc from (SELECT A.*, COUNT(1)
OVER (ORDER BY ROWNUM ASC) ROW_ASC, COUNT(1) OVER (ORDER BY ROWNUM DESC) ROW_DESC FROM scott.EMP A) QRY where
qry.row_Asc between 1 and 11;
SELECT * FROM TABLE(dbms_xplan.display_cursor)
SQL> SELECT * FROM TABLE(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID 58kbvx9st3twq, child number 0
-------------------------------------
SELECT /* XPLAN_CURSOR */ qry.empno, qry.ename, qry.job, qry.mgr,
qry.hiredate,qry.sal,qry.comm,qry.deptno,qry.row_desc from (SELECT A.*,
COUNT(1) OVER (ORDER BY ROWNUM ASC) ROW_ASC, COUNT(1) OVER (ORDER BY
ROWNUM DESC) ROW_DESC FROM EMP A) QRY where qry.row_Asc between 1
and 11
Plan hash value: 2175649969
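display_cursor can also report actual (not estimated) row counts when the statement was run with the gather_plan_statistics hint – a sketch (10g and later; NULL arguments mean "the last statement executed in this session", so switch SERVEROUTPUT off first, otherwise the last statement would be a DBMS_OUTPUT call):

```sql
set serveroutput off
SELECT /*+ gather_plan_statistics */ COUNT(*) FROM emp WHERE deptno = 630;
-- ALLSTATS LAST shows estimated vs actual rows for the last execution
SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```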
2.3 Tkprof
Tkprof has been around for some time now. The raw trace file generated by the database cannot easily be read by us – tkprof converts it into readable output.
ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
ALTER SESSION SET SQL_TRACE = TRUE;
TKPROF f:\app\luxananda\diag\rdbms\lux\lux\trace\lux_ora_4100.trc F:\app\luxananda\diag\rdbms\lux\lux\trace\output
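A typical end-to-end session might look like the following sketch (the tracefile_identifier value and the sort options are illustrative; sort= orders statements by elapsed time and sys=no suppresses recursive SYS statements):

```sql
ALTER SESSION SET tracefile_identifier = 'demo';  -- tags the trace file name (10g+)
ALTER SESSION SET sql_trace = TRUE;
SELECT * FROM emp WHERE deptno = 630;
ALTER SESSION SET sql_trace = FALSE;
-- then, at the operating system prompt:
-- tkprof lux_ora_4100_demo.trc out.txt sort=prsela,exeela,fchela sys=no
```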
SQL ID : 6g5knydsx1m5d
select *
from
emp where deptno = 630
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 35 0.00 0.04 11 79 0 500
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 37 0.01 0.05 11 79 0 500
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows Row Source Operation
------- ---------------------------------------------------
500 TABLE ACCESS BY INDEX ROWID EMP (cr=79 pr=11 pw=11 time=28 us cost=118 size=397292 card=8108)
500 INDEX RANGE SCAN EMP_IDX1 (cr=37 pr=3 pw=3 time=39 us cost=17 size=0 card=8108)(object id 69882)
*******************************************************************************
.......
********************************************************************************
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 12 0.01 0.12 0 3 0 0
Execute 13 0.01 0.14 2 267 3 5
Fetch 43 0.03 0.20 12 127 0 528
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 68 0.06 0.47 14 397 3 533
Misses in library cache during parse: 9
Misses in library cache during execute: 4
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 263 0.04 0.09 0 0 0 0
Execute 921 0.26 0.68 0 1006 34 14
Fetch 1838 0.18 1.55 107 3867 23 3573
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3022 0.50 2.33 107 4873 57 3587
Misses in library cache during parse: 63
Misses in library cache during execute: 57
21 user SQL statements in session.
864 internal SQL statements in session.
885 SQL statements in session.
********************************************************************************
Trace file: f:\app\luxananda\diag\rdbms\lux\lux\trace\lux_ora_4100.trc
Trace file compatibility: 11.01.00
Sort options: default
4 sessions in tracefile.
51 user SQL statements in trace file.
2969 internal SQL statements in trace file.
885 SQL statements in trace file.
73 unique SQL statements in trace file.
7399 lines in trace file.
128 elapsed seconds in trace file.
Notes – interpreting TKPROF output:
There can be quite a lot of SQL statements in the TKPROF output. The columns mean:
count   – number of times the OCI procedure was executed
cpu     – CPU time in seconds spent executing
elapsed – elapsed time in seconds spent executing
disk    – number of physical reads of buffers from disk
query   – number of buffers gotten for consistent read
current – number of buffers gotten in current mode (generally for update)
rows    – number of rows processed by the fetch or execute call
3 Adaptive cursor sharing
Feature available from : Oracle 11g Release 1
Cursor_sharing lets Oracle recognize SQL statements that have already been parsed and are available in the shared SQL area. When a query is issued for the first time it is parsed and stored in the SQL area; when the same or a similar statement is issued later, the cached version is reused and parsing does not take place again. The default value of the cursor_sharing parameter is EXACT: only if the SQL text matches exactly is the cached query reused, otherwise it is parsed again.
Setting cursor_sharing to FORCE or SIMILAR enables similar statements to share the SQL area.
FORCE – forces similar SQL statements to share the SQL area, possibly degrading the execution plans.
SIMILAR – allows similar SQL statements to share the SQL area without degrading the execution plans.
EXACT – only exactly matching SQL statements share the SQL area. This is the default value.
Caution: Setting CURSOR_SHARING to FORCE or SIMILAR prevents any outlines generated with literals from being used if they were generated with CURSOR_SHARING set to EXACT.
All three modes behave the same way for bind variables. This leads to the bind variable peeking problem: the optimizer keeps the same execution plan however the data is distributed, without considering the percentage of data returned by the predicate.
Oracle 11g has introduced a new feature, adaptive cursor sharing, to choose different plans for queries containing bind variables on skewed data.
create index emp_idx1 on emp (deptno)
exec dbms_stats.gather_index_stats(ownname => 'SYSTEM', indname => 'emp_idx1')
exec dbms_stats.gather_table_stats ( ownname => 'SYSTEM', tabname => 'EMP', method_opt => 'for all indexed columns size skewonly', cascade
=> TRUE );
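In 11g you can confirm that adaptive cursor sharing has kicked in by checking the IS_BIND_SENSITIVE and IS_BIND_AWARE columns of V$SQL (the /* ACS_DEMO */ comment is just an illustrative marker used to find the statement):

```sql
VARIABLE l_deptno NUMBER
EXEC :l_deptno := 630
SELECT /* ACS_DEMO */ COUNT(*) FROM emp WHERE deptno = :l_deptno;
EXEC :l_deptno := 550
SELECT /* ACS_DEMO */ COUNT(*) FROM emp WHERE deptno = :l_deptno;

-- one child cursor per plan; Y/Y flags mark bind-sensitive / bind-aware cursors
SELECT child_number, is_bind_sensitive, is_bind_aware, executions
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* ACS_DEMO */%';
```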
Oracle 9i

SQL> Select * from emp where deptno = 630;
500 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3085206398
----------------------------------------------------------------------------------------
| Id | Operation                   | Name     | Rows | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |          | 8108 |  356K |  118   (0)| 00:00:02 |
|  1 |  TABLE ACCESS BY INDEX ROWID| EMP      | 8108 |  356K |  118   (0)| 00:00:02 |
|* 2 |   INDEX RANGE SCAN          | EMP_IDX1 | 8108 |       |   17   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("DEPTNO"=630)

Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
79 consistent gets
0 physical reads
0 redo size
31937 bytes sent via SQL*Net to client
779 bytes received via SQL*Net from client
35 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
500 rows processed

Oracle 11g

alter session set "_optimizer_adaptive_cursor_sharing"=true
SQL> Select * from emp where deptno = 630;
500 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3085206398
----------------------------------------------------------------------------------------
| Id | Operation                   | Name     | Rows | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |          |  517 | 25333 |    9   (0)| 00:00:01 |
|  1 |  TABLE ACCESS BY INDEX ROWID| EMP      |  517 | 25333 |    9   (0)| 00:00:01 |
|* 2 |   INDEX RANGE SCAN          | EMP_IDX1 |  517 |       |    2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("DEPTNO"=630)

Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
79 consistent gets
0 physical reads
0 redo size
31937 bytes sent via SQL*Net to client
779 bytes received via SQL*Net from client
35 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
500 rows processed
Oracle 9i

SQL> Select * from emp where deptno = 550;
23929 rows selected.
Elapsed: 00:00:00.29

Execution Plan
----------------------------------------------------------
Plan hash value: 3085206398
----------------------------------------------------------------------------------------
| Id | Operation                   | Name     | Rows | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |          | 8108 |  356K |  118   (0)| 00:00:02 |
|  1 |  TABLE ACCESS BY INDEX ROWID| EMP      | 8108 |  356K |  118   (0)| 00:00:02 |
|* 2 |   INDEX RANGE SCAN          | EMP_IDX1 | 8108 |       |   17   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("DEPTNO"=550)

Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
3517 consistent gets
0 physical reads
0 redo size
1564144 bytes sent via SQL*Net to client
17961 bytes received via SQL*Net from client
1597 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
23929 rows processed

Note: In both cases an INDEX scan was performed.

Oracle 11g

alter session set "_optimizer_adaptive_cursor_sharing"=true
SQL> Select * from emp where deptno = 550;
23929 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932
--------------------------------------------------------------------------
| Id | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |      | 23977 | 1147K |  143   (1)| 00:00:02 |
|* 1 |  TABLE ACCESS FULL| EMP  | 23977 | 1147K |  143   (1)| 00:00:02 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("DEPTNO"=550)

Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2099 consistent gets
0 physical reads
0 redo size
1272097 bytes sent via SQL*Net to client
17961 bytes received via SQL*Net from client
1597 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
23929 rows processed

With adaptive cursor sharing, the first query retrieving 500 records used an index scan; for the second query, where 23929 records were fetched, a full table scan was performed.
4 Bind variable peeking
If a query that suffers from a bind variable peeking issue needs to be tuned, we can deactivate bind peeking.
Adaptive cursor sharing is the solution for the bind variable peeking problem. It shares an existing plan only if the bind values are equivalent: if a new bind value falls within the same selectivity range, the same plan is used; if the bind values are not equivalent, a new plan is created.
Oracle 9i

variable l_value number
exec :l_value := 5
SQL> Select * from test_bind where a = :l_value;
Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 317434058
---------------------------------------------------------------------------
| Id | Operation        | Name   | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|  0 | SELECT STATEMENT |        |   11 |    33 |    1   (0)| 00:00:01 |
|* 1 |  INDEX RANGE SCAN| A_IDX1 |   11 |    33 |    1   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("A"=TO_NUMBER(:L_VALUE))

variable l_value number
exec :l_value := 501
Select * from test_bind where a = :l_value;

Execution Plan
----------------------------------------------------------
Plan hash value: 317434058
---------------------------------------------------------------------------
| Id | Operation        | Name   | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|  0 | SELECT STATEMENT |        |   11 |    33 |    1   (0)| 00:00:01 |
|* 1 |  INDEX RANGE SCAN| A_IDX1 |   11 |    33 |    1   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("A"=TO_NUMBER(:L_VALUE))

Oracle 11g

CREATE TABLE bind_test (
  object_id NUMBER,
  object_type varchar2(100),
  CONSTRAINT bind_test_pk PRIMARY KEY (object_id));
CREATE INDEX bind_test_idx ON bind_test(object_type);

BEGIN
  FOR cur IN (select dummy_seq.nextval id, object_type from all_objects) LOOP
    INSERT INTO bind_test VALUES (cur.id, cur.object_type);
  end loop;
  COMMIT;
END;
/

EXEC DBMS_STATS.gather_table_stats(USER, 'bind_test', method_opt=>'for all indexed columns size skewonly', cascade=>TRUE);

alter session set "_optimizer_adaptive_cursor_sharing"=true

set autotrace traceonly
VARIABLE l_obj_type VARCHAR2(100)
EXEC :l_obj_type := 'EDITION'
SELECT COUNT(object_id) FROM bind_test where object_type = :l_obj_type

Execution Plan
-----------------------------------------------------------------------------------
| Id | Operation        | Name          | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|  0 | SELECT STATEMENT |               |    1 |     9 |    9   (0)| 00:00:01 |
|  1 |  SORT AGGREGATE  |               |    1 |     9 |           |          |
|* 2 |   INDEX RANGE SCAN| BIND_TEST_IDX| 1781 | 16029 |    9   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

VARIABLE l_obj_type VARCHAR2(100)
EXEC :l_obj_type := 'PACKAGE'
SELECT COUNT(object_id) FROM bind_test where object_type = :l_obj_type

Execution Plan
--------------------------------------------------------------------------------
| Id | Operation         | Name      | Rows | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |           |    1 |     9 |   51   (2)| 00:00:01 |
|  1 |  SORT AGGREGATE   |           |    1 |     9 |           |          |
|* 2 |   TABLE ACCESS FULL| BIND_TEST| 1781 | 16029 |   51   (2)| 00:00:01 |
--------------------------------------------------------------------------------
5 Conditional compilation
A completely new feature, available from Oracle 10g.
This feature enables you to customize your PL/SQL code at compile time.
Major advantages:
(1) Debugging code can be enabled in the development environment and switched off in the LIVE environment.
(2) The latest functionality can be used against the latest DB release while being disabled against older releases. This allows the code to compile on older versions of Oracle even with the new feature in place! The benefit is that one source file, containing version-specific features, can be applied to any database version.
How does it work?
It is implemented by placing compiler directives in the source code. The following directives are used:
A) Selection directives
Use the $IF directive to evaluate expressions and determine which code should be included or avoided.
B) Inquiry directives
Use the $$identifier syntax to refer to conditional compilation flags. These inquiry directives can be referenced within an $IF directive or used
independently in your code.
C) Error directives
Use the $ERROR directive to report compilation errors based on conditions evaluated when the preprocessor prepares your code for compilation.
Reference : http://download-uk.oracle.com/docs/cd/B19306_01/appdev.102/b14261/fundamentals.htm#BABIHIHF
Example – Using conditional compilation selection directives
A selection directive evaluates a static expression to determine which text should be included in the compilation. The form of a selection directive is:
$IF boolean_static_expression $THEN text
[ $ELSIF boolean_static_expression $THEN text ]
[ $ELSE text ]
$END
Example :
set serveroutput on size 100000
BEGIN
$IF DBMS_DB_VERSION.VER_LE_9_2 $THEN
DBMS_OUTPUT.PUT_LINE( 'Dummy message – this code will not work in this db release');
$ELSE
DBMS_OUTPUT.PUT_LINE ('Release ' || DBMS_DB_VERSION.VERSION || '.' ||
DBMS_DB_VERSION.RELEASE || ' is supported.');
COMMIT;
$END
END;
/
Oracle 10g Oracle 11g
Dummy message – this code will not work in this db release
PL/SQL procedure successfully completed.
Release 11.1 is supported.
PL/SQL procedure successfully completed.
Or we can write something like this -
set serveroutput on size 100000
BEGIN
$IF DBMS_DB_VERSION.VER_LE_9_2 $THEN
-- use this code
$ELSE
-- use this code
$END
END;
/
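To see which branch the compiler actually kept for a stored unit, the DBMS_PREPROCESSOR package (10g) can print the post-processed source – a sketch, with cc_demo as an illustrative procedure name:

```sql
CREATE OR REPLACE PROCEDURE cc_demo IS
BEGIN
  $IF DBMS_DB_VERSION.VER_LE_10_2 $THEN
    NULL;  -- branch kept on 10g and earlier
  $ELSE
    NULL;  -- branch kept on 11g and later
  $END
END;
/
set serveroutput on
-- prints the source as the compiler saw it after conditional compilation
EXEC DBMS_PREPROCESSOR.print_post_processed_source('PROCEDURE', USER, 'CC_DEMO')
```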
Example – Using conditional compilation inquiry directives
Inquiry directives let you reference values set via the PLSQL_CCFLAGS parameter. The form is:
inquiry_directive ::= $$identifier
Example 1 – Debug
ALTER SESSION SET PLSQL_CCFLAGS = 'debug_flag:true, trace_level_val:10';
SET SERVEROUTPUT ON SIZE 10000
BEGIN
$IF $$debug_flag AND $$trace_level_val >= 10
$THEN
DBMS_OUTPUT.PUT_LINE ('The tracing level is set to 10 or higher');
$END
NULL;
END;
/
Oracle 10g Oracle 11g
The tracing level is set to 10 or higher
PL/SQL procedure successfully completed.
The tracing level is set to 10 or higher
PL/SQL procedure successfully completed.
Example 2 – Setting a Common value that can be used across all programs
ALTER SESSION SET PLSQL_CCFLAGS = 'debug_flag:true, trace_level_val:10, max_days:100';
Note: The above example shows how to maintain multiple values and use certain values for certain programs.
DECLARE
v_days number := 200;
BEGIN
IF v_days >= $$max_days THEN
DBMS_OUTPUT.PUT_LINE('The value of v_days ' || v_days ||' greater than ' || $$max_days);
ELSE
DBMS_OUTPUT.PUT_LINE('The value of v_days ' || v_days ||' lesser than ' || $$max_days);
END IF;
END;
/
Oracle 10g Oracle 11g
The value of v_days 200 greater than 100
PL/SQL procedure successfully completed.
The value of v_days 200 greater than 100
PL/SQL procedure successfully completed.
Note: In the above example we did not use $IF / $THEN / $END because we are not using a BOOLEAN static expression.
The following information (corresponding to the values in the USER_PLSQL_OBJECT_SETTINGS data dictionary view) is available via inquiry directives:
$$PLSQL_DEBUG - Debug setting for this compilation unit
$$PLSQL_OPTIMIZE_LEVEL - Optimization level for this compilation unit
$$PLSQL_CODE_TYPE - Compilation mode for the unit
$$PLSQL_WARNINGS - Compilation warnings setting for this compilation unit
$$NLS_LENGTH_SEMANTICS - Value set for the NLS length semantics
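These predefined inquiry directives can be inspected directly in code. A minimal sketch (the values printed depend on your session's compiler settings):

```sql
SET SERVEROUTPUT ON
BEGIN
  -- Predefined inquiry directives reflect the unit's compile-time settings
  DBMS_OUTPUT.PUT_LINE('Optimize level : ' || $$PLSQL_OPTIMIZE_LEVEL);
  DBMS_OUTPUT.PUT_LINE('Code type      : ' || $$PLSQL_CODE_TYPE);
  DBMS_OUTPUT.PUT_LINE('Warnings       : ' || $$PLSQL_WARNINGS);
END;
/
```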
Example – Using conditional compilation Error directives
$ERROR raises a user-defined compile-time error. The form is
$ERROR '<varchar2 message>' $END
set serveroutput on size 100000
BEGIN
$IF DBMS_DB_VERSION.VER_LE_9_2 $THEN
$ERROR 'Dummy message – this code will not work in this db release' $END
$ELSE
DBMS_OUTPUT.PUT_LINE ('Release ' || DBMS_DB_VERSION.VERSION || '.' ||
DBMS_DB_VERSION.RELEASE || ' is supported.');
COMMIT;
$END
END;
/
Oracle 10g Oracle 11g
$ERROR 'Dummy message - this code will not work in this db release' $END
*
ERROR at line 3:
ORA-06550: line 3, column 3:
PLS-00179: $ERROR: Dummy message - this code will not work in this db
release
Release 11.1 is supported.
PL/SQL procedure successfully completed.
6 Collections
6.1 Indices Of
New feature in Oracle 10g.
With FORALL <counter> IN collection.FIRST .. collection.LAST it is not possible to iterate over a sparse collection, because the missing indexes raise an error. From Oracle 10g this can be handled by
using the FORALL <counter> IN INDICES OF <collection name> syntax.
create table emp1(empno number, still_employed varchar2(20))
insert into emp1 values (1, 'Y');
insert into emp1 values (2, 'N');
insert into emp1 values (3, 'Y');
Oracle 9i Oracle 11g
SET SERVEROUTPUT ON SIZE 10000
DECLARE
TYPE emp_id_list IS TABLE OF VARCHAR2(100) INDEX BY
PLS_INTEGER;
TYPE emp_list IS TABLE OF emp1%ROWTYPE;
emp_id_tab emp_id_list ;
emp_tab emp_list := emp_list();
BEGIN
emp_Tab.extend;
emp_tab(1).empno := 10;
emp_Tab.extend;
emp_tab(2).empno := 100;
emp_Tab.extend;
emp_Tab.extend;
SET SERVEROUTPUT ON SIZE 10000
DECLARE
TYPE emp_id_list IS TABLE OF VARCHAR2(100) INDEX BY
PLS_INTEGER;
TYPE emp_list IS TABLE OF emp1%ROWTYPE;
emp_id_tab emp_id_list ;
emp_tab emp_list := emp_list();
BEGIN
emp_Tab.extend;
emp_tab(1).empno := 10;
emp_Tab.extend;
emp_tab(2).empno := 100;
emp_Tab.extend;
emp_Tab.extend;
emp_tab(4).empno := 1000;
emp_Tab.extend;
emp_id_tab(1) := 'Y';
emp_Tab.extend;
emp_id_tab(2) := 'N';
emp_Tab.extend;
emp_Tab.extend;
emp_id_tab(4) := 'Y';
FORALL i IN emp_id_tab.first .. emp_id_tab.last
UPDATE EMP1 SET ROW = emp_tab(i)
WHERE still_employed = emp_id_tab(i);
END;
/
OUTPUT
DECLARE
*
ERROR at line 1:
ORA-22160: element at index [3] does not exist
ORA-06512: at line 24
emp_tab(4).empno := 1000;
emp_Tab.extend;
emp_id_tab(1) := 'Y';
emp_Tab.extend;
emp_id_tab(2) := 'N';
emp_Tab.extend;
emp_Tab.extend;
emp_id_tab(4) := 'Y';
FORALL i IN INDICES OF emp_id_tab
UPDATE EMP1 SET ROW = emp_tab(i)
WHERE still_employed = emp_id_tab(i);
END;
/
OUTPUT
PL/SQL procedure successfully completed.
6.2 Values of
New feature in Oracle 10g.
VALUES OF clause enables to match the elements of one collection against the value of another collection and helps to perform DML operations based on the same.
DELETE FROM EMP1;
Oracle 9i Oracle 11g
SET SERVEROUTPUT ON SIZE 10000
DECLARE
TYPE emp_id_list IS TABLE OF PLS_INTEGER
INDEX BY PLS_INTEGER;
emp_id_tab emp_id_list;
TYPE emp_list IS TABLE OF emp1%ROWTYPE
INDEX BY PLS_INTEGER;
emp_tab emp_list;
BEGIN
emp_id_tab(1) := 10;
emp_id_tab(2) := 9;
emp_id_tab(3) := 8;
SELECT rownum,'Y' BULK COLLECT INTO emp_tab
FROM emp
WHERE ROWNUM <= 50;
FORALL i IN emp_id_tab.FIRST .. emp_id_tab.LAST
INSERT INTO EMP1 VALUES emp_tab(i);
END;
/
OUTPUT
PL/SQL procedure successfully completed.
EMPNO STILL_EMPLOYED
1 Y
2 Y
3 Y
SET SERVEROUTPUT ON SIZE 10000
DECLARE
TYPE emp_id_list IS TABLE OF PLS_INTEGER
INDEX BY PLS_INTEGER;
emp_id_tab emp_id_list;
TYPE emp_list IS TABLE OF emp1%ROWTYPE
INDEX BY PLS_INTEGER;
emp_tab emp_list;
BEGIN
emp_id_tab(1) := 10;
emp_id_tab(2) := 9;
emp_id_tab(3) := 8;
SELECT rownum,'Y' BULK COLLECT INTO emp_tab
FROM emp
WHERE ROWNUM <= 50;
FORALL i IN VALUES OF emp_id_tab
INSERT INTO EMP1 VALUES emp_tab(i);
END;
/
OUTPUT
PL/SQL procedure successfully completed.
EMPNO STILL_EMPLOYED
10 Y
9 Y
8 Y
In Oracle 9i the only alternative is to create a nested table and compare empno
against TABLE(nested_table) in a SELECT statement.
If we look at the above results, the VALUES OF clause matches the elements of one collection exactly against the elements of the other collection and inserts the corresponding values.
This cannot be achieved in Oracle 9i unless we explicitly match up the elements ourselves.
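A sketch of the Oracle 9i-style workaround, indexing one collection through the other in a plain loop instead of VALUES OF (assumes the emp_id_tab and emp_tab collections from the example above):

```sql
-- Oracle 9i style: row-by-row, indexing emp_tab through emp_id_tab's values
FOR i IN emp_id_tab.FIRST .. emp_id_tab.LAST LOOP
  INSERT INTO emp1 VALUES emp_tab(emp_id_tab(i));
END LOOP;
```

This gives the same rows as the 11g VALUES OF version, but loses the bulk-bind performance benefit of FORALL.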
6.3 Error handling
Oracle 9i Oracle 11g
SET SERVEROUTPUT ON SIZE 10000
DECLARE
TYPE emp_id_list IS TABLE OF PLS_INTEGER
INDEX BY PLS_INTEGER;
emp_id_tab emp_id_list;
TYPE emp_list IS TABLE OF emp1%ROWTYPE
INDEX BY PLS_INTEGER;
emp_tab emp_list;
BEGIN
emp_id_tab(1) := 10;
emp_id_tab(2) := 9;
emp_id_tab(3) := 100;
SELECT rownum,'Y' BULK COLLECT INTO emp_tab
FROM emp
WHERE ROWNUM <= 50;
FORALL i IN emp_id_tab.FIRST .. emp_id_tab.LAST
INSERT INTO EMP1 VALUES emp_tab(i);
SET SERVEROUTPUT ON SIZE 10000
DECLARE
TYPE emp_id_list IS TABLE OF PLS_INTEGER
INDEX BY PLS_INTEGER;
emp_id_tab emp_id_list;
TYPE emp_list IS TABLE OF emp1%ROWTYPE
INDEX BY PLS_INTEGER;
emp_tab emp_list;
BEGIN
emp_id_tab(1) := 10;
emp_id_tab(2) := 9;
emp_id_tab(3) := 100;
SELECT rownum,'Y' BULK COLLECT INTO emp_tab
FROM emp
WHERE ROWNUM <= 50;
FORALL i IN VALUES OF emp_id_tab
INSERT INTO EMP1 VALUES emp_tab(i);
EXCEPTION WHEN OTHERS THEN
dbms_output.put_line('Error message ' || SQLERRM);
END;
/
OUTPUT
PL/SQL procedure successfully completed.
EMPNO STILL_EMPLOYED
1 Y
2 Y
3 Y
This will still work – because it does not try to match the elements.
EXCEPTION WHEN OTHERS THEN
dbms_output.put_line('Error message ' || SQLERRM);
END;
/
OUTPUT
Error message ORA-22160: element at index [100] does not exist
6.4 Collect
New feature from Oracle 11g Release 1
COLLECT aggregates rows into a collection using a single function.
Running it on sql developer -
Oracle 9i Oracle 11g
Not possible select deptno , collect(ename) enm from scott.emp
group by deptno ;
10 VARCHAR(CLARK,KING,MILLER)
20 VARCHAR(SMITH,FORD,ADAMS,SCOTT,JONES)
30 VARCHAR(ALLEN,BLAKE,MARTIN,TURNER,JAMES,WARD)
We cannot do something like this !!
SELECT qry.dno, SUBSTR(qry.enm,8) empnm from
(select deptno dno , collect(ename) enm from scott.emp
group by deptno ) qry
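One workaround (a sketch, assuming a named SQL collection type that you create yourself; the type name here is illustrative) is to CAST the COLLECT result to that named type, which replaces the system-generated type name with a usable one:

```sql
-- Hypothetical named collection type
CREATE TYPE ename_list AS TABLE OF VARCHAR2(30);
/
SELECT deptno, CAST(COLLECT(ename) AS ename_list) AS enames
FROM   scott.emp
GROUP  BY deptno;
```

The result can then be expanded again with TABLE(enames) in an outer query.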
Running it on sqlplus with report related command-
SQL> break on deptno skip 1;
SQL> select deptno , collect(ename) as empnm from scott.emp group by deptno ;
Oracle 9i Oracle 11g
Not possible DEPTNO EMPNM
--------------------------------------------------------------------------------
10 SYSTPi7LdWP3QSeisIuh7s78iIg==('CLARK', 'KING', 'MILLER')
20 SYSTPi7LdWP3QSeisIuh7s78iIg==('SMITH', 'FORD', 'ADAMS', 'SCOTT', 'JONES')
30 SYSTPi7LdWP3QSeisIuh7s78iIg==('ALLEN', 'BLAKE', 'MARTIN', 'TURNER', 'JAMES', 'WARD')
6.5 Collection Assignment
Collection assignment is now improved with additional multiset operators: MULTISET UNION, MULTISET INTERSECT and MULTISET EXCEPT, each with an optional DISTINCT.
SET SERVEROUTPUT ON
DECLARE
TYPE software_tab IS TABLE OF VARCHAR2(1000);
soft_list_1 software_tab := software_tab('Oracle','C','C#','VB','Sql');
soft_list_2 software_tab := software_tab('Oracle','C','PHP','Java');
soft_list_3 software_tab;
BEGIN
-- this is as usual
soft_list_3 := soft_list_1;
FOR i IN soft_list_3.first .. soft_list_3.last LOOP
DBMS_OUTPUT.put_line('Assignment eg ' || soft_list_3(i));
END LOOP;
soft_list_3 := soft_list_1 MULTISET UNION DISTINCT soft_list_2;
FOR i IN soft_list_3.first .. soft_list_3.last LOOP
DBMS_OUTPUT.put_line('Multiset union distinct eg ' || soft_list_3(i));
END LOOP;
soft_list_3 := soft_list_1 MULTISET INTERSECT DISTINCT soft_list_2;
FOR i IN soft_list_3.first .. soft_list_3.last LOOP
DBMS_OUTPUT.put_line('Multiset intersect distinct eg ' || soft_list_3(i));
END LOOP;
soft_list_3 := soft_list_1 MULTISET EXCEPT soft_list_2;
FOR i IN soft_list_3.first .. soft_list_3.last LOOP
DBMS_OUTPUT.put_line('Multiset except eg ' || soft_list_3(i));
END LOOP;
soft_list_3 := soft_list_1 MULTISET INTERSECT soft_list_2;
FOR i IN soft_list_3.first .. soft_list_3.last LOOP
DBMS_OUTPUT.put_line('Multiset intersect eg ' || soft_list_3(i));
END LOOP;
soft_list_3 := soft_list_1 MULTISET EXCEPT DISTINCT soft_list_2;
FOR i IN soft_list_3.first .. soft_list_3.last LOOP
DBMS_OUTPUT.put_line('Multiset except distinct eg ' || soft_list_3(i));
END LOOP;
soft_list_3 := soft_list_1 MULTISET UNION soft_list_2;
FOR i IN soft_list_3.first .. soft_list_3.last LOOP
DBMS_OUTPUT.put_line('Multiset union eg ' || soft_list_3(i));
END LOOP;
END;
/
OUTPUT-
Assignment eg Oracle
Assignment eg C
Assignment eg C#
Assignment eg VB
Assignment eg Sql
Multiset union distinct eg Oracle
Multiset union distinct eg C
Multiset union distinct eg C#
Multiset union distinct eg VB
Multiset union distinct eg Sql
Multiset union distinct eg PHP
Multiset union distinct eg Java
Multiset intersect distinct eg Oracle
Multiset intersect distinct eg C
Multiset except eg C#
Multiset except eg VB
Multiset except eg Sql
Multiset intersect eg Oracle
Multiset intersect eg C
Multiset except distinct eg C#
Multiset except distinct eg VB
Multiset except distinct eg Sql
Multiset union eg Oracle
Multiset union eg C
Multiset union eg C#
Multiset union eg VB
Multiset union eg Sql
Multiset union eg Oracle
Multiset union eg C
Multiset union eg PHP
Multiset union eg Java
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.07
6.6 Improved comparisons
SET SERVEROUTPUT ON
DECLARE
TYPE software_tab IS TABLE OF VARCHAR2(10);
software_list_1 software_tab := software_tab('Oracle','C','C#','VB','Sql');
software_list_2 software_tab := software_tab('Oracle','C','PHP','Java');
software_list_3 software_tab;
BEGIN
IF (software_list_3 IS NULL) AND (software_list_1 IS NOT NULL) THEN
DBMS_OUTPUT.put_line('Value - list3 is null and list1 is not null');
END IF;
software_list_3 := software_list_1;
IF (software_list_3 = software_list_1) AND (software_list_3 != software_list_2) THEN
DBMS_OUTPUT.put_line('list3 = list1 and list3 != list2 ');
END IF;
IF (SET(software_list_2) SUBMULTISET software_list_1) AND (software_list_1 NOT SUBMULTISET software_list_2) THEN
DBMS_OUTPUT.put_line( 'list2 submultiset of list1 and list1 is not sub multiset of list2');
END IF;
DBMS_OUTPUT.put_line('Duplicates related print list 2 -' || CARDINALITY(software_list_2));
DBMS_OUTPUT.put_line( 'Remove duplicates list2 - ' || CARDINALITY(SET(software_list_2)) || ' - Remove duplicates');
IF software_list_2 IS NOT A SET THEN
DBMS_OUTPUT.put_line( 'software_list_2 has duplicates');
END IF;
IF software_list_3 IS NOT EMPTY THEN
DBMS_OUTPUT.put_line( 'List3 is not empty');
END IF;
END;
/
OUTPUT-
Value - list3 is null and list1 is not null
list3 = list1 and list3 != list2
Duplicates related print list 2 -4
Remove duplicates list2 - 4 - Remove duplicates
List3 is not empty
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.01
6.7 Improved SET operator
Normal assignment copies all the values, including duplicates; the SET function removes the duplicates before assignment.
SET SERVEROUTPUT ON
DECLARE
TYPE software_tab IS TABLE OF VARCHAR2(10);
software_list_1 software_tab := software_tab('Oracle','C','C#','VB','Sql', 'Oracle','Sql');
software_list_2 software_tab;
PROCEDURE display (p_text IN VARCHAR2,
p_col IN software_tab) IS
BEGIN
DBMS_OUTPUT.put_line(CHR(10) || p_text);
FOR i IN p_col.first .. p_col.last LOOP
DBMS_OUTPUT.put_line(p_col(i));
END LOOP;
END;
BEGIN
software_list_2 := software_list_1;
FOR i IN software_list_2.first .. software_list_2.last LOOP
DBMS_OUTPUT.put_line('normal Assignment - ' || software_list_2(i));
END LOOP;
software_list_2 := SET(software_list_1);
FOR i IN software_list_2.first .. software_list_2.last LOOP
DBMS_OUTPUT.put_line('set assignment - ' || software_list_2(i));
END LOOP;
END;
/
OUTPUT-
normal Assignment - Oracle
normal Assignment - C
normal Assignment - C#
normal Assignment - VB
normal Assignment - Sql
normal Assignment - Oracle
normal Assignment - Sql
set assignment - Oracle
set assignment - C
set assignment - C#
set assignment - VB
set assignment - Sql
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.04
7 CONTINUE
Feature available from Oracle 11g
CONTINUE controls program flow within a loop. In earlier versions of Oracle we had to use either an IF condition or EXIT from the loop; now CONTINUE and CONTINUE
WHEN can be used.
Oracle 9i Oracle 11g
SET SERVEROUTPUT ON SIZE 100000
DECLARE
v_item_num NUMBER := 0;
BEGIN
WHILE ( v_item_num < 200)
LOOP
LOOP
v_item_num := v_item_num + 5;
IF MOD(v_item_num,25) = 0 THEN
EXIT;
END IF;
END LOOP;
Dbms_output.put_line('Item number = ' || v_item_num);
END LOOP;
END;
/
OUTPUT
Item number = 25
Item number = 50
Item number = 75
Item number = 100
CONTINUE WHEN
SET SERVEROUTPUT ON SIZE 100000
DECLARE
v_item_num NUMBER := 0;
BEGIN
WHILE ( v_item_num < 200)
LOOP
v_item_num := v_item_num + 5;
CONTINUE WHEN MOD(v_item_num,25) <> 0;
Dbms_output.put_line('Item number = ' || v_item_num);
END LOOP;
END;
/
OUTPUT
Item number = 25
Item number = 50
Item number = 75
Item number = 125
Item number = 150
Item number = 175
Item number = 200
PL/SQL procedure successfully completed.
Item number = 100
Item number = 125
Item number = 150
Item number = 175
Item number = 200
PL/SQL procedure successfully completed.
CONTINUE
SET SERVEROUTPUT ON SIZE 100000
DECLARE
v_item_num NUMBER := 0;
BEGIN
WHILE ( v_item_num < 200)
LOOP
v_item_num := v_item_num + 5;
IF MOD(v_item_num,25) <> 0 THEN
CONTINUE;
END IF;
Dbms_output.put_line('Item number = ' || v_item_num);
END LOOP;
END;
/
OUTPUT
Item number = 25
Item number = 50
Item number = 75
Item number = 100
Item number = 125
Item number = 150
Item number = 175
Item number = 200
PL/SQL procedure successfully completed.
8 Compile time warnings
Compile-time warnings are available from Oracle 10g. To avoid problems at run time and to keep code clean, we can turn on checking for certain compile-time warnings.
Syntax-
PLSQL_WARNINGS = 'value_clause' [, 'value_clause' ] ...
value_clause::=
{ ENABLE | DISABLE | ERROR }:
{ ALL
| SEVERE
| INFORMATIONAL
| PERFORMANCE
| { integer
| (integer [, integer ] ...)}}
If the value is set to ERROR, the unit will not compile successfully. If the value is set to ENABLE, it compiles, but with compilation warnings.
To re-compile an existing procedure
ALTER PROCEDURE <procedure> COMPILE PLSQL_WARNINGS='ENABLE:ALL'
ALTER SESSION / SYSTEM SET PLSQL_WARNINGS='ENABLE:ALL';
ALTER SESSION / SYSTEM SET PLSQL_WARNINGS='DISABLE:ALL';
ALTER SESSION / SYSTEM SET PLSQL_WARNINGS='ENABLE:PERFORMANCE';
ALTER SESSION / SYSTEM SET PLSQL_WARNINGS='DISABLE:PERFORMANCE';
ALTER SESSION /SYSTEM SET PLSQL_WARNINGS='ERROR:ALL';
ALTER SESSION / SYSTEM SET PLSQL_WARNINGS='ENABLE:SEVERE', 'DISABLE:PERFORMANCE';
Not possible ALTER SESSION SET PLSQL_WARNINGS='ERROR:ALL';
CREATE OR REPLACE PROCEDURE DUMMY
AS
v_count varchar2(10);
BEGIN
v_count := 5;
END;
/
Warning: Procedure created with compilation errors.
OUTPUT
SELECT STATUS FROM ALL_OBJECTS WHERE OBJECT_NAME = 'DUMMY'
INVALID
LINE/COL ERROR
-------- -----------------------------------------------------------------
5/5 PLW-07206: analysis suggests that the assignment to 'V_COUNT' may
be unnecessary
Not possible ALTER SESSION SET PLSQL_WARNINGS='ENABLE:ALL';
SP2-0804: Procedure created with compilation warnings
OUTPUT
SELECT STATUS FROM ALL_OBJECTS WHERE OBJECT_NAME = 'DUMMY'
VALID
Not possible ALTER SESSION SET PLSQL_WARNINGS='DISABLE:ALL';
CREATE OR REPLACE PROCEDURE DUMMY
AS
v_count varchar2(10);
BEGIN
v_count := 5;
END;
/
ALTER PROCEDURE dummy COMPILE PLSQL_WARNINGS='ENABLE:ALL'
SELECT STATUS FROM ALL_OBJECTS WHERE OBJECT_NAME = 'DUMMY'
VALID
9 Compression
Compression has been available in Oracle for some time. With Oracle 9i we can use basic table COMPRESSION, but DML operations defeat it: unless we perform direct-path inserts, compression does not work as expected. Oracle 11g introduces COMPRESS FOR ALL OPERATIONS, which keeps the data compressed through DML operations as well – even with a 10x data load! This minimizes the overhead of using compression for OLTP tables. For partitioned tables compression can be controlled at the partition level, so the same table can have partitions compressed at very different levels. If compression is specified both at the table level and at the partition level, the partition-level setting overrides the table-level one, which allows reasonably fine-grained control over compressing and managing the tables.
Advanced Compression requires an additional license in 11g.
Compression here operates on the block, not the row. It does not slow down DML, because compression does not happen as each row is inserted: rows are inserted uncompressed, in the normal way, and after a certain number of rows have been inserted or updated in a block, those rows are compressed in batch, like a triggered event.
Important note: Compression is CPU intensive. So when the compression is enabled the CPU resource usage will be high.
How does compression work internally?
Oracle finds the repeating values in the compressed table and stores them near the header of the block, in a "symbol table". Each repeated value in a column is replaced by a symbol referencing the symbol-table entry, and the symbol is smaller than the original data. The more repeating data there is, the more compact the block becomes. Net net, one of the key factors driving this compression is the amount of repeating data in the table.
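Related: in 11g Release 2 the expected benefit can be estimated up front with DBMS_COMPRESSION.GET_COMPRESSION_RATIO. A sketch (the scratch tablespace name is an assumption – use one you can write to – and the parameter list may vary by release, so check the reference for your version):

```sql
SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(100);
BEGIN
  -- Samples the table into a scratch tablespace and reports the estimated ratio
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'USERS',   -- assumption: a tablespace you can write to
    ownname        => USER,
    tabname        => 'OBJ_UNCOMPRESS',
    partname       => NULL,
    comptype       => DBMS_COMPRESSION.COMP_FOR_OLTP,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  DBMS_OUTPUT.PUT_LINE('Estimated ratio: ' || l_cmp_ratio || ' (' || l_comptype_str || ')');
END;
/
```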
Compressed table access via dblinks
It is said that compressed tables take less time to fetch across dblinks / the network, since fewer blocks are transferred.
Note : The following is run from my PC
9.1 Insert based on Select
Oracle 9i (created these scripts on an 11g environment) Oracle 11g
Alter system flush shared_pool
Table Creation Script -
CREATE TABLE obj_compress
COMPRESS
AS SELECT * FROM all_objects
/
Timing - Elapsed: 00:00:10.73
select sum(bytes)/1024 from user_segments where segment_name =
'OBJ_COMPRESS'
3072
SELECT COMPRESS_FOR FROM ALL_TABLES WHERE TABLE_NAME =
'ALL_OBJECTS'
OUTPUT - DIRECT LOAD ONLY
CREATE TABLE obj_uncompress
AS SELECT * FROM all_objects
/
select sum(bytes)/1024 from user_segments where segment_name =
'OBJ_UNCOMPRESS'
8192
Timing : 00:00:09:79
Alter system flush shared_pool
Table Creation Script -
CREATE TABLE obj_compress
COMPRESS FOR ALL OPERATIONS
AS SELECT * FROM all_objects
/
Timing - Elapsed: 00:00:08.57
select sum(bytes)/1024 from user_segments where segment_name =
'OBJ_COMPRESS'
3072
SELECT COMPRESS_FOR FROM ALL_TABLES WHERE TABLE_NAME =
'ALL_OBJECTS'
OUTPUT – COMPRESS FOR ALL OPERATIONS
CREATE TABLE obj_uncompress
AS SELECT * FROM all_objects
/
select sum(bytes)/1024 from user_segments where segment_name =
'OBJ_UNCOMPRESS'
8192
Timing : 00:00:09:79
2.7 : 1 – (compressed vs uncompressed ratio) 2.7 : 1 – (compressed vs uncompressed ratio)
One time load of existing data
Insertion -
INSERT INTO obj_compress
SELECT * from all_objects
Timing - Elapsed: 00:00:08.43
select sum(bytes)/1024 from user_segments where segment_name =
'OBJ_COMPRESS'
10240
INSERT INTO obj_uncompress
SELECT * from all_objects
select sum(bytes)/1024 from user_segments where segment_name =
'OBJ_UNCOMPRESS'
16384
Timing : 00:00:08:36
The segment grew from 3072 KB to 10240 KB – roughly the full uncompressed size of the
inserted data – so the conventional INSERT did not compress the new rows (the table is DIRECT LOAD ONLY).
Data compression ratio:
1.6 : 1 (compressed vs compressed ratio)
One time load of existing data
Insertion -
INSERT INTO obj_compress
SELECT * from all_objects
Timing - Elapsed: 00:00:13.29
select sum(bytes)/1024 from user_segments where segment_name =
'OBJ_COMPRESS'
6144
INSERT INTO obj_uncompress
SELECT * from all_objects
select sum(bytes)/1024 from user_segments where segment_name =
'OBJ_UNCOMPRESS'
16384
Timing : 00:00:08:36
Since the segment size has only doubled (3072 KB to 6144 KB), it is clear that the
conventional INSERT has also compressed the data.
Data compression ratio:
2.7 : 1 (compressed vs compressed ratio)
10 times load of the current data 10 times load of the current data
INSERT INTO obj_compress
SELECT a.* from all_objects a, (select * from dual connect by level <= 10) b
Timing : Elapsed: 00:00:18.37
select sum(bytes)/1024 from user_segments where segment_name =
'OBJ_COMPRESS'
81920
INSERT INTO obj_uncompress
SELECT a.* from all_objects a, (select * from dual connect by level <= 10) b
select sum(bytes)/1024 from user_segments where segment_name =
'OBJ_UNCOMPRESS'
97280
Data compression ratio:
1.18 : 1 - (compressed vs compressed ratio)
Drop table obj_compress
Drop table obj_uncompress
INSERT INTO obj_compress
SELECT a.* from all_objects a, (select * from dual connect by level <= 10) b
Timing - Elapsed: 00:02:11.39
select sum(bytes)/1024 from user_segments where segment_name =
'OBJ_COMPRESS'
49152
INSERT INTO obj_uncompress
SELECT a.* from all_objects a, (select * from dual connect by level <= 10) b
select sum(bytes)/1024 from user_segments where segment_name =
'OBJ_UNCOMPRESS'
97280
Timing - Elapsed: 00:00:20.39
Data compression ratio:
1.97 : 1 - (compressed vs compressed ratio)
Drop table obj_compress
Drop table obj_uncompress
STATISTICS
SELECT * FROM obj_compress WHERE rownum < 5000
statistics
---------------------------------------------------------
1270 recursive calls
0 db block gets
STATISTICS -
SELECT * FROM obj_compress WHERE rownum < 5000
Statistics
----------------------------------------------------------
331 recursive calls
0 db block gets
695 consistent gets
996 physical reads
2624 redo size
235739 bytes sent via SQL*Net to client
4079 bytes received via SQL*Net from client
335 SQL*Net roundtrips to/from client
7 sorts (memory)
0 sorts (disk)
4999 rows processed
SELECT * FROM obj_uncompress WHERE rownum < 5000
Statistics
----------------------------------------------------------
31 recursive calls
0 db block gets
653 consistent gets
1730 physical reads
5172 redo size
511350 bytes sent via SQL*Net to client
4079 bytes received via SQL*Net from client
335 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4999 rows processed
535 consistent gets
228 physical reads
0 redo size
235739 bytes sent via SQL*Net to client
4079 bytes received via SQL*Net from client
335 SQL*Net round trips to/from client
0 sorts (memory)
0 sorts (disk)
4999 rows processed
9.2 Insert for single column
If we insert a single column full of unique values, compression really doesn't reduce the size of the compressed table!
Oracle 9i Oracle 11g
CREATE TABLE obj_uncompress
AS SELECT object_id FROM all_objects
SELECT COUNT(1) FROM obj_uncompress
67690
DECLARE
j number := 0;
BEGIN
FOR i in 1 .. 500000 loop
j := i;
INSERT INTO OBJ_UNCOMPRESS VALUES (j);
END LOOP;
END;
/
OUTPUT:
SQL> DECLARE
2 j number := 0;
3 BEGIN
4 FOR i in 1 .. 500000 loop
5 j := i;
6 INSERT INTO OBJ_UNCOMPRESS VALUES (j);
7 END LOOP;
8 END;
9 /
PL/SQL procedure successfully completed.
Elapsed: 00:01:32.40
OLTP INSERT – like we do real time
CREATE TABLE obj_compress
COMPRESS FOR ALL OPERATIONS
AS SELECT object_id FROM all_objects
SELECT COUNT(1) FROM obj_compress
67690
DECLARE
j number := 0;
BEGIN
FOR i in 1 .. 500000 loop
j := i;
INSERT INTO OBJ_COMPRESS VALUES (j);
END LOOP;
END;
/
OUTPUT:
PL/SQL procedure successfully completed.
Elapsed: 00:01:45.14
Elapsed: 00:00:00.07
SQL> select sum(bytes)/1024 from user_segments where segment_name =
'OBJ_COMPRESS';
SUM(BYTES)/1024
---------------
select sum(bytes)/1024 from user_segments where segment_name =
'OBJ_UNCOMPRESS';
SUM(BYTES)/1024
---------------
7168
Elapsed: 00:00:01.10
30720
Note: OLTP UPDATE / DELETE / MERGE for 500 rows took a very long time – I suspect mainly because 11g is installed on my laptop, which has limited RAM and
CPU. Test it in an environment with proper resources.
For a one-time load of data the compression ratio was close to 2.7 : 1, whereas with a 10x data load it was 1.97 : 1. Based on the above examples,
at least 50-60% compression can be achieved. (The 90% figure is based on direct-path inserts; in real-time scenarios we perform inserts quite differently.)
Notice that inserting into a COMPRESS FOR ALL OPERATIONS table takes a little more time than inserting into a DIRECT LOAD ONLY
compressed table, due to the extra CPU work. Given the advantages of compression, this small overhead on DML operations is
acceptable!
Benefits -
1. Reduction of disk space
2. Extra savings on I/O and cache efficiency – Oracle operates directly on the compressed data without incurring the overhead to uncompress the data then use it.
3. Performance of full table scans where ever required also becomes more efficient
4. Fair reduction in the consistent gets as the blocks used to store the data is less
10 Commit_write parameter
Feature available from Oracle 10g.
COMMIT_WRITE - COMMIT_WRITE = '{IMMEDIATE | BATCH},{WAIT |NOWAIT}'
COMMIT_WAIT - COMMIT_WAIT = { NOWAIT | WAIT | FORCE_WAIT }
COMMIT_POINT_STRENGTH – values range from 0 to 255
COMMIT_LOGGING = COMMIT_LOGGING = { IMMEDIATE | BATCH }
Examples:
Oracle 11g
COMMIT_WRITE – All to do with writing to REDO logs
commit_write wait;
Doesn't return unless the redo information is written to the online redo log.
CREATE TABLE TEST_COMMIT
(a number,
b varchar2(100))
/
set timing on
declare
j number := 1;
BEGIN
FOR i IN 1 .. 50000 loop
insert into test_Commit values ( j, 'COMMIT WRITE WAIT');
j := j + 1;
end loop;
COMMIT WRITE WAIT;
END;
/
Elapsed: 00:00:25.90
commit_write nowait;
Returns before the redo information is written to the online redo log.
truncate table test_commit;
set timing on
declare
j number := 1;
BEGIN
FOR i IN 1 .. 50000 loop
insert into test_Commit values (j, 'COMMIT WRITE NOWAIT');
j := j+1;
END LOOP;
COMMIT WRITE NOWAIT;
END;
/
Elapsed: 00:00:08.76
commit_write batch;
Redo information writes are deferred
truncate table test_commit;
set timing on
declare
j number := 1;
BEGIN
FOR i IN 1 .. 50000 loop
insert into test_Commit values (j, 'COMMIT WRITE BATCH;');
j := j+1;
END LOOP;
COMMIT WRITE BATCH;
END;
/
Elapsed: 00:00:05.45
commit_write immediate;
Redo information is written immediately to the online redo log.
truncate table test_commit;
set timing on
declare
j number := 1;
BEGIN
FOR i IN 1 .. 50000 loop
insert into test_Commit values (j, 'COMMIT WRITE IMMEDIATE');
j := j+1;
END LOOP;
COMMIT WRITE IMMEDIATE;
END;
/
Elapsed: 00:00:04.48
It can be set at SESSION / SYSTEM level -
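For example, using the parameter syntax shown at the top of this chapter:

```sql
-- Defer and batch redo writes for this session only
ALTER SESSION SET COMMIT_WRITE = 'BATCH,NOWAIT';

-- Restore the default behaviour
ALTER SESSION SET COMMIT_WRITE = 'IMMEDIATE,WAIT';
```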
11 Dynamic SQL for PLSQL - Functional completeness
Dynamic SQL statements larger than 32KB can now be written: DBMS_SQL.PARSE is overloaded to accept a CLOB. A REF cursor can now also be converted
into a DBMS_SQL cursor, and vice versa.
DBMS_SQL – executes dynamic SQL statements with unknown IN / OUT variables – similar to Method 4 in Pro*C. When the columns a SELECT
statement returns, or their data types, are not known until run time, DBMS_SQL is the best way to go.
Native dynamic SQL – available since Oracle 8i for performing dynamic SQL. It can retrieve records, but the variables must be known at compile time.
We can use the cursor attributes %ISOPEN, %FOUND, %NOTFOUND and %ROWCOUNT.
Oracle 11g
CREATE OR REPLACE PROCEDURE native_dyn_sql
(in_source_code IN CLOB) AS
BEGIN
EXECUTE IMMEDIATE in_source_code;
dbms_output.put_line('The value of source code is ' || in_source_code);
END native_dyn_sql;
/
exec native_dyn_sql ('begin dbms_output.put_line(''hello how are you !!''); end;')
SQL> exec native_dyn_sql ('begin dbms_output.put_line(''hello how are you !!'');
end;')
hello how are you !!
The value of source code is begin dbms_output.put_line('hello how are you !!');
end;
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.06
SQL>
Now it is possible to switch between DBMS_SQL and native dynamic SQL, which was not possible prior to Oracle 11g. This is achieved with
DBMS_SQL.TO_REFCURSOR and DBMS_SQL.TO_CURSOR_NUMBER.
Oracle 11g
Create or replace procedure convert_native_dbms
AS
TYPE native_cursor IS REF CURSOR;
native_cursor_tab native_cursor;
TYPE empno_list IS TABLE OF NUMBER;
empno_tab empno_list; -- target for the BULK COLLECT fetch below
cursor_handle NUMBER;
cursor_return NUMBER;
BEGIN
cursor_handle := dbms_sql.open_cursor;
dbms_sql.parse(cursor_handle, 'select empno from emp where rownum < 10', dbms_sql.native);
cursor_return := dbms_sql.execute(cursor_handle);
/* Save the cursor handler to native_cursor_Tab */
native_cursor_tab := dbms_sql.to_refcursor (cursor_handle);
/* dbms_sql into native dynamic sql now */
FETCH native_cursor_Tab BULK COLLECT INTO empno_tab;
FOR i in empno_tab.first .. empno_tab.last LOOP
DBMS_OUTPUT.PUT_LINE('The value of empno_Tab is - ' || empno_tab(i));
END LOOP;
CLOSE native_cursor_tab;
END convert_native_dbms;
/
RESULT
SQL> exec convert_native_dbms;
The value of empno_Tab is - 1
The value of empno_Tab is - 2
The value of empno_Tab is - 3
The value of empno_Tab is - 4
The value of empno_Tab is - 5
The value of empno_Tab is - 6
The value of empno_Tab is - 7
The value of empno_Tab is - 8
The value of empno_Tab is - 9
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.42
Also, it is possible to transform a REF CURSOR into a DBMS_SQL cursor – this is achieved using DBMS_SQL.TO_CURSOR_NUMBER.
Oracle 11g
Create or replace procedure convert_dbms_native( in_cursor IN VARCHAR2)
AS
TYPE native_cursor IS REF CURSOR;
native_cursor_tab native_cursor;
TYPE empno_list IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
empno_tab empno_list;
cursor_handle NUMBER;
cursor_return NUMBER;
v_emp_no NUMBER;
v_num_columns number;
v_describe dbms_sql.desc_tab;
BEGIN
OPEN native_cursor_tab FOR in_cursor;
cursor_handle := DBMS_SQL.TO_CURSOR_NUMBER(native_cursor_tab);
dbms_sql.describe_columns(cursor_handle, v_num_columns, v_describe);
FOR i in 1 .. v_num_columns LOOP
if v_describe(i).col_type = 2 THEN -- DBMS_SQL type code 2 = NUMBER
dbms_sql.define_column(cursor_handle, i, v_emp_no);
END IF;
END LOOP;
WHILE DBMS_SQL.FETCH_ROWS(cursor_handle) > 0 LOOP
FOR i in 1 .. v_num_columns LOOP
if v_describe(i).col_type = 2 then
dbms_sql.column_value (cursor_handle, i, v_emp_no);
dbms_output.put_line('The value of empno is - ' || v_emp_no);
end if;
END LOOP;
END LOOP;
dbms_sql.close_cursor(cursor_handle);
END;
/
exec convert_dbms_native('select empno from emp where rownum <= 10')
12 DML Error logging
When we bulk process records – especially when performing DML operations based on global temporary tables – it becomes really difficult to trap errors
unless we use bulk exception handling (SAVE EXCEPTIONS).
Create table emp1(empno number, still_employed varchar2(20))
create unique index emp1_idx on emp1 (empno)
Oracle 9i Oracle 11g
< .... >
forall j in emp_tab.first .. emp_tab.last save exceptions
INSERT INTO <table name> VALUES (emp_tab(j));
exception when bulk_errors then
for j in 1 .. sql%bulk_exceptions.Count
loop
dbms_output.put_line ( 'Error -' ||
To_Char(sql%bulk_exceptions(j).error_index) || ': ' ||
Sqlerrm(SQL%bulk_exceptions(j).error_code) );
end loop;
< .... >
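The < .... > placeholders elide the declarations the Oracle 9i pattern needs – in particular, bulk_errors is a user-defined exception mapped to ORA-24381. A minimal self-contained sketch (reusing the emp1 table created above; the duplicate values are contrived just to force errors):

DECLARE
   bulk_errors EXCEPTION;
   PRAGMA EXCEPTION_INIT (bulk_errors, -24381);  -- ORA-24381: error(s) in array DML
   TYPE emp_list IS TABLE OF emp1%ROWTYPE INDEX BY PLS_INTEGER;
   emp_tab emp_list;
BEGIN
   FOR i IN 1 .. 5 LOOP                -- duplicate empno values force ORA-00001
      emp_tab(i).empno := 1;
      emp_tab(i).still_employed := 'Y';
   END LOOP;
   FORALL j IN emp_tab.FIRST .. emp_tab.LAST SAVE EXCEPTIONS
      INSERT INTO emp1 VALUES emp_tab(j);
EXCEPTION
   WHEN bulk_errors THEN
      FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
         dbms_output.put_line('Error -' ||
            TO_CHAR(SQL%BULK_EXCEPTIONS(j).error_index) || ': ' ||
            SQLERRM(-SQL%BULK_EXCEPTIONS(j).error_code));  -- error_code is stored unsigned
      END LOOP;
END;
/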
exec dbms_errlog.CREATE_ERROR_LOG ('EMP1','ERR_EMP1')
INSERT
insert into emp1 values (1, 'Y');
insert into emp1 values (2, 'N');
insert into emp1 values (3, 'Y');
insert into emp1 SELECT rownum, 'N' from dual connect by level <= 20
log errors into err_emp1
reject limit 50
17 rows created.
insert into emp1 SELECT rownum, 'N' from dual connect by level <= 20
log errors into err_emp1
reject limit 10
SQL> insert into emp1 SELECT rownum, 'N' from dual connect by level <= 20
2 log errors into err_emp1
3 reject limit 10;
insert into emp1 SELECT rownum, 'N' from dual connect by level <= 20
*
ERROR at line 1:
ORA-00001: unique constraint (SYSTEM.EMP1_IDX) violated
SELECT * FROM ERR_EMP1;
Output (the error-log control columns, then the rejected row's values):
ORA_ERR_NUMBER$  ORA_ERR_MESG$                                            ORA_ERR_OPTYP$  EMPNO  STILL_EMPLOYED
---------------  -------------------------------------------------------  --------------  -----  --------------
              1  ORA-00001: unique constraint (SYSTEM.EMP1_IDX) violated  I               1      N
< .... >
forall j in emp_tab.first .. emp_tab.last save exceptions
UPDATE <table name> SET .....
exception when bulk_errors then
for j in 1 .. sql%bulk_exceptions.Count
loop
dbms_output.put_line ( 'Error -' ||
To_Char(sql%bulk_exceptions(j).error_index) || ': ' ||
Sqlerrm(SQL%bulk_exceptions(j).error_code) );
end loop;
< .... >
UPDATE
SQL> desc emp1    -- note: STILL_EMPLOYED is now VARCHAR2(1), so 'YES' will not fit
Name Null? Type
----------------------------------------- -------- -------------------
EMPNO NUMBER
STILL_EMPLOYED VARCHAR2(1)
UPDATE emp1
SET still_employed = decode(still_employed,'Y', 'YES', 'N') where
empno in (1,6)
log errors into err_emp1
reject limit 2
1 row updated.
UPDATE emp1
SET still_employed = 'YES' where
empno between 1 and 20
log errors into err_emp1
reject limit 2
ERROR at line 2:
ORA-12899: value too large for column "SYSTEM"."EMP1"."STILL_EMPLOYED"
(actual: 3, maximum: 1)
< .... >
forall j in emp_tab.first .. emp_tab.last save exceptions
DELETE FROM <table name> WHERE ...
exception when bulk_errors then
for j in 1 .. sql%bulk_exceptions.Count
loop
dbms_output.put_line ( 'Error -' ||
To_Char(sql%bulk_exceptions(j).error_index) || ': ' ||
Sqlerrm(SQL%bulk_exceptions(j).error_code) );
end loop;
< .... >
DELETE
DELETE FROM emp1
where empno between 1 and 20
log errors into err_emp1
reject limit 2
(Mainly useful for referential-integrity violations – e.g. child rows blocking a parent delete.)
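To illustrate the referential-integrity point, a hypothetical child table (emp1_child and emp1_pk are invented names for this sketch; a foreign key needs a primary or unique constraint on the parent):

alter table emp1 add constraint emp1_pk primary key (empno);
create table emp1_child (empno number references emp1 (empno));
insert into emp1_child values (5);
commit;

DELETE FROM emp1
where empno between 1 and 20
log errors into err_emp1
reject limit 2;

-- the parent row with empno = 5 is rejected (ORA-02292) and logged in ERR_EMP1;
-- the other rows are deleted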
< .... >
forall j in emp_tab.first .. emp_tab.last save exceptions
MERGE INTO <table name> USING
(SELECT emp_tab.blah blah FROM DUAL)
....
exception when bulk_errors then
for j in 1 .. sql%bulk_exceptions.Count
loop
dbms_output.put_line ( 'Error -' ||
To_Char(sql%bulk_exceptions(j).error_index) || ': ' ||
Sqlerrm(SQL%bulk_exceptions(j).error_code) );
end loop;
< .... >
MERGE
13 External table
In Oracle 9i we could only read from external tables – from Oracle 10g onwards we can also write to them, i.e. both LOADING and UNLOADING are possible via external tables.
Let's take a look at a quick example of the unloading part -
create or replace directory ext_dir as 'F:\app\Luxananda\oradata\lux';
CREATE TABLE external_test
ORGANIZATION EXTERNAL
(TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY ext_dir LOCATION ('DEPT.DAT'))
reject limit unlimited
AS select * from dept
As we have not specified a log file, one was created automatically: EXTERNAL_TEST_2672_4200.log
SQL> select * from external_test ;
DEPTNO DNAME LOC
---------- ---------- ---------
380 ACCOUNTING SINGAPORE
630 IT SINGAPORE
120 RESEARCH SINGAPORE
320 OPERATIONS SINGAPORE
550 SALES SINGAPORE
What do the contents of DEPT.DAT look like? They are in XML format!
Now, let's run Select dbms_metadata.get_ddl('TABLE','EXTERNAL_TEST') from dual; – it returns the table creation DDL.
DBMS_METADATA.GET_DDL('TABLE','EXTERNAL_TEST')
CREATE TABLE "SYSTEM"."EXTERNAL_TEST"
( "DEPTNO" NUMBER,
"DNAME" VARCHAR2(10),
"LOC" CHAR(9)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY "EXT_DIR"
LOCATION
( 'DEPT.DAT'
)
) REJECT LIMIT UNLIMITED
Now, if we create another external table in some other environment and point it at the same file, we will be able to read the unloaded data from it.
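A sketch of the read side – assuming DEPT.DAT has been copied to a directory visible to the target database (ext_dir2, external_read and the path are invented names for this sketch):

create or replace directory ext_dir2 as '/u01/xfer';
CREATE TABLE external_read
( deptno NUMBER,
  dname  VARCHAR2(10),
  loc    CHAR(9)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY ext_dir2
  LOCATION ('DEPT.DAT')
)
REJECT LIMIT UNLIMITED;

select * from external_read;  -- returns the rows unloaded from DEPT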
14 Flashback
The flashback feature has been available since Oracle 9i, and Oracle 10g added quite a few new capabilities.
The new and improved flashback technique performs recovery at a faster pace. Flashback offers the capability to query past versions of schema objects, query historical data, perform change analysis, or perform self-service repair to recover from logical corruptions – all while the database is online.
Advantages:
● 24x7 database availability
● Saves time
Includes:
● Flashback Database
● Flash Recovery Area
● Flashback logging, enabled through Enterprise Manager or by issuing SQL commands
It is faster than traditional recovery. The time taken to restore the database is usually based on the number of transactions that need to be recovered rather than on the size of the database. The older recovery methods use REDO LOG files to recover the database; Flashback Database introduces a new type of log called the FLASHBACK DATABASE LOG.
How does it work?
The Oracle database periodically logs before-images of blocks into the flashback database logs. These stored block images are then used to quickly restore the database during a flashback operation.
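Once flashback logging is enabled (steps below), the available flashback window can be monitored from V$FLASHBACK_DATABASE_LOG – for example:

select oldest_flashback_scn,
       oldest_flashback_time,
       retention_target,
       flashback_size
from   v$flashback_database_log;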
This is DBA activity -
1. The database should be in archive mode -
Executing the ARCHIVE LOG LIST command at the SQL*Plus prompt tells us whether the database is in archive mode:
ARCHIVE LOG LIST;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination F:\logarchive
Oldest online log sequence 1
Current log sequence 2
If it is in "No Archive Mode" then following needs to be done to set it in archive mode
a. Shut down the database:
SQL> SHUTDOWN IMMEDIATE;
b. Open the init<SID>.ora file and set the following parameters:
log_archive_dest_1='LOCATION=F:\logarchive'
log_archive_dest_2='LOCATION=F:\logarchive1' /* Optional - only if a second archive destination is wanted */
log_archive_format='%t_%s.ARCH'
c. Start database in mount exclusive mode
STARTUP MOUNT EXCLUSIVE PFILE=init<SID>.ora;
d. Start the database in ARCHIVELOG mode as follows:
ALTER DATABASE ARCHIVELOG;
e. Open the database
ALTER DATABASE OPEN;
2. Assign the flash recovery area path, size and log retention values in the init.ora file:
DB_RECOVERY_FILE_DEST=F:\logarchive\flasharea
Set the following :
DB_RECOVERY_FILE_DEST_SIZE
DB_FLASHBACK_RETENTION_TARGET
3. Only SYSDBA can do this - open the db in MOUNT EXCLUSIVE mode
SQL> STARTUP MOUNT EXCLUSIVE;
SQL> ALTER DATABASE FLASHBACK ON;
4. Check whether flashback is enabled:
select log_mode, flashback_on from v$database;
LOG_MODE     FLASHBACK_ON
------------ ------------------
ARCHIVELOG   YES
To disable flashback database - ALTER DATABASE FLASHBACK OFF;
Now, how do we flash back the database?
FLASHBACK DATABASE TO TIMESTAMP (SYSDATE);
We can also use an SCN to flash back the database - FLASHBACK DATABASE TO SCN <scn number>;
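A sketch of the SCN route – capture the current SCN before a risky change and flash back to it afterwards (the SCN value shown is purely illustrative):

select current_scn from v$database;   -- e.g. 1234567
-- ... perform (and regret) some change ...
SHUTDOWN IMMEDIATE;
STARTUP MOUNT EXCLUSIVE;
FLASHBACK DATABASE TO SCN 1234567;
ALTER DATABASE OPEN RESETLOGS;        -- a flashed-back database must be opened with RESETLOGS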
Oracle 9i: Not possible - only traditional recovery is possible.
Oracle 11g: Flashback Database (introduced in 10g).
The database needs to be in archive mode.
Recovery area param needs to be changed - DB_RECOVERY_FILE_DEST and
DB_RECOVERY_FILE_DEST_SIZE
select flashback_on from v$database;
NO
To enable it the following needs to be done.
STARTUP MOUNT EXCLUSIVE;
ALTER DATABASE FLASHBACK ON;
Set up DB_FLASHBACK_RETENTION_TARGET
FLASHBACK DATABASE TO TIME = TO_DATE ('02/13/10 12:00:00','MM/DD/YY HH:MI:SS');
-- Purely DBA related
Flashback Drop
● Recycle Bin - automatically enabled with Oracle Database 10g
To check the details:
SQL> show parameter recycle
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------
buffer_pool_recycle                  string
db_recycle_cache_size                big integer 0
recyclebin                           string      on
Sometimes we accidentally drop objects in the database. Flashback drop provides an option to recover dropped objects such as tables, triggers, indexes, constraints etc. When a table is dropped it is not actually dropped - it is just renamed, and the renamed table name is available in the recycle bin. We can either drop the table permanently using purge or recover it using flashback. To drop a table without making it part of the recycle bin, issue the DROP TABLE <tablename> PURGE command. USER_RECYCLEBIN / DBA_RECYCLEBIN contain the list of dropped objects.
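A few standard purge variations, for reclaiming the space a dropped table still occupies:

PURGE TABLE test_readonly;        -- purge one dropped table from the recycle bin
PURGE RECYCLEBIN;                 -- purge everything in the current user's recycle bin
DROP TABLE test_readonly PURGE;   -- drop and bypass the recycle bin entirely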
Oracle 9i: Not possible.
Oracle 11g: Flashback Drop (10g) - does not work from the SYSDBA user.
DROP TABLE test_readonly;
SHOW recyclebin
-- Note: when a table is dropped it is just renamed; it is not actually dropped
-- while RECYCLEBIN is on, so the space it occupied remains allocated.
SQL> select object_name, original_name, operation from user_recyclebin;
OBJECT_NAME ORIGINAL_NAME OPERATION
------------------------------ -------------------------------- ---------
BIN$uCJ5kSosTM+s7chj6cJeGQ==$0 TEST_READONLY DROP
BIN$wl9CUjW7RsapzkrsoHDm+w==$0 TEST_READONLY DROP
Note: TEST_READONLY was dropped, recreated then dropped again.
To retrieve the table that was dropped, issue the following command.
SQL> flashback table "BIN$uCJ5kSosTM+s7chj6cJeGQ==$0" to before drop;
Flashback complete.
Elapsed: 00:00:02.71
SQL> select * from test_Readonly;