Attempting to push down selection in joins (to reduce the number of rows at each step)
Determining which indexes are available for use in the plan and whether that changes the join algorithm (hash, nested loop, ...)
Calculating the I/O and CPU costs for each step of the plan
Choosing the plan with the best overall graph weight
Building graphs for joins in different orders
R ⋈ S = S ⋈ R (commutativity)
(R ⋈ S) ⋈ T = R ⋈ (S ⋈ T) (associativity)
Join is a binary relational algebra operation
It follows the mathematical rules of associativity and commutativity
The optimizer generally follows the process above: build the join graphs, cost each candidate order, and keep the cheapest, as in the example below
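A hedged illustration with hypothetical tables r, s, and t: by commutativity and associativity, the following queries return the same rows, so the optimizer is free to cost either join order and pick the cheaper one.

SELECT *
FROM   (r JOIN s ON r.id = s.r_id) JOIN t ON t.s_id = s.id;

SELECT *
FROM   r JOIN (s JOIN t ON t.s_id = s.id) ON r.id = s.r_id;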
If a table is joined but none of its attributes (fields) are projected (in the select-list), consider it for join removal
If the table being joined is redundant due to PK/FK integrity constraints, it can be considered for removal.
If the table being joined is redundant due to unique constraints, it can be considered for removal.
If all conditions are met, the join can be removed from the query plan, as in the example below.
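A minimal sketch, assuming a hypothetical schema in which orders.customer_id is a NOT NULL foreign key referencing the customers primary key:

-- No customers column appears in the select-list, and the FK guarantees
-- exactly one matching customer per order, so the join adds nothing:
SELECT o.order_id, o.order_date
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id;

-- The optimizer can execute it as:
SELECT o.order_id, o.order_date
FROM   orders o;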
Based on the concept of constraint exclusion
Drops (prunes) all partitions that could not possibly contain rows needed by our plan
If you have range-based partitions for Q1, Q2, Q3, and Q4 and query data BETWEEN '01-JAN-09' AND '15-JAN-09', the optimizer knows the data cannot exist in partitions Q2, Q3, or Q4 and excludes them based on their constraints (Q2 >= '2009-04-01' AND Q2 < '2009-07-01', and so on), as sketched below
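In Oracle syntax, with a hypothetical sales table:

CREATE TABLE sales (
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION q1 VALUES LESS THAN (DATE '2009-04-01'),
  PARTITION q2 VALUES LESS THAN (DATE '2009-07-01'),
  PARTITION q3 VALUES LESS THAN (DATE '2009-10-01'),
  PARTITION q4 VALUES LESS THAN (DATE '2010-01-01')
);

-- Every qualifying row must fall in q1, so q2, q3, and q4 are pruned:
SELECT SUM(amount)
FROM   sales
WHERE  sale_date BETWEEN DATE '2009-01-01' AND DATE '2009-01-15';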
To reduce the cost of a join, check whether the number of rows in R or S can be reduced before the join by pushing parts of the predicate down to the individual tables.
One area where this is beneficial is in nested loop joins, whose basic cost is determined by cardinality(R) * cardinality(S)
Another area where this is beneficial is in hash joins, where the server can significantly reduce the cardinality of one of the tables, so less CPU and memory are required to build the hash table (see the rewrite sketched below)
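A minimal sketch with hypothetical tables r and s; the optimizer evaluates the first form as if it were the second, filtering each input before the join:

SELECT r.a, s.b
FROM   r JOIN s ON r.id = s.r_id
WHERE  r.region = 'WEST'
AND    s.status = 'OPEN';

-- is treated roughly as:
SELECT r.a, s.b
FROM   (SELECT * FROM r WHERE region = 'WEST') r
JOIN   (SELECT * FROM s WHERE status = 'OPEN') s
ON     r.id = s.r_id;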
Build an Execution Plan
Transform the best query plan into an execution plan for running the query.
Execute the Plan
For each node in the execution plan, perform the respective operation.
Execute the Plan (Index Node)
Find all rows in the index on mytable.foo where foo = 'bar'.
Open the index
Perform a search on the B*Tree
Once the index entry is found, locate the data in the heap (table data) using the ROWID.
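In SQL terms, this node corresponds to a query like the following (assuming an index exists on mytable(foo)); the typical plan is an INDEX RANGE SCAN on that index followed by TABLE ACCESS BY INDEX ROWID on mytable:

SELECT *
FROM   mytable
WHERE  foo = 'bar';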
Perform B*Tree Search
FUNCTION btree_search(x, k)
  i := 1
  -- Scan the keys in node x until k could fit at position i
  WHILE i <= n[x] AND k > key_i[x] DO
    i := i + 1
  END WHILE
  -- Exact match within this node
  IF i <= n[x] AND k = key_i[x] THEN
    RETURN (x, i)
  END IF
  IF leaf[x] THEN
    RETURN NIL                      -- reached a leaf; the key is not present
  ELSE
    kcbget(c_i[x])                  -- read the child block into the buffer cache
    RETURN btree_search(c_i[x], k)  -- descend into the i-th child
  END IF
END FUNCTION
Retrieving a Block (kcbg*())
Check the Buffer Cache
If the block isn't in the buffer cache, read it from disk (sfrfb() - System File Read File Block)
If the block is in the buffer cache, and no one has altered it without committing, use it.
If the block is in the buffer cache, and someone has altered it without committing, build a before image of the block from UNDO and use it (kcbgtcr() - Kernel Cache Buffer GeT for Consistent Read).
Updating the Data
Once the data has been found, update it
Acquire a row-level lock by placing an entry in the interested transaction list (ITL). If the row is already locked, wait (or don't wait, depending on what the user requested)
The server generates an UNDO/REDO record containing the change vector for the record (ktugur() - Kernel Transaction Undo Generate Undo and Redo)
UNDO contains the column foo with its old value, 'bar'
REDO contains the column foo with its new value, 'baz' (see the UPDATE sketched below)
The Oracle server returns a packet to the client indicating the success or failure of the statement.
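Continuing the running example, the statement driving these steps would look like this; the old and new values of foo are what land in UNDO and REDO, respectively:

UPDATE mytable
SET    foo = 'baz'   -- new value, recorded in REDO
WHERE  foo = 'bar';  -- old value, preserved in UNDO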
Committing the Data
The client sends a commit message to the server, which the server processes as before. Because it's a command, it does not have to go through query planning.
Flush the REDO/UNDO data to disk up to the point of the commit
Increment the SCN (System Change Number)
Fast Commit and Delayed Block Cleanout
In Fast Commit mode, Oracle does not clean the ITL it used in the last transaction as part of the commit.
The next request to read the block will check whether the transactions in the ITL are still in progress; if not, there is no reason to get a consistent-read version of the block.
If the next request is DML, it will itself perform ITL cleanup for the old transaction.
Fetching the Data
The client sends the server a fetch request for N rows from the cursor.
The server marshals the data to be sent over the network
The server sends as many packets as are necessary to contain the data
The client reads the data, unmarshals it, performs any necessary encoding changes, and returns it to the application. A batched fetch is sketched below.
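A minimal PL/SQL sketch of this fetch loop, reusing the running mytable example; the LIMIT clause plays the role of N, the number of rows requested per round trip:

DECLARE
  TYPE foo_tab IS TABLE OF mytable.foo%TYPE;
  l_rows foo_tab;
  CURSOR c IS SELECT foo FROM mytable;
BEGIN
  OPEN c;
  LOOP
    -- Ask the server for up to 100 rows in a single round trip
    FETCH c BULK COLLECT INTO l_rows LIMIT 100;
    EXIT WHEN l_rows.COUNT = 0;
    -- process the batch here
  END LOOP;
  CLOSE c;
END;
/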