DEMO 1 - Magic Switch: ON/OFF
select count(*) from vanzari
where valoare < 100;
Smart Scan Magic
[Diagram: smart scan flow. The database issues a direct path read for `select ... from vanzari where tara = 'RO';` across Cell Node 1, Cell Node 2 and Cell Node 3; each cell gets the blocks from disk, applies the predicate, and sends back only the needed data.]
● Instead of returning whole Oracle blocks, the storage layer
will pump in just the needed column values.
● The IO traffic is reduced.
● For JOINs, besides the columns from the SELECT list, the columns from the join condition are also transferred.
YES, “SELECT *” when just some columns are needed isn’t a good idea on Exadata either!
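To make the projection benefit concrete, a hedged sketch (it reuses the `vanzari`/`valoare` names from the demo; `id_client` is a made-up column; the v$ statistic names are the ones Exadata exposes for smart scan):

```sql
-- List only the columns you actually need, so the cells can project
-- just those values instead of shipping whole blocks
-- (id_client is a hypothetical column, for illustration only):
select id_client, valoare
from   vanzari
where  valoare < 100;

-- A rough way to see how much interconnect traffic smart scan
-- produced for the current session:
select sn.name, ms.value
from   v$mystat   ms
join   v$statname sn on sn.statistic# = ms.statistic#
where  sn.name in ('cell physical IO interconnect bytes',
                   'cell physical IO interconnect bytes returned by smart scan');
```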
● Instead of pushing full Oracle blocks from the storage to the Oracle instance, only those records matching the predicate will be pumped onto the wire.
● The IO traffic is reduced (the filtering is done on the storage cells).
● Works for joins too, using Bloom filters, but applied on the storage cells.
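One way to verify that the predicate was actually offloaded is to look for the `storage(...)` filter line in the execution plan (a sketch; the exact plan output varies by version):

```sql
explain plan for
select * from vanzari where tara = 'RO';

select * from table(dbms_xplan.display);

-- With offload, the plan shows:
--   TABLE ACCESS STORAGE FULL | VANZARI
-- with a predicate line like:
--   storage("TARA"='RO')
```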
Storage Indexes Explained
[Diagram: the segment is split into 1MB storage regions; for each region the cell keeps the min/max values per column. For a query like `... from vanzari where valoare <= 1000;`, regions whose min/max range excludes the predicate are skipped entirely.]
Predicates that can use storage indexes: =, <, >, <=, >=, BETWEEN, IN, IS NULL, IS NOT NULL
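Whether storage indexes kicked in cannot be seen in the execution plan, only in a session statistic; a minimal check (statistic name as exposed in 11.2, `valoare` being the demo column):

```sql
-- Run a qualifying query first, e.g.:
select count(*) from vanzari where valoare <= 1000;

-- Then check how many bytes the storage indexes let the cells skip:
select sn.name, ms.value
from   v$mystat   ms
join   v$statname sn on sn.statistic# = ms.statistic#
where  sn.name = 'cell physical IO bytes saved by storage index';
```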
Do You Want to Drop all DB Indexes?
● There are many scenarios where SI can’t be used (e.g. getting the MIN or MAX for a column)
● SI is a transient in-memory structure. Restart a storage server and you lose all storage indexes on that server
● The decision to create a SI is fully automated and it
depends on the workload.
● A SI can’t be used to enforce a PK constraint
● SIs are limited to only 8 columns per table
● No integration between SI and CBO
● There’s no such thing as functional SI.
● A SI can’t be used for FK columns to avoid locking
Smart Flash Cache
4 cards x 4 disks x 24GB = 384GB of flash on every storage cell
Controlling the SFC
● on segment level
● on storage level
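At segment level this is done with the `CELL_FLASH_CACHE` storage attribute; a sketch on the demo table (storage-level control is done per cell via CellCLI):

```sql
-- Pin the table's blocks in the Smart Flash Cache:
alter table vanzari storage (cell_flash_cache keep);

-- Default behavior (the cell decides what to cache):
alter table vanzari storage (cell_flash_cache default);

-- Keep the table out of the flash cache entirely:
alter table vanzari storage (cell_flash_cache none);
```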
HCC: Hybrid Columnar Compression
● QUERY LOW: LZO compression algorithm. Low CPU and low compression.
● QUERY HIGH: ZLIB (gzip) compression, roughly 6x compression.
● ARCHIVE LOW: ZLIB (gzip), but with a higher compression level.
● ARCHIVE HIGH: BZIP2 compression. CPU intensive, high compression rate.
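The level is chosen per table (or per partition) at DDL time; a hedged sketch using the demo table name (`vanzari_arh` is made up):

```sql
-- Compress an archive copy with the heaviest level:
create table vanzari_arh
  compress for archive high
  as select * from vanzari;

-- Or rebuild an existing table at a lighter level:
alter table vanzari move compress for query high;

-- Check which compression type a given row actually has
-- (returns a numeric constant such as DBMS_COMPRESSION.COMP_FOR_QUERY_HIGH):
select dbms_compression.get_compression_type(user, 'VANZARI', rowid)
from   vanzari
where  rownum = 1;
```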
● Only tables can be compressed. Indexes are not included.
● Exadata interoperability only. Pay attention when restoring HCC data on a non-Exadata machine.
● Not good for OLTP.
● May consume a lot of CPU (for compression, but also for decompression)
● It’s not a true columnar model, so it wasn’t really designed with query performance in mind
IO Resource Manager
● Tight integration with Database Resource Manager
● Provides IO prioritization
● Can be enforced across different databases hosted on the
same Exadata machine
● A very important component for consolidated/shared environments
● In old versions of Exadata, only physical disks were managed by IORM; flash disks were not. This is no longer the case starting with 184.108.40.206.0.
● Workload optimization: low_latency, high_throughput, balanced, auto (the IORM objective)
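Inside one database, prioritization comes from a DBRM plan, which IORM then honors on the cells; a minimal sketch with made-up plan and group names (inter-database plans and the objective itself are set per cell with CellCLI, e.g. `ALTER IORMPLAN objective='low_latency'`):

```sql
begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_plan('DAY_PLAN', 'OLTP first, reports second');
  dbms_resource_manager.create_consumer_group('OLTP_GRP', 'Interactive work');
  dbms_resource_manager.create_plan_directive(
      plan             => 'DAY_PLAN',
      group_or_subplan => 'OLTP_GRP',
      mgmt_p1          => 70);
  dbms_resource_manager.create_plan_directive(
      plan             => 'DAY_PLAN',
      group_or_subplan => 'OTHER_GROUPS',
      mgmt_p1          => 30);
  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/
```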
Things I don’t Like
● The Exadata official documentation was not publicly accessible. It’s fixed now via a patch: Patch 10386736:
Storage Application for the Grid Environment 220.127.116.11.0 for
Linux x86-64 - DOCUMENTATION FOR EXADATA 11.2
● It’s a closed engineered product. However, according to Kevin Closson, Exadata was originally intended to be an open platform.
● The lack of good integration of the CBO with Exadata storage (Bug 10248538: OPTIMIZER COST MODEL NOT EXADATA-AWARE)
● Big room for performance instability (storage indexes, flash
cache, direct path read etc.)
● The huge price