Oracle Exadata for OLTP and DWH
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions.  The development, release, and timing of any features or functionality described for Oracle’s products remain at the sole discretion of Oracle.
The Architecture of the Future Exadata Massively Parallel Grids Best for Data Warehousing Best for OLTP Best for Consolidation
Database Machine Success
“Every query was faster on Exadata compared to our current systems. The smallest performance improvement was 10x and the biggest one was an incredible 72x.” Simeon Dimitrov, Enterprise Resources Manager. “Call Data Record queries that used to run for over 30 minutes now complete in under 1 minute. That's extreme performance.” Grant Salmon, CEO, LGR Telecommunications. “A query that used to take 24 hours now runs in less than 30 minutes. The Oracle Database Machine beats competing solutions on bandwidth, load rate, disk capacity, and transparency.” Christian Maar, CIO
“The Oracle Database Machine is an ideal cost-effective platform to meet our speed and scalability needs.” Ketan Parekh, Manager Database Systems. “After carefully testing several data warehouse platforms, we chose the Oracle Database Machine. Oracle Exadata was able to speed up one of our critical processes from days to minutes.” Brian Camp, Sr. VP of Infrastructure Services
Extreme Performance Gains Customer benchmark results across Telecom, Retail, and Finance customers, with average gains ranging from 15x to 28x
“ When it comes to speed, Oracle Exadata technology has changed the game completely…..”  Grant Salmon CEO LGR Telecommunications from Profit Magazine, February 2009
Introducing Oracle Exadata Version 2
Oracle Exadata Database Machine Version 1 World’s Fastest Machine for Data Warehousing Extreme Performance for Sequential I/O 10x Faster than other Oracle D/W Systems Version 2 World’s Fastest Machine for OLTP Extreme Performance for Random I/O 2x Version 1 Data Warehousing Performance Dramatic new Exadata Software Capabilities
Oracle Exadata V2 - Best OLTP Machine Only Oracle runs real-world business applications  “on the Grid” Unique fault-tolerant scale-out OLTP database RAC, Data Guard, Online Operations Unique fault-tolerant scale-out storage suitable for OLTP ASM, Exadata
Best Data Warehouse Machine Massively parallel high volume hardware to quickly process vast amounts of data Exadata runs data intensive processing  directly in storage  Most complete analytic capabilities  OLAP, Analytic SQL, Spatial, Data Mining, Real-time  transactional ETL, Efficient queries Powerful warehouse specific optimizations Flexible Partitioning, Bitmap Indexing, Join indexing, Materialized Views, Result Cache Data Mining OLAP ETL
Drastically Simplified Deployments Database Machine eliminates the complexity of deploying database systems Months of configuration, troubleshooting, tuning Database Machine is ready on day one Pre-built, tested, standard, supportable configuration Runs existing applications unchanged Extreme performance out of the box From Months to Days
Sun Oracle Database Machine Hardware Complete, Pre-configured, Tested for Extreme Performance Database Servers Exadata Storage Servers InfiniBand Switches Ethernet Switch Pre-cabled Keyboard, Video, Mouse (KVM) hardware Power Distribution Units (PDUs) Ready to Deploy Plug in power Connect to Network Ready to Run Database
Familiar Technology with Powerful Performance
© 2009 Oracle Corporation What’s Inside Exadata? Exadata Storage Servers, Database Servers, and InfiniBand Switches
Sun Oracle Database Machine Exadata Storage Server Grid 21 GB/sec disk bandwidth 50 GB/sec flash bandwidth 1 million I/Os per second Oracle Database Server Grid Millions of transactions  per minute Tens of millions of queries  per minute Billions of rows per minute InfiniBand Network 880 Gb/sec aggregate throughput Extreme Performance
Sun Oracle Database Machine Highest performance, lowest cost per unit of performance Fault tolerant, Scalable on demand Exadata Storage Server Grid 14 storage servers 100 TB raw SAS disk storage or 336 TB raw SATA disk storage 5TB of flash storage! Oracle Database Server Grid 8 compute servers 64 Intel Cores 576 GB DRAM InfiniBand Network 40 Gb/sec unified server and storage network Fault Tolerant
Sun Oracle Database Machine Hardware Improvements Same architecture as the Exadata V1 Database Machine: same number and type of servers, CPUs, and disks, plus flash storage! Latest technologies: 80% faster CPUs (Xeon 5500 Nehalem), 33% more SAS disk capacity (600 GB SAS disks), 100% more SATA disk capacity (2 TB SATA disks), 50% faster disk throughput (6 Gb SAS links), 100% faster networking (40 Gb InfiniBand), 125% more memory (72 GB per DB node), 200% faster memory (DDR3 DRAM), 100% more Ethernet connectivity (4 Ethernet links per DB node)
Exadata Database Servers
Sun Oracle Database Server 8 Sun Fire X4170 database servers per rack • 8 CPU cores – 2x performance • 72 GB memory – 2.5x increase • Redundant HCA path to 2 switches • Fully redundant power and cooling
Exadata Storage Servers
Sun Oracle Exadata Storage Server •  14 Sun Fire X4275 per rack •  5x faster than conventional storage •  2x more storage capacity •  Simplifies storage to eliminate complex SAN architectures •  Sun FlashFire Technology turbocharges applications
Exadata FlashFire Card
Sun FlashFire Technology Extreme Performance Accelerator •  10x better IO response time •  5.25 Terabytes Flash per rack •  1,000,000 IOPS per rack •  20x IOPS speedup for Oracle •  Integrated super caps for data retention New
Exadata InfiniBand Switch
InfiniBand Network High Bandwidth, Low Latency •  Sun Datacenter InfiniBand Switch 36 •  Fully redundant non-blocking IO paths from servers to storage •  2.88 Tb/sec bi-sectional bandwidth per switch •  40 Gb/sec QDR, Dual port QSFP per server
Exadata Configurations
Start Small and Grow Full Rack Half Rack Quarter Rack Basic System
Sun Oracle Database Machine Full Rack Pre-Configured for Extreme Performance 8 Sun Fire™ X4170 Oracle Database servers 14 Exadata Storage Servers (all SAS or all SATA) 3 Sun Datacenter InfiniBand Switch 36 switches (36-port managed QDR, 40 Gb/s) 1 “Admin” Cisco Ethernet switch Keyboard, Video, Mouse (KVM) hardware Redundant Power Distribution Units (PDUs) Single Point of Support from Oracle: 3 year, 24 x 7, 4 Hr on-site response Add more racks for additional scalability
4 Sun Fire™ X4170 Oracle Database servers 7 Exadata Storage Servers (all SAS or all SATA) 2 Sun Datacenter InfiniBand Switch 36 switches (36-port managed QDR, 40 Gb/s) 1 “Admin” Cisco Ethernet switch Keyboard, Video, Mouse (KVM) hardware Redundant PDUs Single Point of Support from Oracle: 3 year, 24 x 7, 4 Hr on-site response Sun Oracle Database Machine Half Rack Pre-Configured for Extreme Performance Can Upgrade to a Full Rack
2 Sun Fire™ X4170 Oracle Database servers 3 Exadata Storage Servers (all SAS or all SATA) 2 Sun Datacenter InfiniBand Switch 36 switches (36-port managed QDR, 40 Gb/s) 1 “Admin” Cisco Ethernet switch Keyboard, Video, Mouse (KVM) hardware Redundant PDUs Single Point of Support from Oracle: 3 year, 24 x 7, 4 Hr on-site response Sun Oracle Database Machine Quarter Rack Pre-Configured for Extreme Performance Can Upgrade to a Half Rack
Sun Oracle Database Machine Basic System Entry Level non-HA Configuration 1 Sun Fire™ X4170 Oracle Database server 1 Exadata Storage Server (SAS or SATA) 1 Sun Datacenter InfiniBand Switch 36 (36-port managed QDR, 40 Gb/s) InfiniBand cables Installed in customer-supplied rack Customer-supplied Ethernet and KVM infrastructure Single Point of Support from Oracle: 3 year, 24 x 7, 4 Hr on-site response
Exadata Product Capacity

                                 Single Server   Quarter Rack   Half Rack   Full Rack
Raw Disk (1)        SAS          7.2 TB          21 TB          50 TB       100 TB
                    SATA         24 TB           72 TB          168 TB      336 TB
Raw Flash (1)                    384 GB          1.1 TB         2.6 TB      5.3 TB
User Data (2)       SAS          2 TB            6 TB           14 TB       28 TB
(no compression)    SATA         7 TB            21 TB          50 TB       100 TB

1 – Raw capacity calculated using 1 GB = 1000 x 1000 x 1000 bytes and 1 TB = 1000 x 1000 x 1000 x 1000 bytes.
2 – User Data: actual space for end-user data, computed after single mirroring (ASM normal redundancy) and after allowing space for database structures such as temp, logs, undo, and indexes. Actual user data capacity varies by application. User Data capacity calculated using 1 TB = 1024 x 1024 x 1024 x 1024 bytes.
Exadata Product Performance

                                        Single Server   Quarter Rack   Half Rack   Full Rack
Raw Disk Data Bandwidth (1,4)   SAS     1.5 GB/s        4.5 GB/s       10.5 GB/s   21 GB/s
                                SATA    0.85 GB/s       2.5 GB/s       6 GB/s      12 GB/s
Raw Flash Data Bandwidth (1,4)          3.6 GB/s        11 GB/s        25 GB/s     50 GB/s
Max User Data Bandwidth (2,4)           36 GB/s         110 GB/s       250 GB/s    500 GB/s
Disk IOPS (3,4)                 SAS     3,600           10,800         25,000      50,000
                                SATA    1,440           4,300          10,000      20,000
Flash IOPS (3,4)                        75,000          225,000        500,000     1,000,000
Data Load Rate (4)                      0.65 TB/hr      1 TB/hr        2.5 TB/hr   5 TB/hr

1 – Bandwidth is peak physical disk scan bandwidth, assuming no compression.
2 – Max User Data Bandwidth assumes scanned data is compressed by a factor of 10 and is on Flash.
3 – IOPS based on I/O requests of size 8K.
4 – Actual performance will vary by application.
Exadata Storage Server Software
Exadata Storage Features Exadata Smart Scans: 10X or greater reduction in data sent to database servers. Exadata Storage Indexes: eliminate unnecessary I/Os to disk. Hybrid Columnar Compression: efficient compression increases user data scan rates. Flash: increases scan rates; the flash cache removes the spinning-disk bottleneck.
Exadata Smart Storage Breaks Data Bandwidth and Random I/O Bottleneck Oracle addresses the data bandwidth bottleneck in three ways. Massively parallel storage grid of high-performance Exadata storage servers (cells): data bandwidth scales with data volume. Data-intensive processing runs in Exadata storage: queries run in storage as data streams from disk, offloading database server CPUs. Columnar compression reduces data volume up to 10x: Exadata Hybrid Columnar Compression provides 10x lower cost and 10x higher performance. Oracle solves random I/O bottlenecks using the Exadata Smart Flash Cache, increasing random I/Os by a factor of 20X.
Exadata Smart Scans Exadata storage servers implement data intensive processing in storage Row filtering based on “where” predicate Column filtering Join filtering Incremental backup filtering Storage Indexing Scans on encrypted data Data Mining model scoring 10x reduction in data sent to DB servers  is common No application changes needed Processing is automatic and transparent Even if cell or disk fails during a query New
Exadata Smart Scan Query Example “What were my sales yesterday?”: Select sum(sales) where Date='24-Sept'. The optimizer chooses the partitions and indexes to access; Exadata storage scans the compressed blocks in those partitions and indexes and retrieves the sales amounts for Sept 24. 10 TB is scanned in storage, but only 1 MB is returned to the database servers for the final SUM.
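The flow above can be sketched in Python (an illustrative toy, not Oracle code): each "storage cell" applies the WHERE predicate and projects only the sales column locally, so the database server aggregates a tiny result set instead of reading whole rows.

```python
def cell_smart_scan(rows, date):
    """Inside the storage cell: filter rows and project the 'sales' column."""
    return [r["sales"] for r in rows if r["date"] == date]

def db_server_sum(cells, date):
    """The database server only aggregates the small per-cell result sets."""
    return sum(amount for cell in cells for amount in cell_smart_scan(cell, date))

# Two hypothetical cells, each holding a slice of the sales table.
cell1 = [{"date": "24-Sep", "sales": 100}, {"date": "23-Sep", "sales": 999}]
cell2 = [{"date": "24-Sep", "sales": 250}]

total = db_server_sum([cell1, cell2], "24-Sep")  # only 2 values cross the wire
```

Only the matching sales amounts travel to the server; everything else stays in storage.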
Exadata Storage Index Transparent I/O Elimination with No Overhead Exadata Storage Indexes maintain summary information about table data in memory: they store the MIN and MAX values of columns, typically one index entry for every MB of disk. Disk I/Os are eliminated when the MIN and MAX can never match the “where” clause of a query. Completely automatic and transparent. Example: two regions of a table have storage index entries (Min B = 1, Max B = 5) and (Min B = 3, Max B = 8); for “Select * from Table where B < 2”, only the first region can contain matching rows, so the second is never read.
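A minimal sketch of the MIN/MAX idea (illustrative only; the real index lives in cell memory and covers roughly one entry per MB of disk):

```python
# Two storage regions, each with a MIN/MAX summary for column B.
regions = [
    {"min_b": 1, "max_b": 5, "rows": [(1, "x"), (3, "y"), (5, "z")]},
    {"min_b": 3, "max_b": 8, "rows": [(5, "p"), (8, "q"), (3, "r")]},
]

def scan_b_less_than(regions, limit):
    """Scan only regions whose MIN/MAX could satisfy B < limit."""
    result, regions_read = [], 0
    for region in regions:
        if region["min_b"] >= limit:   # MIN can never satisfy B < limit
            continue                   # region eliminated: zero I/O
        regions_read += 1
        result.extend(r for r in region["rows"] if r[0] < limit)
    return result, regions_read

rows, io = scan_b_less_than(regions, 2)  # the second region is skipped entirely
```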
Exadata Hybrid Columnar Compression Data is grouped by column and then compressed. Query Mode, for data warehousing: optimized for speed; 10X compression typical; scans improve proportionally. Archival Mode, for infrequently accessed data: optimized to reduce space; 15X compression typical, up to 50X for some data. New
Exadata Hybrid Columnar Compression Warehousing and Archiving Warehouse Compression (optimized for speed): 10x average storage savings and 10x scan I/O reduction, yielding a smaller warehouse and faster performance. Archive Compression (optimized for space): 15x average storage savings, up to 50x on some data, with some access overhead; intended for cold or historical data, reclaiming 93% of disks while keeping data online. Compression types can be mixed by partition for ILM.
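The benefit of grouping data by column can be shown with a toy run-length encoder (an illustration of the principle only; Oracle's actual format combines several compression techniques inside compression units):

```python
def rle_encode(values):
    """Run-length encode a sequence into [value, count] runs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

# 1000 rows of a hypothetical two-column table with repetitive values.
rows = [("CA", "widget")] * 800 + [("NY", "gadget")] * 200

columns = list(zip(*rows))                       # column-major layout
compressed = [rle_encode(list(col)) for col in columns]

raw_cells = sum(len(col) for col in columns)     # 2000 stored values
stored = sum(len(col) for col in compressed)     # only 4 runs in total
ratio = raw_cells / stored                       # compression ratio
```

Because each column holds long runs of identical values, the columnar layout collapses 2000 cells into 4 runs; a row-major layout would interleave the columns and break up those runs.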
Exadata Real-World Compression Ratios Oracle Production E-Business Suite Tables Columnar compression ratios: Query = 14.6X, Archive = 22.6X. Ratios vary by application and table.
Exadata Flash Cache
The Disk Random I/O Bottleneck Disk drives hold vast amounts of data but are limited to about 300 I/Os per second. Flash technology holds much less data but can run tens of thousands of I/Os per second. The ideal solution: keep most data on disk for low cost and transparently move hot data to flash. Exadata uses flash cards instead of flash disks to avoid disk controller limitations; the cards sit in Exadata storage on a high-bandwidth, low-latency interconnect.
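A quick back-of-envelope check using the figures above (assuming roughly 300 random IOPS per drive, as stated on the slide):

```python
# How many conventional disk drives would be needed to match the
# flash cards' random-I/O rate for a full rack?
DISK_IOPS = 300            # per drive, from the slide
FLASH_IOPS = 1_000_000     # per full rack, from the slide

disks_needed = FLASH_IOPS // DISK_IOPS   # thousands of spindles
```

This is why a modest amount of flash can replace a very large number of spindles when the workload is random I/O.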
Semiconductor Cache Hierarchy Massive throughput and I/Os through an innovative cache hierarchy. Database DRAM cache: 400 GB raw capacity, up to 4 TB of compressed user data, 100 GB/sec. Exadata Smart Flash Cache: 5 TB raw capacity, up to 50 TB of compressed user data, 50 GB/sec raw scan, 1 million I/Os per second. Exadata disks: 100 TB or 336 TB raw, up to 500 TB of compressed user data, 21 GB/sec scan, 50,000 I/Os per second.
Exadata Smart Flash Cache Extreme Performance The Database Machine achieves: 20x more random I/Os (over 1 million per second); 2x faster sequential query I/O (50 GB/sec); 10x better I/O response time (sub-millisecond). Greatly reduced cost: 10x fewer disks needed for I/O, lower power. 5X more I/Os than a 1,000-disk enterprise storage array.
Exadata Flash Cache Extreme Performance for Random I/O The Sun Oracle Database Machine has 5 TB of flash storage: 4 high-performance flash cards in every Exadata Storage Server. The Smart Flash Cache caches hot data, and is not just a simple LRU: it knows when to skip caching so that large operations do not flush the cache, and allows optimization by application table. Oracle is the first flash-optimized database. New
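A hedged sketch of the "not just LRU" idea: the real policy is proprietary, but the key behavior, declining to cache blocks from large scans so they cannot flush hot OLTP data, can be illustrated like this:

```python
from collections import OrderedDict

class ScanAwareCache:
    """Toy cache: LRU for hot random I/O, but bypasses large-scan blocks."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()            # block_id -> data, LRU order

    def read(self, block_id, data, is_large_scan=False):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # hit: refresh recency
            return self.cache[block_id], True
        if not is_large_scan:                 # only cache "hot" random I/O
            self.cache[block_id] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
        return data, False

cache = ScanAwareCache(capacity=2)
cache.read("hot1", b"a")
cache.read("hot2", b"b")
for i in range(100):                          # a big scan passes through
    cache.read(f"scan{i}", b"x", is_large_scan=True)
_, hit = cache.read("hot1", b"a")             # hot block is still cached
```

With a plain LRU, the 100 scan blocks would have evicted both hot blocks; here the scan bypasses the cache entirely.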
Flash Cache Flash storage more than doubles scan throughput, to 50 GB/sec. Combined with Hybrid Columnar Compression, up to 50 TB of data fits in flash, and queries on compressed data run at up to 500 GB/sec. [Chart: query throughput in GB/sec on uncompressed data, flash vs. disk, for the Hitachi USP V, Teradata 2550, Netezza TwinFin 12, and the Sun Oracle Database Machine at 50 GB/sec]
Comparison to Specialty Solutions A single database machine has over 400 GB of memory usable for caching. Database release 11.2 introduces parallel query processing on memory-cached data, harnessing the memory capacity of the entire database cluster for queries; this is the foundation for the world-record 1 TB TPC-H result. Exadata Hybrid Columnar Compression enables multi-terabyte tables or partitions to be cached in memory, faster than in-memory specialty startups. Memory has 100x more bandwidth than disk. Source: Transaction Processing Performance Council, as of 9/14/2009: Oracle on HP BladeSystem c-Class 128P RAC, 1,166,976 QphH@1000GB, $5.42/QphH@1000GB, available 12/1/09. Exasol on PRIMERGY RX300 S4, 1,018,321 QphH@1000GB, $1.18/QphH@1000GB, available 08/01/08. ParAccel on SunFire X4100, 315,842 QphH@1000GB, $4.57/QphH@1000GB, available 10/29/07.
Benefits Multiply with Compression 10 TB of user data would require 10 TB of I/O; with compression it is 1 TB; with partition pruning, 100 GB; with Storage Indexes, 20 GB; with Smart Scan on memory or flash, 5 GB: subsecond on the Database Machine. Data is 10x smaller, scans are 2000x faster.
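The reduction chain on this slide, computed step by step with the stated factors (working in GB so every division is exact):

```python
gb = 10_000      # 10 TB of user data, expressed in GB
gb //= 10        # columnar compression: 1 TB of physical I/O
gb //= 10        # partition pruning: 100 GB
gb //= 5         # storage indexes: 20 GB
gb //= 4         # smart scan on memory or flash: 5 GB actually scanned
```

Multiplying the factors (10 x 10 x 5 x 4 = 2000) gives the slide's "2000x faster" claim.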
Disk Management With Automatic Storage Management
Automatic Storage Management (ASM) Simplifies Provisioning, Improves Performance Automatic I/O load balancing Stripes data across disks to balance load Best I/O throughput Automatic mirroring  Efficient, online add/remove of disk Consolidate data from multiple databases into same shared storage environment Automatic data rebalancing – NO HOT SPOTS! Automatic Storage Management Database C Database B Database A
Exadata Storage Layout Physical disks map to Cell Disks. Cell Disks are partitioned into one or more Grid Disks, created in order from the “hottest” portion of the disk (first) to the “coldest” (last). ASM disk groups are created from Grid Disks. Transparent above the ASM layer. [Diagram: physical disk mapped to a cell disk with system areas, divided into grid disks 1..n, which become ASM disks]
Interleaved Grid Disks (11g Release 2) Grid disks are optionally split and interleaved to place frequently accessed data from all grid disks on the higher-performing outer tracks, so all applications benefit from the outer tracks of the disks. [Diagram: Grid Disk 1 and Grid Disk 2, each holding both hot and cold data]
Exadata Storage Layout Example ASM Mirroring and Failure Groups The example shows Cell Disks divided into two Grid Disks, Hot and Cold, with two ASM disk groups created across the two sets of grid disks. ASM striping evenly distributes I/O across the disk groups; ASM mirroring protects against disk failures; ASM failure groups protect against cell failures. [Diagram: two Exadata cells, each holding hot and cold grid disks; each ASM disk group spans both cells, and each cell forms its own ASM failure group]
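The mirroring rule can be sketched as follows (a simplification: real ASM places mirror extents by its own allocation policy, not a fixed round robin):

```python
def place_extents(n_extents, cells):
    """Place each extent's primary and mirror copy on different cells,
    so no single cell (failure group) holds both copies."""
    placement = []
    for i in range(n_extents):
        primary = cells[i % len(cells)]
        mirror = cells[(i + 1) % len(cells)]  # always a different cell
        placement.append((primary, mirror))
    return placement

cells = ["cell1", "cell2", "cell3"]
placement = place_extents(6, cells)

# Losing any one cell still leaves a surviving copy of every extent.
survives_cell_loss = all(p != m for p, m in placement)
```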
Resource Management for Workload Balancing
Exadata I/O Resource Management Multi-Database Environment Ensure different databases are allocated the correct relative amount of I/O bandwidth: Database A 33% of I/O resources, Database B 67% of I/O resources. Ensure different users and tasks within a database are allocated the correct relative amount of I/O bandwidth: Database A: Reporting 60% of I/O resources, ETL 40%; Database B: Interactive 30% of I/O resources, Batch 70%.
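The nested percentages multiply: a consumer group's effective share of total I/O is its database's share times its own share within that database. A small sketch (hypothetical plan structure, not the actual Resource Manager API) computes this:

```python
# Hypothetical nested I/O plan mirroring the slide's percentages.
plan = {
    "A": {"share": 33, "groups": {"reporting": 60, "etl": 40}},
    "B": {"share": 67, "groups": {"interactive": 30, "batch": 70}},
}

def effective_share(plan, db, group):
    """Fraction of total I/O bandwidth a consumer group receives."""
    return plan[db]["share"] / 100 * plan[db]["groups"][group] / 100

# Database B's batch group gets 67% x 70% of all I/O bandwidth.
batch_pct = round(effective_share(plan, "B", "batch") * 100, 1)
```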
Oracle Database 11g Release 2 Resource Manager Instance Caging – Ideal for Exadata A more flexible alternative to server partitioning, with wider platform support than operating system resource managers and lower administration overhead than virtualization. Set CPU_COUNT per instance and enable Resource Manager. [Diagram: instances A–D sharing a server with 16 CPUs, their cpu_counts summing to the 16 available CPUs]
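The capping behavior can be sketched as follows (hypothetical cpu_count settings; the actual enforcement happens inside the database once CPU_COUNT is set and Resource Manager is enabled):

```python
# Each instance is capped at its own cpu_count, so a runaway instance
# cannot starve its neighbors on the shared server.
cpu_count = {"A": 8, "B": 4, "C": 2, "D": 2}   # hypothetical caps, sum = 16
demand = {"A": 14, "B": 1, "C": 2, "D": 0}     # CPUs each instance wants now

usage = {inst: min(demand[inst], cpu_count[inst]) for inst in cpu_count}
total_used = sum(usage.values())               # never exceeds the caps
```

Instance A wants 14 CPUs but is held to its cap of 8, leaving headroom for the others.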
Consolidation of Mixed Workload Environments
Consolidating Databases Consolidate onto Database Machine High performance for all applications Low cost platform for all applications Predictable response times in a shared environment Handles all data management needs Complete, Open, Integrated [Diagram: CRM, ERP, Warehouse, Data Mart, and HR databases consolidated onto one machine]
Best Machine for Consolidating Databases Consolidation mixes many different workloads in one system Warehouse oriented  bulk data processing OLTP oriented  random updates Multimedia oriented  streaming files The Sun Oracle Database Machine handles any combination of workloads with extreme performance And predictable response times ERP CRM Warehouse Data Mart HR
Consolidate Database Servers Shared Configuration Applications connect to a database  service  that runs on one or more database servers Services can  grow, shrink, & move  dynamically Large databases can  span nodes  using RAC Multiple small databases can run on a single node Predictable performance Instance caging  provides predictable CPU resources when multiple databases run  on the same node Restricts a database to subset of processors ERP CRM Warehouse Data  Mart HR
Exadata High Availability and Disaster Recovery
Complete, Open, Integrated Availability Maximum Availability Architecture Protection from Server Failures Storage Failures Network Failures Site Failures Real-time remote standby open for queries Human error correction  Database, table, row, transaction level Online indexing and table redefinition Online patching and upgrades WAN Real Application Clusters ASM Fast  Recovery Area Active  Data Guard Secure Backup
Active Data Guard and Low Cost DR Either Physical or Logical Standbys Can Be Opened Primary database Standby database Exadata Racks with SAS disks Exadata Rack with SATA disks Redo transport Oracle Net Backups Reporting
Exadata and Oracle Enterprise Manager
Exadata Storage Management & Administration Enterprise Manager  Manage & administer Database and ASM Exadata Storage Plug-in Enterprise Manager Grid Control Plug-in to monitor & manage Exadata Storage Cells Comprehensive CLI Local Exadata Storage cell management  Distributed shell utility to execute CLI across multiple cells Sun Embedded Integrated Lights Out Manager (ILOM) Remote management and administration of hardware
Exadata Storage Plug-in Enterprise Manager Grid Control Plug-in to monitor & manage Exadata Storage Cells Works with Enterprise Manager Grid Control 10.2.0.3 and later versions
Conclusion
Sun Oracle Database Machine Extreme Performance for all Data Management Best for Data Warehousing Parallel query on memory or Flash Compressed: 4 TB of data in DRAM, 50 TB in flash 10x compressed tables with storage offload Overall up to 5X faster than 11.1 for warehousing Best for OLTP Only database that scales real-world applications on a grid Smart flash cache provides 1 million I/Os per second Compressed: 1.2 TB of data in DRAM, 15 TB in flash Up to 50x compression for archival data Secure, fault tolerant Best for Database Consolidation Only database machine that runs and scales all workloads Predictable response times in multi-database, multi-application, multi-user environments
Scale Both Performance and Capacity Scalable Scales to an 8-rack database machine just by adding cables More with external InfiniBand switches Scales to hundreds of storage servers Multi-petabyte databases Redundant and Fault Tolerant Failure of any component is tolerated Data is mirrored across storage servers
Fastest Time to Value & Lowest Risk Building from scratch with components, or with OWI reference configurations, requires pre-implementation system sizing, acquisition of components, installation and configuration, and testing and validation: weeks to months until the database is available for use. Taking delivery of a Sun Oracle Database Machine means the database is pre-configured, deployment is faster, and risk is lower: available for use less than 1 week after delivery.
Resources Oracle.com: http://www.oracle.com/exadata Oracle Exadata Technology Portal on OTN:  http://www.oracle.com/technology/products/bi/db/exadata Oracle Exadata white papers:  http://www.oracle.com/technology/products/bi/db/exadata/pdf/exadata-technical-whitepaper.pdf http://www.oracle.com/technology/products/bi/db/exadata/pdf/migration-to-exadata-whitepaper.pdf
 

Sun Oracle Exadata V2 For OLTP And DWH

  • 1.
  • 2.
    Oracle Exadata forOLTP and DWH
  • 3.
    The following isintended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remain at the sole discretion of Oracle.
  • 4.
    The Architecture ofthe Future Exadata Massively Parallel Grids Best for Data Warehousing Best for OLTP Best for Consolidation
  • 5.
  • 6.
    “ Every querywas faster on Exadata compared to our current systems. The smallest performance improvement was 10x and the biggest one was an incredible 72x .” Simeon Dimitrov, Enterprise Resources Manager “ Call Data Record queries that used to run for over 30 minutes now complete in under 1 minute . That's extreme performance.” Grant Salmon , CEO, LGR Telecommunications “ A query that used to take 24 hours now runs in less than 30 minutes . The Oracle Database Machine beats competing solutions on bandwidth, load rate, disk capacity, and transparency . ” Christian Maar, CIO Database Machine Success
  • 7.
    “ The OracleDatabase Machine is an ideal cost-effective platform to meet our speed and scalability needs . ” Ketan Parekh, Manager Database Systems “ After carefully testing several data warehouse platforms, we chose the Oracle Database Machine. Oracle Exadata was able to speed up one of our critical processes from days to minutes ..” Brian Camp, Sr. VP of Infrastructure Services Database Machine Success
  • 8.
    Extreme Performance GainsCustomer Benchmark Results Average Gain Customer Industry 20x 16x 15x Telecom 28x Retail Finance Telecom
  • 9.
    “ When itcomes to speed, Oracle Exadata technology has changed the game completely…..” Grant Salmon CEO LGR Telecommunications from Profit Magazine, February 2009
  • 10.
    <Insert Picture Here>Introducing Oracle Exadata Version 2
  • 11.
    Oracle Exadata DatabaseMachine Version 1 World’s Fastest Machine for Data Warehousing Extreme Performance for Sequential I/O 10x Faster than other Oracle D/W Systems Version 2 World’s Fastest Machine for OLTP Extreme Performance for Random I/O 2x Version 1 Data Warehousing Performance Dramatic new Exadata Software Capabilities
  • 12.
    Oracle Exadata V2- Best OLTP Machine Only Oracle runs real-world business applications “on the Grid” Unique fault-tolerant scale-out OLTP database RAC, Data Guard, Online Operations Unique fault-tolerant scale-out storage suitable for OLTP ASM, Exadata
  • 13.
    Best Data WarehouseMachine Massively parallel high volume hardware to quickly process vast amounts of data Exadata runs data intensive processing directly in storage Most complete analytic capabilities OLAP, Analytic SQL, Spatial, Data Mining, Real-time transactional ETL, Efficient queries Powerful warehouse specific optimizations Flexible Partitioning, Bitmap Indexing, Join indexing, Materialized Views, Result Cache Data Mining OLAP ETL
  • 14.
    Drastically Simplified DeploymentsDatabase Machine eliminates the complexity of deploying database systems Months of configuration, troubleshooting, tuning Database Machine is ready on day one Pre-built, tested, standard, supportable configuration Runs existing applications unchanged Extreme performance out of the box From Months to Days
  • 15.
    Sun Oracle DatabaseMachine Hardware Complete, Pre-configured, Tested for Extreme Performance Database Servers Exadata Storage Servers InfiniBand Switches Ethernet Switch Pre-cabled Keyboard, Video, Mouse (KVM) hardware Power Distribution Units (PDUs) Ready to Deploy Plug in power Connect to Network Ready to Run Database
  • 16.
    <Insert Picture Here>Familiar Technology with Powerful Performance
  • 17.
    © 2009 OracleCorporation What’s Inside Exadata? Exadata Storage Server, Database Server, and Infiniband Switches Infiniband Switches Database Servers Storage Servers
  • 18.
    Sun Oracle DatabaseMachine Exadata Storage Server Grid 21 GB/sec disk bandwidth 50 GB/sec flash bandwidth 1 million I/Os per second Oracle Database Server Grid Millions of transactions per minute Tens of millions of queries per minute Billions of rows per minute InfiniBand Network 880 Gb/sec aggregate throughput Extreme Performance
  • 19.
    Sun Oracle DatabaseMachine Highest performance, lowest cost per unit of performance Fault tolerant, Scalable on demand Exadata Storage Server Grid 14 storage servers 100 TB raw SAS disk storage or 336 TB raw SATA disk storage 5TB of flash storage! Oracle Database Server Grid 8 compute servers 64 Intel Cores 576 GB DRAM InfiniBand Network 40 Gb/sec unified server and storage network Fault Tolerant
  • 20.
    Sun Oracle DatabaseMachine Hardware Improvements Same architecture as Exadata V1 Database Machine Same number and type of Servers, CPUs, Disks Plus Flash Storage! Latest Technologies 80% Faster CPUs 33% More SAS Disk Capacity 100% More SATA Disk Capacity 50% Faster Disk Throughput 100% Faster Networking 125% More Memory 200% Faster Memory 100% More Ethernet Connectivity Xeon 5500 Nehalem 600 GB SAS Disks 2 TB SATA Disks 6 Gb SAS Links 40 Gb InfiniBand 72 GB per DB Node DDR3 DRAM 4 Ethernet links per DB Node New Faster Bigger
  • 21.
    <Insert Picture Here>Exadata Database Servers
  • 22.
    Sun Oracle DatabaseServer 8 Sun Fire X4170 DB per rack • 8 CPU cores – 2x performance • 72 GB memory – 2.5x increase • Redundant HCA path to 2 switches • Fully redundant power and cooling
  • 23.
    <Insert Picture Here>Exadata Storage Servers
  • 24.
    Sun Oracle ExadataStorage Server • 14 Sun Fire X4275 per rack • 5x faster than conventional storage • 2x more storage capacity • Simplifies storage to eliminate complex SAN architectures • Sun FlashFire Technology turbocharges applications
  • 25.
    <Insert Picture Here>Exadata FlashFire Card
  • 26.
    Sun FlashFire TechnologyExtreme Performance Accelerator • 10x better IO response time • 5.25 Terabytes Flash per rack • 1,000,000 IOPS per rack • 20x IOPS speedup for Oracle • Integrated super caps for data retention New
  • 27.
    <Insert Picture Here>Exadata InfiniBand Switch
  • 28.
    InfiniBand Network HighBandwidth, Low Latency • Sun Datacenter InfiniBand Switch 36 • Fully redundant non-blocking IO paths from servers to storage • 2.88 Tb/sec bi-sectional bandwidth per switch • 40 Gb/sec QDR, Dual port QSFP per server
  • 29.
    <Insert Picture Here>Exadata Configurations
  • 30.
    Start Small andGrow Full Rack Half Rack Quarter Rack Basic System
  • 31.
    Sun Oracle DatabaseMachine Full Rack Pre-Configured for Extreme Performance 8 Sun Fire ™ X4170 Oracle Database servers 14 Exadata Storage Servers (All SAS or all SATA) 3 Sun Datacenter InfiniBand Switch 36 36-port Managed QDR (40Gb/s) switch 1 “Admin” Cisco Ethernet switch Keyboard, Video, Mouse (KVM) hardware Redundant Power Distributions Units (PDUs) Single Point of Support from Oracle 3 year, 24 x 7, 4 Hr On-site response Add more racks for additional scalability
  • 32.
    4 Sun Fire™ X4170 Oracle Database servers 7 Exadata Storage Servers (All SAS or all SATA) 2 Sun Datacenter InfiniBand Switch 36 36-port Managed QDR (40Gb/s) switch 1 “Admin” Cisco Ethernet switch Keyboard, Video, Mouse (KVM) hardware Redundant PDUs Single Point of Support from Oracle 3 year, 24 x 7, 4 Hr On-site response Sun Oracle Database Machine Half Rack Pre-Configured for Extreme Performance Can Upgrade to a Full Rack
  • 33.
    2 Sun Fire™ X4170 Oracle Database servers 3 Exadata Storage Servers (All SAS or all SATA) 2 Sun Datacenter InfiniBand Switch 36 36-port Managed QDR (40Gb/s) InfiniBand switch 1 “Admin” Cisco Ethernet switch Keyboard, Video, Mouse (KVM) hardware Redundant PDUs Single Point of Support from Oracle 3 year, 24 x 7, 4 Hr On-site response Sun Oracle Database Machine Quarter Rack Pre-Configured for Extreme Performance Can Upgrade to an Half Rack
  • 34.
    Sun Oracle DatabaseMachine Basic System Entry Level non-HA Configuration 1 Sun Fire ™ X4170 Oracle Database servers 1 Exadata Storage Servers (All SAS or all SATA) 1 Sun Datacenter InfiniBand Switch 36 36-port Managed QDR (40Gb/s) InfiniBand switch InfiniBand Cables Installed in Customer supplied Rack Customer supplied Ethernet and KVM Infrastructure Single Point of Support from Oracle 3 year, 24 x 7, 4 Hr On-site response
  • 35.
    Exadata Product Capacity1 – Raw capacity calculated using 1 GB = 1000 x 1000 x 1000 bytes and 1 TB = 1000 x 1000 x 1000 x 1000 bytes. 2 - User Data: Actual space for end-user data, computed after single mirroring (ASM normal redundancy) and after allowing space for database structures such as temp, logs, undo, and indexes. Actual user data capacity varies by application. User Data capacity calculated using 1 TB = 1024 * 1024 * 10 24 * 1024 bytes. Single Server Quarter Rack Half Rack Full Rack Raw Disk 1 SAS 7.2 TB 21 TB 50 TB 100 TB SATA 24 TB 72 TB 168 TB 336 TB Raw Flash 1 384 GB 1.1 TB 2.6 TB 5.3 TB User Data 2 (assuming no compression) SAS 2 TB 6 TB 14 TB 28 TB SATA 7 TB 21 TB 50 TB 100 TB
  • 36.
Exadata Product Performance

|                                                        | Single Server | Quarter Rack | Half Rack | Full Rack |
|--------------------------------------------------------|---------------|--------------|-----------|-----------|
| Raw Disk Data Bandwidth 1,4 (SAS)                      | 1.5 GB/s      | 4.5 GB/s     | 10.5 GB/s | 21 GB/s   |
| Raw Disk Data Bandwidth 1,4 (SATA)                     | 0.85 GB/s     | 2.5 GB/s     | 6 GB/s    | 12 GB/s   |
| Raw Flash Data Bandwidth 1,4                           | 3.6 GB/s      | 11 GB/s      | 25 GB/s   | 50 GB/s   |
| Max User Data Bandwidth 2,4 (10x compression & Flash)  | 36 GB/s       | 110 GB/s     | 250 GB/s  | 500 GB/s  |
| Disk IOPS 3,4 (SAS)                                    | 3,600         | 10,800       | 25,000    | 50,000    |
| Disk IOPS 3,4 (SATA)                                   | 1,440         | 4,300        | 10,000    | 20,000    |
| Flash IOPS 3,4                                         | 75,000        | 225,000      | 500,000   | 1,000,000 |
| Data Load Rate 4                                       | 0.65 TB/hr    | 1 TB/hr      | 2.5 TB/hr | 5 TB/hr   |

1 – Bandwidth is peak physical disk scan bandwidth, assuming no compression.
2 – Max User Data Bandwidth assumes scanned data is compressed by a factor of 10 and is on Flash.
3 – IOPS based on I/O requests of size 8K.
4 – Actual performance will vary by application.
  • 37.
Exadata Storage Server Software
  • 38.
Exadata Storage Features
Exadata Smart Scans: 10x or greater reduction in data sent to database servers
Exadata Storage Indexes: eliminate unnecessary I/Os to disk
Hybrid Columnar Compression: efficient compression increases user data scan rates
Exadata Smart Flash Cache: flash increases scan rates and removes the spinning-magnetic-media bottleneck
  • 39.
Exadata Smart Storage: Breaks the Data Bandwidth and Random I/O Bottlenecks
Oracle addresses the data bandwidth bottleneck in three ways:
A massively parallel storage grid of high-performance Exadata storage servers (cells); data bandwidth scales with data volume
Data-intensive processing runs in Exadata storage; queries run in storage as data streams from disk, offloading database server CPUs
Exadata Hybrid Columnar Compression reduces data volume up to 10x, providing 10x lower cost and 10x higher performance
Oracle solves the random I/O bottleneck with Exadata Smart Flash Cache, increasing random I/Os by a factor of 20x
  • 40.
Exadata Smart Scans
Exadata storage servers implement data-intensive processing in storage:
Row filtering based on the “where” predicate
Column filtering
Join filtering
Incremental backup filtering
Storage indexing
Scans on encrypted data
Data mining model scoring
A 10x reduction in data sent to database servers is common. No application changes are needed; processing is automatic and transparent, even if a cell or disk fails during a query.
  • 41.
Exadata Smart Scan: Query Example
Question: “What were my sales yesterday?” Query: select sum(sales) where Date=’24-Sept’
The optimizer chooses the partitions and indexes to access. The Exadata storage grid scans the compressed blocks in those partitions/indexes and retrieves the sales amounts for Sept 24: 10 TB is scanned in storage, but only 1 MB is returned to the database servers, where the SUM is computed.
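The semantics of the slide's query can be sketched in miniature. This is illustrative only: a hypothetical `sales` table in SQLite stands in for the 10 TB warehouse, and the table and column names are assumptions, not Exadata specifics.

```python
import sqlite3

# Hypothetical sales table; SQLite stands in for the warehouse here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (sale_date TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("2009-09-24", 100.0), ("2009-09-24", 250.0), ("2009-09-23", 75.0)],
)

# On Exadata, the WHERE predicate is evaluated inside the storage cells, so
# only matching data (1 MB of the 10 TB scanned) reaches the database servers.
(total,) = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE sale_date = '2009-09-24'"
).fetchone()
print(total)  # 350.0
```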
  • 42.
Exadata Storage Index: Transparent I/O Elimination with No Overhead
Exadata Storage Indexes maintain summary information about table data in memory:
They store the MIN and MAX values of columns, with typically one index entry for every MB of disk
Disk I/Os are eliminated when the MIN and MAX can never match the “where” clause of a query
Completely automatic and transparent
Example: for “select * from Table where B<2”, a region with Min B = 1 and Max B = 5 may contain matches and is read, while a region with Min B = 3 and Max B = 8 can never match and is skipped.
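The pruning logic can be sketched in a few lines. This is a minimal illustration of the idea, not Exadata's implementation; the regions and values mirror the slide's example.

```python
# One in-memory MIN/MAX entry per storage region lets whole regions be
# skipped when the predicate can never match anything inside them.
regions = [
    {"min_b": 1, "max_b": 5, "rows": [(1, "x"), (3, "y"), (5, "z")]},
    {"min_b": 3, "max_b": 8, "rows": [(3, "p"), (5, "q"), (8, "r")]},
]

def scan_where_b_less_than(limit):
    matches, regions_read = [], 0
    for region in regions:
        if region["min_b"] >= limit:
            continue  # MIN >= limit: no row can satisfy B < limit, skip the I/O
        regions_read += 1
        matches.extend(row for row in region["rows"] if row[0] < limit)
    return matches, regions_read

rows, reads = scan_where_b_less_than(2)
print(rows, reads)  # [(1, 'x')] 1 -- the second region is never read
```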
  • 43.
Exadata Hybrid Columnar Compression
Data is grouped by column and then compressed.
Query mode, for data warehousing: optimized for speed; 10x compression is typical, and scans improve proportionally
Archival mode, for infrequently accessed data: optimized to reduce space; 15x compression is typical, and up to 50x for some data
  • 44.
Exadata Hybrid Columnar Compression: Warehousing and Archiving
Warehouse Compression (optimized for speed): 10x average storage savings and 10x scan I/O reduction; a smaller warehouse with faster performance
Archive Compression (optimized for space): 15x average storage savings, up to 50x on some data, with some access overhead; for cold or historical data, reclaim 93% of disks while keeping the data online
Compression types can be mixed by partition for ILM
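The savings figures follow directly from the compression ratios: space reclaimed by an N-fold ratio is 1 - 1/N. A quick back-of-envelope check of the slide's claims:

```python
# Space reclaimed by an N-fold compression ratio, as a rounded percentage.
def space_reclaimed_pct(ratio):
    return round((1 - 1 / ratio) * 100)

print(space_reclaimed_pct(10))  # 90 -- warehouse compression
print(space_reclaimed_pct(15))  # 93 -- archive compression ("Reclaim 93% of Disks")
print(space_reclaimed_pct(50))  # 98 -- best-case archival data
```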
  • 45.
Exadata Real-World Compression Ratios
Measured on Oracle production E-Business Suite tables, columnar compression ratios were 14.6x in Query mode and 22.6x in Archive mode. Ratios vary by application and table.
  • 46.
Exadata Flash Cache
  • 47.
The Disk Random I/O Bottleneck
Disk drives hold vast amounts of data but are limited to about 300 I/Os per second. Flash technology holds much less data but can run tens of thousands of I/Os per second.
The ideal solution: keep most data on disk for low cost, transparently move hot data to flash, and use flash cards rather than flash disks to avoid disk controller limitations. The flash cards sit in Exadata storage, on a high-bandwidth, low-latency interconnect.
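The scale of the gap is worth making concrete. Rough arithmetic, using the slide's ~300 IOPS per drive and the flash cache's 1,000,000 IOPS figure from elsewhere in this deck:

```python
import math

# At ~300 random I/Os per second per disk drive, matching the 1,000,000 IOPS
# the flash cache delivers would take thousands of spindles.
DISK_IOPS = 300
FLASH_CACHE_IOPS = 1_000_000

drives_needed = math.ceil(FLASH_CACHE_IOPS / DISK_IOPS)
print(drives_needed)  # 3334
```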
  • 48.
Semiconductor Cache Hierarchy: Massive Throughput and I/Os
Database DRAM cache: 400 GB raw capacity; up to 4 TB of compressed user data; 100 GB/sec
Exadata Smart Flash Cache: 5 TB raw capacity; up to 50 TB of compressed user data; 50 GB/sec raw scan; 1 million I/Os per second
Exadata disks: 100 TB or 336 TB raw; up to 500 TB of compressed user data; 21 GB/sec scan; 50,000 I/Os per second
  • 49.
Exadata Smart Flash Cache: Extreme Performance
The Database Machine achieves:
20x more random I/Os: over 1 million per second
2x faster sequential query I/O: 50 GB/sec
10x better I/O response time: sub-millisecond
Greatly reduced cost: 10x fewer disks needed for I/O, and lower power
5x more I/Os than a 1000-disk enterprise storage array
  • 50.
Exadata Flash Cache: Extreme Performance for Random I/O
The Sun Oracle Database Machine has 5 TB of flash storage: 4 high-performance flash cards in every Exadata Storage Server.
Smart Flash Cache caches hot data. It is not a simple LRU: it knows when to skip caching so that useful data is not flushed from the cache, and it allows optimization per application table.
Oracle is the first flash-optimized database.
  • 51.
Flash Cache
Flash storage more than doubles scan throughput, to 50 GB/sec. Combined with Hybrid Columnar Compression, up to 50 TB of data fits in flash, and queries on compressed data run at up to 500 GB/sec.
(Chart: query throughput in GB/sec on uncompressed data, flash vs. disk, for Hitachi USP V, Teradata 2550, Netezza TwinFin 12, and the Sun Oracle Database Machine.)
  • 52.
Comparison to Specialty Solutions
A single Database Machine has over 400 GB of memory usable for caching, and memory has 100x more bandwidth than disk. Database release 11.2 introduces parallel query processing on memory-cached data, harnessing the memory capacity of the entire database cluster for queries; this is the foundation for the world-record 1 TB TPC-H result. Exadata Hybrid Columnar Compression enables multi-terabyte tables or partitions to be cached in memory, faster than in-memory specialty startups.
Source: Transaction Processing Council, as of 9/14/2009: Oracle on HP BladeSystem c-Class 128P RAC, 1,166,976 QphH@1000GB, $5.42/QphH@1000GB, available 12/1/09. Exasol on PRIMERGY RX300 S4, 1,018,321 QphH@1000GB, $1.18/QphH@1000GB, available 08/01/08. ParAccel on SunFire X4100, 315,842 QphH@1000GB, $4.57/QphH@1000GB, available 10/29/07.
  • 53.
Benefits Multiply with Compression
10 TB of user data would ordinarily require 10 TB of I/O. With 10x compression it becomes 1 TB; partition pruning cuts it to 100 GB; Storage Indexes cut it to 20 GB; a Smart Scan on memory or flash touches just 5 GB, returning results in under a second on the Database Machine. The data is 10x smaller, and scans are 2000x faster.
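The reduction cascade above, as plain arithmetic. The stage sizes are the slide's own figures, not measurements:

```python
# 10 TB of user data shrinks step by step until ~5 GB is actually scanned.
user_data_gb = 10_000                      # 10 TB of user data = 10 TB of I/O
after_compression_gb = user_data_gb / 10   # 10x columnar compression -> 1 TB
after_pruning_gb = 100                     # partition pruning -> 100 GB
after_storage_index_gb = 20                # storage indexes -> 20 GB
after_smart_scan_gb = 5                    # smart scan on memory/flash -> 5 GB

reduction = user_data_gb / after_smart_scan_gb
print(reduction)  # 2000.0 -- the "2000x faster scans" on the slide
```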
  • 54.
Disk Management with Automatic Storage Management
  • 55.
Automatic Storage Management (ASM): Simplifies Provisioning, Improves Performance
Automatic I/O load balancing: stripes data across disks to balance load and deliver the best I/O throughput
Automatic mirroring
Efficient, online add/remove of disks
Consolidates data from multiple databases into the same shared storage environment
Automatic data rebalancing: no hot spots
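The striping idea can be sketched in a few lines. A minimal illustration, not ASM's actual allocator; the disk names and extent counts are made up:

```python
# ASM-style striping sketch: allocation units are placed round-robin across
# every disk in the disk group, so I/O load spreads evenly with no hot spots.
def stripe(extents, disks):
    placement = {disk: [] for disk in disks}
    for i, extent in enumerate(extents):
        placement[disks[i % len(disks)]].append(extent)
    return placement

layout = stripe(range(8), ["disk0", "disk1", "disk2", "disk3"])
print(layout["disk0"])  # [0, 4]
print(layout["disk3"])  # [3, 7] -- every disk holds an equal share
```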
  • 56.
Exadata Storage Layout
Physical disks map to Cell Disks.
Cell Disks are partitioned into one or more Grid Disks.
Grid Disks are created in order from the “hottest” (first) portion of the disk to the “coldest” (last).
ASM disk groups are created from Grid Disks. The layout is transparent above the ASM layer.
  • 57.
Interleaved Grid Disks (11gR2)
Grid disks are optionally split and interleaved to place frequently accessed data from all grid disks on the higher-performing outer tracks. All applications then benefit from the higher performance of the outer tracks of the disks.
  • 58.
Exadata Storage Layout Example: ASM Mirroring and Failure Groups
This example divides each Cell Disk into two Grid Disks, Hot and Cold, with two ASM disk groups created across the two sets of grid disks.
ASM striping evenly distributes I/O across each disk group.
ASM mirroring protects against disk failures.
ASM failure groups protect against cell failures.
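The failure-group rule is the key invariant: the mirror copy of an extent always lands in a different failure group (a different storage cell), so losing a whole cell never loses both copies. A minimal sketch with made-up cell names:

```python
# ASM normal redundancy sketch: primary and mirror never share a failure group.
failure_groups = ["cellA", "cellB", "cellC"]

def place_mirrored(extent_ids):
    placements = {}
    for i, ext in enumerate(extent_ids):
        primary = failure_groups[i % len(failure_groups)]
        # The mirror goes to any *other* failure group (here, the next one).
        mirror = failure_groups[(i + 1) % len(failure_groups)]
        placements[ext] = (primary, mirror)
    return placements

placements = place_mirrored(range(6))
assert all(p != m for p, m in placements.values())
print(placements[0])  # ('cellA', 'cellB')
```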
  • 59.
Resource Management for Workload Balancing
  • 60.
Exadata I/O Resource Management: Multi-Database Environment
Ensures different databases are allocated the correct relative amount of I/O bandwidth. For example: Database A, 33% of I/O resources; Database B, 67% of I/O resources.
Ensures different users and tasks within a database are allocated the correct relative amount of I/O bandwidth. For example: within Database A, Reporting 60% and ETL 40% of I/O resources; within Database B, Interactive 30% and Batch 70%.
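The two levels compose multiplicatively: each consumer group's effective share of total I/O is its database's inter-database share times its intra-database share. A sketch using the slide's numbers:

```python
# Two-level I/O allocation: (database share) x (consumer-group share).
plan = {
    "A": (0.33, {"reporting": 0.60, "etl": 0.40}),
    "B": (0.67, {"interactive": 0.30, "batch": 0.70}),
}

def effective_shares(plan):
    shares = {}
    for db, (db_share, groups) in plan.items():
        for group, group_share in groups.items():
            shares[f"{db}/{group}"] = round(db_share * group_share, 3)
    return shares

shares = effective_shares(plan)
print(shares["B/batch"])  # 0.469 -- batch work on database B gets ~47% of all I/O
```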
  • 61.
Oracle Database 11g Release 2 Resource Manager: Instance Caging
Ideal for Exadata, and a more flexible alternative to server partitioning: wider platform support than operating system resource managers, and lower administration overhead than virtualization.
To use it, set CPU_COUNT per instance and enable Resource Manager. (Diagram: four instances on a 16-CPU server, their CPU_COUNT settings dividing up the 16 CPUs.)
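Configuration is two initialization parameters. A minimal sketch with illustrative values; the CPU count and plan name would depend on the deployment:

```sql
-- Cap this instance at 4 CPUs (value is an example only).
ALTER SYSTEM SET cpu_count = 4;
-- Instance caging is enforced only while a resource plan is active:
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN';
```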
  • 62.
Consolidation of Mixed Workload Environments
  • 63.
Consolidating Databases: Consolidate onto the Database Machine
High performance for all applications
Low-cost platform for all applications
Predictable response times in a shared environment
Handles all data management needs (ERP, CRM, warehouse, data mart, HR)
Complete, open, integrated
  • 64.
Best Machine for Consolidating Databases
Consolidation mixes many different workloads in one system: warehouse-oriented bulk data processing, OLTP-oriented random updates, and multimedia-oriented streaming files. The Sun Oracle Database Machine handles any combination of workloads with extreme performance and predictable response times.
  • 65.
Consolidate Database Servers: Shared Configuration
Applications connect to a database service that runs on one or more database servers. Services can grow, shrink, and move dynamically. Large databases can span nodes using RAC, and multiple small databases can run on a single node.
Predictable performance: instance caging provides predictable CPU resources when multiple databases run on the same node, by restricting each database to a subset of processors.
  • 66.
Exadata High Availability and Disaster Recovery
  • 67.
Complete, Open, Integrated Availability: Maximum Availability Architecture
Protection from server failures, storage failures, network failures, and site failures
Real-time remote standby open for queries
Human error correction at the database, table, row, and transaction level
Online indexing and table redefinition
Online patching and upgrades
Components: Real Application Clusters, ASM, Fast Recovery Area, Active Data Guard, Secure Backup
  • 68.
Active Data Guard and Low-Cost DR
Either physical or logical standbys can be opened. The primary database runs on Exadata racks with SAS disks; redo is transported over Oracle Net to a standby database on an Exadata rack with lower-cost SATA disks, which can also serve backups and reporting.
  • 69.
Exadata and Oracle Enterprise Manager
  • 70.
Exadata Storage Management & Administration
Enterprise Manager: manage and administer the Database and ASM
Exadata Storage plug-in: an Enterprise Manager Grid Control plug-in to monitor and manage Exadata Storage Cells
Comprehensive CLI: local Exadata Storage cell management, plus a distributed shell utility to execute CLI commands across multiple cells
Sun Integrated Lights Out Manager (ILOM): embedded remote management and administration of the hardware
  • 71.
Exadata Storage Plug-in
An Enterprise Manager Grid Control plug-in to monitor and manage Exadata Storage Cells. Works with Enterprise Manager Grid Control 10.2.0.3 and later versions.
  • 72.
  • 73.
Sun Oracle Database Machine: Extreme Performance for All Data Management
Best for data warehousing: parallel query on memory or flash; 4 TB of compressed data in DRAM and 50 TB in flash; 10x-compressed tables with storage offload; overall up to 5x faster than 11.1 for warehousing
Best for OLTP: the only database that scales real-world applications on a grid; Smart Flash Cache provides 1 million I/Os per second; 1.2 TB of compressed data in DRAM and 15 TB in flash; up to 50x compression for archival data; secure and fault tolerant
Best for database consolidation: the only database machine that runs and scales all workloads, with predictable response times in multi-database, multi-application, multi-user environments
  • 74.
Scale Both Performance and Capacity
Scalable: scales to an 8-rack database machine by just adding wires, and further with external InfiniBand switches; scales to hundreds of storage servers and multi-petabyte databases
Redundant and fault tolerant: failure of any component is tolerated, and data is mirrored across storage servers
  • 75.
Fastest Time to Value and Lowest Risk
Building from scratch with components or OWI reference configurations takes weeks to months: pre-implementation system sizing, acquisition of components, installation and configuration, then testing and validation.
The Sun Oracle Database Machine is pre-configured: take delivery, and the database is available for use in less than 1 week, for faster deployment and lower risk.
  • 76.
Resources
Oracle.com: http://www.oracle.com/exadata
Oracle Exadata Technology Portal on OTN: http://www.oracle.com/technology/products/bi/db/exadata
Oracle Exadata white papers:
http://www.oracle.com/technology/products/bi/db/exadata/pdf/exadata-technical-whitepaper.pdf
http://www.oracle.com/technology/products/bi/db/exadata/pdf/migration-to-exadata-whitepaper.pdf
  • 77.

Editor's Notes

  • #7 Need 3 performance related quotes
  • #16 The Sun Oracle Database Machine Full Rack combines Sun Oracle Exadata Storage Servers with Oracle Database in a complete pre-optimized and pre-configured package of software, servers, and storage. Simple and fast to install, the Oracle Database Machine is ready to start tackling your business queries immediately, out of the box. The Full Rack is a building block: you can add more racks as your data warehouse grows. The Sun Oracle Database Machine Full Rack consists of 8 database servers, 14 Sun Oracle Exadata Storage Servers, 3 InfiniBand switches, 1 Gigabit Ethernet switch, and a KVM. Oracle is the first point of contact for all hardware & software issues and will manage the problem to resolution.
  • #19 Infiniband throughput is based on 22 servers with one dual port 40 Gb/sec card per server.
  • #20 14 Exadata cells; 168 disk drives; 64 database server cores total; 3 36-port InfiniBand QDR (40 Gb/sec) switches, enough for adding up to 7 more racks by just adding cables between the racks; Cisco 48-port Ethernet switch (admin); KVM.
  • #45 Archival Compression Best approach for ILM and data archival Use on complete tables or combine with OLTP compression using partitioning Minimal storage footprint Data is always online and always accessible No need to move data to tape or configure multiple disk tiers Run queries against historical data (without recovering from tape) Update historical data Supports schema evolution (add/drop columns, indexes, etc.) Benefits any application with data retention requirements
  • #46 Compression ratios based on Hybrid Columnar Compression “Query Default” and “Archive High”
  • #50 Note that enterprise storage arrays now support flash disks, but no vendor has reported IOPS numbers for their storage array using flash. The I/O performance numbers shown here are measured at the database level, not pure storage statistics that cannot be achieved in practice. Some vendors quote component-level performance numbers that cannot be achieved in a complete system due to bottlenecks in other parts of the system. Also remember, when comparing to other products, that this is a full system including servers, storage, and networking, not a pure storage device.
  • #52 Why is Oracle faster: DB processing in storage; Smart Flash Cache; faster interconnect (40 Gb/sec); more disks; faster disks (15K RPM).
  • #53 TPC-H 1 TB, 11gR1 on Superdome (04/29/09): 64 cores, 768 disks (146 GB, 15K RPM). In-memory execution algorithms cache partitions in memory on different DB nodes; parallel servers (aka PQ slaves) are then executed on the corresponding nodes.
  • #56 Because ASM is a volume management and file system component within the database, it is designed to provide a file management layer optimized for the database. ASM optimizes performance by striping and optionally mirroring files across all the disks under its management. Additionally, ASM provides the ability to alter the storage configuration by adding or removing disks under its management without requiring the database to be taken down. Finally, ASM is cluster-aware, supporting RAC as well as multiple databases under a single ASM domain.
  • #57 A Cell Disk is the virtual representation of the physical disk, minus the System Area LUN (if present), and is one of the key disk objects the administrator manages within an Exadata cell. A Cell Disk is represented by a single LUN, which is created and managed automatically by the Exadata software when the physical disk is discovered. On the first two disks, approximately 13 GB of space is used for the system area. On the other 10 disks, the system area is approximately 50 MB. Cell Disks can be further virtualized into one or more Grid Disks. Grid Disks are the disk entity assigned to ASM, as ASM disks, to manage on behalf of the database for user data. The simplest case is when a single Grid Disk takes up the entire Cell Disk. But it is also possible to partition a Cell Disk into multiple Grid Disk slices. Placing multiple Grid Disks on a Cell Disk allows the administrator to segregate the storage into pools with different performance or availability requirements. Grid Disk slices can be used to allocate “hot”, “warm”, and “cold” regions of a Cell Disk, or to separate databases sharing Exadata disks. For example, a Cell Disk could be partitioned such that one Grid Disk resides on the higher-performing portion of the physical disk and is configured to be triple mirrored, while a second Grid Disk resides on the lower-performing portion of the disk and is used for archive or backup data, without any mirroring. Using ASM, you create disk groups from the Grid Disks, and from that point on the Exadata storage is transparent to the rest of the database and applications.
  • #58 Exadata is able to extend the benefits of IDP to multiple grid disks on a single physical disk. The Grid disks are optionally split and interleaved such that frequently accessed data on all the grid disks are on the higher performing portions of the outer tracks. This ensures that all the applications benefit from the higher performance of the outer tracks of the physical disks.
  • #59 For each disk group, ASM automatically creates a failure group for each Exadata Storage Server, containing the Grid Disks that belong to that server. ASM then mirrors the data such that the mirror copies are on a different failure group and hence a different Exadata Storage Server. That way, ASM is able to protect the database from disk failure and the failure of an Exadata Storage Server.
  • #61 An Exadata administrator can create a resource plan that specifies how I/O requests should be prioritized. This is accomplished by putting the different types of work into service groupings called Consumer Groups. Consumer groups can be defined by a number of attributes including the username, client program name, function, or length of time the query has been running. Once these consumer groups are defined, the user can set a hierarchy of which consumer group gets precedence in I/O resources and how much of the I/O resource is given to each consumer group. This hierarchy determining I/O resource prioritization can be applied simultaneously to both intra-database operations (i.e. operations occurring within a database) and inter-database operations (i.e. operations occurring among various databases). In data warehousing, or mixed workload environments, you may want to ensure different users and tasks within a database are allocated the correct relative amount of I/O resources. For example you may want to allocate 50% of I/O resources to interactive users on the system, 30% of I/O resources to batch reporting jobs, and 20% of the I/O resources to the ETL jobs. This is simple to enforce using the DBRM and I/O resource management capabilities of Exadata storage. When Exadata storage is shared between multiple databases you can also prioritize the I/O resources allocated to each database, preventing one database from monopolizing disk resources and bandwidth to ensure user defined SLAs are met. For example you may have two databases sharing Exadata storage Assume that the business objectives dictate that database A should receive 33% of the total I/O resources available and that database B should receive 67% of the total I/O of resources. To ensure the different users and tasks within each database are allocated the correct relative amount of I/O resources various consumer groups are defined. 
For database A, 60% of the I/O resources are reserved for interactive marketing and 40% are allocated for batch marketing activities. For database B, assume that 30% of the resources are allocated for interactive sales activities and 70% for batch sales activities. These consumer group allocations are relative to the total I/O resources allocated to each database.
  • #62 Instance Caging is very useful for consolidation. We want to support the consolidation of a large number of databases onto a grid, but make sure they share the server resources effectively. In the past, Resource Manager only worked inside a single database instance, but now it works between instances. No one database can usurp the resources of the entire server. This makes managing a consolidated environment much easier. Can be dynamically set, with some limitations.
  • #69 Active Data Guard sends copies of the redo log files to a remote database that applies them continuously. With Active Data Guard, the remote database can be either a physical or a logical standby database. New in 11gR2 is the ability of Active Data Guard to bi-directionally recover from block corruption.
  • #71 Exadata also has been integrated with the Oracle Enterprise Manager (EM) Grid Control to easily monitor the Exadata environment. By installing an Exadata plug-in to the existing EM system, statistics and activity on the Exadata Storage Server can be monitored and events and alerts can be sent to the administrator. The advantages of integrating the EM system with Exadata include: Monitoring Oracle Exadata storage Gathering storage configuration and performance information Raising alerts and warnings based on thresholds set Providing rich out-of-box metrics and reports based on historical data All the functions users have come to expect from the Oracle Enterprise Manager work along with Exadata. By using the EM interface, users can easily manage the Exadata environment along with other Oracle database environments traditionally used with the Enterprise Manager. DBAs can use the familiar EM interface to view reports to determine the health of the Exadata system, and manage the configurations of the Exadata storage. Exadata Storage Servers provide a comprehensive Command Line Interface (CLI) to configure, monitor, and administer the server. In addition, a distributed version of the CLI utility is provided so that commands can be sent to multiple servers to ease the management of multiple servers. Each Exadata Storage Server has ILOM functionality to perform remote hardware administration tasks, like power cycling the servers.
  • #72 (Same notes as #71.)
  • #75 This is all about scale-out. Scale outwards!