1
Oracle Optimized Solution for Oracle Database
Mission Critical Systems Environments
Technical Architecture Presentation
Systems Solutions and Business Planning Group
3
The following is intended to outline our general
product direction. It is intended for information
purposes only, and may not be incorporated into any
contract. It is not a commitment to deliver any
material, code, or functionality, and should not be
relied upon in making purchasing decisions.
The development, release, and timing of any
features or functionality described for Oracle’s
products remains at the sole discretion of Oracle.
4
Agenda
• Introduction: Systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
5
Oracle Optimized Solution for Oracle Database
Enterprise Solutions for Business Critical Environments
Pre-Sized Configurations: Small (M5000), Medium (M5000), Large (M8000), X-Large (M9000)
Flash Acceleration & Disk Storage for Mission Critical Oracle Environments*
• Oracle Flash F5100 Storage Arrays
• Oracle Flash F20 PCI Cards (not pictured)
• Oracle Storage 6000 (pictured right center)
• Oracle ZFS Storage Appliance (pictured bottom right)
Network (VLAN), SAN, Oracle Software
Oracle RAC/Oracle Data Guard/Oracle Solaris Cluster
*Mission Critical Solution for new and legacy Oracle (9i/10g/11g) databases supporting various business critical applications
© 2011 Oracle Corporation – Proprietary and Confidential – Do Not Distribute
6
Oracle Optimized Solution for Oracle Database
Value Proposition
• Simple, live scaling across entire system –
processors, memory, operating system, I/O, etc.
• World Record Performance – PeopleSoft, TPC-H,
App Server/11gR2, JD Edwards
Workload Scaling
• Predictive Self Healing – continue operations
even in light of CPU, memory or I/O failures
• Extensive non-disruptive service and upgrades
Non-Stop Database
Operation
• Complete solution provider
• One deployment organization
• Single support organization
Simplification
Investment
Protection
• Extended system life — in-place upgrades vs.
forklift strategy for competitors
• Broad support for legacy software and hardware
• Leading Total Cost of Ownership
© 2011 Oracle Corporation – Proprietary and Confidential
7
Agenda
• Introduction: systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
8
M8000
M5000
M9000
Sun Systems for Oracle Database Infrastructure
Over 20 Years of Joint Mission Critical Deployments
Enterprise class platforms
• Reliability, availability, serviceability,
and security
• Highly scalable (vertical, horizontal)
• Flash optimized for business critical
database performance acceleration
© 2011 Oracle Corporation – Proprietary and Confidential
9
Oracle SPARC Enterprise M5000 Example
Physical and Logical View
• Almost everything redundant
• However, Memory/CPU changes require system
(node) outage
© 2011 Oracle Corporation – Proprietary and Confidential
10
SPARC Enterprise M5000 Example
Basic Specifications and Configuration
• Typical configuration: 2 system boards (SBs)
– XSBs with IOUs
• Single domain recommended
– Two domains (one Uni-XSB each) also acceptable

SPARC ENTERPRISE M5000 SERVER
Enclosure: 10 rack units
SPARC64 VI processors: 2.15 GHz; 5 MB L2 cache; up to 8 dual-core chips
SPARC64 VII/VII+ processors: 2.4 GHz with 5 MB L2 cache; 2.53 GHz with 5.5 MB L2 cache; 2.66 GHz with 11 MB L2 cache*; up to 8 quad-core chips
Memory: up to 512 GB; 64 DIMM slots
Internal I/O slots: 8 PCI Express; 2 PCI eXtended
External I/O chassis: up to 4 units
Internal storage: serial attached SCSI; up to 4 hard drives
Dynamic domains: up to 4
* For 11 MB cache support, the new MOBO_B (SC+ chip) is required.
© 2011 Oracle Corporation – Proprietary and Confidential
11
SPARC Enterprise M9000-32 Example
Specifications and Physical View
SPARC ENTERPRISE M9000-32 SERVER
Enclosure: one cabinet
SPARC64 VI processors: 2.28 GHz with 5 MB L2 cache; 2.4 GHz with 6 MB L2 cache; up to 32 dual-core chips
SPARC64 VII/VII+ processors: 2.52 GHz with 6 MB L2 cache; 2.88 GHz with 6 MB L2 cache; 3.0 GHz with 12 MB L2 cache*; up to 32 quad-core chips
Memory: up to 2 TB; 256 DIMM slots
Internal I/O slots: 64 PCI Express
External I/O chassis: up to 16 units
Internal storage: Serial Attached SCSI; up to 32 drives
Dynamic domains: up to 24
* For 12 MB cache support, the new CMU_C (SC+ chip) is required.
© 2011 Oracle Corporation – Proprietary and Confidential
12
Modes and Mixed Configuration of CPUs
[Diagram: Domains 0–2 spread across CMU#0–CMU#3 – a CMU mounted with SPARC64 VII/VII+ only, a CMU mounted with SPARC64 VI only, and two CMUs of mixed CPU configuration]
© 2011 Oracle Corporation – Proprietary and Confidential
13
Implications of CPU Mode and Dynamic Reconfiguration
• To verify the domain mode
> on the XSCF: showdomainmode
> on the domain: prtdiag
• Set the mode to compatible if there is a possibility of adding SPARC64 VI CPUs to a domain that has only SPARC64 VII/VII+ (a sketch follows the table below)

Domain CPU Configuration | Value of cpumode   | Current CPU Operational Mode  | CPU Configuration Addable by DR Operation
SPARC64 VII              | auto               | SPARC64 VII enhanced mode     | SPARC64 VII or VII+
SPARC64 VII/VII+         | compatible         | SPARC64 VI compatibility mode | Any CPU
SPARC64 VI/VII/VII+      | auto or compatible | SPARC64 VI compatibility mode | Any CPU
SPARC64 VI               | auto or compatible | SPARC64 VI compatibility mode | Any CPU
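A hedged sketch of checking and changing the mode from the XSCF follows. Domain ID 0 is illustrative, and (as an assumption worth verifying in the XSCF reference manual) the domain must be powered off before cpumode can be changed:

XSCF> showdomainmode -d 0                        (reports cpumode and current operational mode)
XSCF> setdomainmode -d 0 -m cpumode=compatible   (allow SPARC64 VI boards to be added later)
XSCF> showdomainmode -d 0                        (verify before powering the domain back on)

On a running domain, prtdiag from Oracle Solaris reports the operational mode.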
© 2011 Oracle Corporation – Proprietary and Confidential
14
Mixing VII+ Processors With VI or VII Processors
• To achieve the 11MB or 12MB L2$ capacity, two
conditions must be met:
– All four processors on the system board must be SPARC64
VII+. None of the four can be either SPARC64 VI or
SPARC64 VII.
– The motherboard on the M4000/M5000 must be at least
version MOBO_B, and the CMU on the M8000/M9000 must
be at least version CMU_C
• The new MOBO_B and CMU_C have the new SC+
chip, which will provide L2$ addressing up to 12MB.
• When a SPARC64 VII+ is set to half of its L2$, a message notifying of this event will be displayed and logged.
© 2011 Oracle Corporation – Proprietary and Confidential
15
Oracle SPARC Enterprise M9000-32 Example
RAS and Logical View
1–8 CMU/IOU Pairs
• 8 CMUs max
• CMU and IOU
hard paired
• Everything hot
swap: CMU, IOU,
XB, XSCF,
Clock, etc.
© 2011 Oracle Corporation – Proprietary and Confidential
16
SPARC Enterprise M9000-32 Example
• 8 Uni-XSBs max
– Quad-XSB (shown) is not recommended
• Grouped as a single hard domain
• Dynamic reconfiguration is used to give fine-grained upgrades and service
1–8 CMU/IOU Pairs
© 2011 Oracle Corporation – Proprietary and Confidential
17
Oracle Database 11g on M-series and Solaris
Record-breaking Performance
SPARC Enterprise M9000 (2.88 GHz) – World Record: Supports the database tier of the SPECjAppServer2004 benchmark and beats IBM p595 with DB2 and HP Superdome 9000. Delivers breakthrough performance of 28,648.74 SPECjAppServer2004 JOPS@Standard, with SPARC Enterprise T5440 servers at the application tier(2).
SPARC Enterprise M9000 (3.0 GHz) – World Record: Top TWO non-clustered Oracle Database 11g decision support results on the TPC-H benchmark, with performance of 386,478.3 QphH@3000GB. Beats the POWER6-based IBM p595 with Sybase IQ by 2.5x(1).
SPARC Enterprise M4000/M5000 (2.53 GHz) – New World Record: An M4000 running Oracle PeopleSoft N.A. Payroll (240K employees) and Oracle Database 11g, accelerated by the Sun Storage F5100 Flash Array, is 2.1x faster than IBM. An M5000 running 500K employees processed payroll 18% faster than an IBM z10-class mainframe with a list price of over $6M, and defeated an HP Itanium-based system(3).
SPARC Enterprise M5000 (2.66 GHz) – World Record: An M5000 server configured with Oracle's Sun Storage F5100 Flash Array and running Oracle Database 11g Release 2 software supported a world record result on Oracle PeopleSoft Enterprise Financials 9.0(4).
SPARC Enterprise M4000 (2.53 GHz) – World Record: Best database hardware for Oracle PeopleSoft Enterprise Campus solutions. Oracle Solaris with Oracle Database 11g, boosted by the Sun Flash Accelerator F20 card, delivered up to 40% improvement on batch jobs compared to the Itanium-based HP rx6600 solution(7).
SPARC Enterprise M5000 (2.53 GHz) – New World Record: Posts the new world record on the Oracle Hyperion Essbase ASO benchmark. Essbase is a component of Oracle Fusion Middleware that uses Oracle Database 11g to manage over one billion data items(5).
SPARC Enterprise M3000 (2.53 GHz) – New World Record: An M3000 server at the database tier enabled Oracle's SPARC T3-1 server running Oracle JD Edwards EnterpriseOne to post a record result of 5,000 users with 0.523 seconds average transaction time, beating the IBM POWER7 result by 25%(6).
Results as of 03/22/11. Footnotes and required benchmark disclosures on slide 73.
© 2011 Oracle Corporation – Proprietary and Confidential
18
Agenda
• Introduction: systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
19
• Oracle is the leader in Unix system scaling – best able to handle
your workload growth
– The family scales to 2x the sockets/cores/threads of the competition: 64/256/512 (M9000-64)
– Oracle Solaris is the only OS that scales to 512 threads today, and has been doing so for years
• Every other OS will need a major update and shakeout time to handle the thread counts coming with the next generation of high core count/high thread count processors
– Flexible and easy – just add boards
• Vs. the competition's more complex pre-installation of boards and additional nodes
– Dynamic reconfiguration lets you easily add new resources to an operating database
• Competitors' offerings are more complex and limiting, and require extra software costs
Workload Scaling
Deployment Longevity, Legacy Support
Oracle Differentiator
© 2011 Oracle Corporation – Proprietary and Confidential
20
Investment Protection — TCO
Support for Legacy Technologies
• Oracle also preserves your other investments
– 10+ year guaranteed binary compatibility – save time and
money, load and run old applications without recompiling
– Run legacy software on new hardware via Oracle Solaris
Containers
• Oracle Solaris 8 or 9; Oracle Database 9i, 10g, 11g; and older custom code
– Broad compatibility with installed non-Oracle technologies
• SANs and networks
• Applications
• Management tools
• ...many more
Oracle Differentiator
© 2011 Oracle Corporation – Proprietary and Confidential
21
Links to competitive details in reference section.
Non-Stop Database Operations
Predictive Self Healing
• Mitigating risk with unique features
– Can operate after memory and chip failures
– Guaranteed data path integrity – network to disk
– Provides fault and electrical isolation between domains
• Predictive Self Healing – detects and corrects a multitude of system failures without service disruption, and retries failed instructions
– Includes processor or system boards, memory, all levels of
cache, backplane, power source, power supply, fans, network
and storage connections, service processor, storage
components, etc.
Oracle Differentiator
© 2011 Oracle Corporation – Proprietary and Confidential
22
• Simplify management with unique online service and
upgrade features
– Live HW addition or replacement (no reboot required) of system
boards* and memory*, I/O modules
• HP requires a performance trade-off for HA
• IBM does not support live system hardware upgrades
(hot swappable CPU/Memory boards)
– Live expansion of operating database instance after adding CPUs,
memory, I/O channels, storage capacity
• Online service and upgrades also include
– Repair and replacement of virtually all components
– Database, firmware, microcode and Oracle Solaris updates
– Live migration of databases between Oracle Solaris Containers
Non-Stop Database Operations (cont.)
Online Service and Upgrades
*Hot swap system and memory boards in M8000/M9000 only.
Oracle Differentiator
© 2011 Oracle Corporation – Proprietary and Confidential
23
Non-Disruptive Service, Repair and Upgrades
Eliminate Planned Downtime
• Live hardware expansion or
replacement
– System boards
(CPUs and memory)
– Power supplies and fans
– Network and storage
connections
– Service processor
– Storage system components
• Live growth of an operating
environment
– CPUs, memory, I/O channels,
storage capacity
• Live upgrades of
– Oracle Solaris
– Firmware and microcode
– Database
• Live migration of Databases
between Oracle Solaris
Containers
© 2011 Oracle Corporation – Proprietary and Confidential
24
Investment Protection (and Availability)
Deployment Longevity, Legacy Support
• Oracle saves you money and downtime by providing a longer
system life
– Designed for in-system upgrades (3 and counting for M-Series)
vs. forklift replacements required by competitors
• Oracle upgrades can be 1/5 the cost of IBM*
– Add new processor speeds and generations to existing vs. 100%
replacement required by IBM
– New processors support existing I/O cards vs. replacement
sometimes required by competitors
– Extend system life by adding FlashFire to boost database performance by up to 2–4x
– In-system upgrades can be done non-disruptively, forklift
replacements can’t
*Links to competitive details in reference section.
Oracle Differentiator
© 2011 Oracle Corporation – Proprietary and Confidential
25
Agenda
• Introduction: systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
26
Oracle’s FlashFire Technology
Oracle’s Flash Accelerator
F20 PCIe Card
Oracle’s Storage F5100 Flash Array
Oracle’s Flash Module
© 2011 Oracle Corporation – Proprietary and Confidential
27
Quick Intro to FlashFire
• Based on Oracle Flash Modules
– ‘Different’ from the standard SSDs offered by others
– Optimized for database acceleration
• Unique RAS, environmental and performance
characteristics
Oracle’s Storage F5100 Flash Array
© 2011 Oracle Corporation – Proprietary and Confidential
28
World Record Flash Performance
Storage Performance Council SPC-1C
• Oracle F5100 driven by M5000 vs. IBM EXP12s
driven by P575
– Nearly 7x better performance
• Delivered this in half the space of IBM
– 2.7x better access density (IOPS/GB)
– 2.5x better service times (better LRT and max recorded)
– 3.9x better price/performance
– 31% better $/GB
SPC-1C, SPC-1C IOPS, and SPC-1C LRT are trademarks of Storage Performance Council (SPC). See http://www.storageperformance.org for more
information. Sun Storage F5100 Flash Array SPC-1C submission identifier C00010 results of 300,873.47 SPC-1C IOPS over a total ASU capacity of
1374.390 GB using unprotected data protection, a SPC-1C LRT of 0.33 milliseconds, a 100% load over all ASU response time of 2.63 milliseconds and a
total TSC price (including three-year maintenance) of $151,381. This compares with IBM System Storage EXP12S SPC-1C/E Submission identifier
E00001 results of 45,000.20 SPC-1C IOPS over a total ASU capacity of 547.61 GB using unprotected data protection level, a SPC-1C LRT of 0.46
milliseconds, a 100% load over all ASU response time of 6.95 milliseconds and a total TSC price (including three-year maintenance) of $87,468.The Sun
Storage F5100 Flash Array is a 1RU (1.75") array. The IBM System Storage EXP12S is a 2RU (3.5") array.
© 2011 Oracle Corporation – Proprietary and Confidential
29
World Record FlashFire Price/Performance
Storage Performance Council SPC-1C
• Oracle F20 driven by X4270M2 vs. IBM EXP12s
driven by P570
– 9x better price/performance
• In the same space (2 RU for Oracle, including our
workload server!)
– 6x better access density (IOPS/GB)
– 60% better performance
• At 1/5th the TSC price
– 50% better $/GB
SPC-1C, SPC-1C IOPS, and SPC-1C LRT are trademarks of Storage Performance Council (SPC). See http://www.storageperformance.org for more
information. Sun Flash Accelerator F20 PCIe Card SPC-1C submission identifier C00011 results of 72521.11 SPC-1C IOPS over a total ASU capacity of
147.413GB using unprotected data protection, and a total TSC price (not including three-year maintenance) of $15,553.55. This compares with IBM
System Storage EXP12S SPC-1C/E Submission identifier E00001 results of 45,000.20 SPC-1C IOPS over a total ASU capacity of 547.61GB using
unprotected data protection and a total TSC price (including three-year maintenance) of $87,468. The Sun Fire X4270M2 server with Sun Flash
Accelerator F20 PCIe cards is a 2RU (3.5") server, while the IBM System Storage EXP12S is a 2RU (3.5") array.
© 2011 Oracle Corporation – Proprietary and Confidential
30
Agenda
• Introduction: systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
31
Oracle Storage 6000 Series
• Best value: 3 of the top 10 SPC-2 results
• Best-in-class data rate
– 5.6 SPC-2 GB/sec for the 6780
– 1.2 SPC-2 GB/sec for the 6180
• Outstanding service times
– 160 μs SPC-1 write LRT for the 6780
– 340 μs SPC-1 write LRT for the 6180
© 2011 Oracle Corporation – Proprietary and Confidential
32
Sun Storage 6000: Data Rate Performance
• The SPC-2 benchmark results below show both the competitive advantage and the generational improvement of the Sun Storage 6780 array configured with 8Gb fibre channel (FC) host interfaces and RAID 5 and RAID 6 data-protection schemes
– The Sun Storage 6780 delivered the best price/performance of any of the top ten SPC-2 performers. All systems that performed better had at least 6x higher tested storage configuration (TSC) prices
– The Sun Storage 6780 array delivered 58% better SPC-2 price/performance than
the IBM DS5300 on the SPC-2 benchmark in both RAID 5 and RAID 6
configurations
– The Sun Storage 6780 array delivered nearly identical performance for both
RAID 5 and RAID 6 configurations, showing only 1.6% less performance using
double-parity data protection (RAID 6) vs. single parity (RAID 5)
– The Sun Storage 6780 array delivered 4x more SPC-2 MB/sec than the previous
generation of Oracle's StorageTek 6540 array
– The Sun Storage 6780 array provides 1.7x better SPC-2 price/performance than
the previous-generation StorageTek 6540 array
© 2011 Oracle Corporation – Proprietary and Confidential
33
Sun Storage 6000: Transactional Performance
• The SPC-1 benchmark results below show both the competitive advantage and the generational improvement of the Sun Storage 6780
array configured using 8Gb fibre channel (FC) host interfaces
– The Sun Storage 6780 array delivered 2.5x better performance, 3x better price/
performance, and over 2x better response times than the EMC CLARiiON CX3
Model 40
– The Sun Storage 6780 array delivered over 2x better SPC-1 price/performance
than the IBM DS5300 on the SPC-1 benchmark
– The Sun Storage 6780 array delivered 34% more SPC-1 IOPS than the previous
generation of Oracle's StorageTek 6540 array
– The Sun Storage 6780 array delivered a SPC-1 LRT of 1.78 milliseconds, which is
2.7x better than the StorageTek 6540 array SPC-1 LRT of 4.82 milliseconds
© 2011 Oracle Corporation – Proprietary and Confidential
34
Sun Storage 6000: Best Practices
• Use the SAME strategy throughout for all disk storage
– Stripe And Mirror Everything
• Use ASM to manage the presented 6000 series LUNs
• For high insert-rate OLTP (see the sketch below)
– Ensure redo logs have separate:
– Paths, including HBAs, switch ports and controller ports
– LUNs – make sure the SAME layout is not mixed with other tables
– Sharing controllers is OK with read-heavy workloads: the 6x80 provides good cache management
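As an illustration of keeping redo on dedicated LUNs, a minimal ASM sketch follows; the disk group name, device paths and log size are hypothetical:

SQL> -- dedicated disk group on LUNs reserved for redo only (the array provides RAID)
SQL> CREATE DISKGROUP redo_dg EXTERNAL REDUNDANCY
  2    DISK '/dev/rdsk/c2t1d0s6', '/dev/rdsk/c3t1d0s6';
SQL> -- place a redo log group in the dedicated disk group
SQL> ALTER DATABASE ADD LOGFILE GROUP 4 ('+REDO_DG') SIZE 1G;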
© 2011 Oracle Corporation – Proprietary and Confidential
35
Agenda
• Introduction: systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
36
SPARC Enterprise M-Series Servers Dynamic
Reconfiguration Investment Protection Choice
• Dynamic reconfiguration
– Add or remove system boards without Oracle
instance downtime
• Database system server size can be increased without a forklift approach
• M-Series extended system control facility (XSCF) management
– Access the system remotely
• Securely via SSH or SSL
– Dynamic Reconfiguration
• Add CPU Boards
• Add Memory Boards
• Add IO Boards
Oracle Differentiator
© 2011 Oracle Corporation – Proprietary and Confidential
37
SPARC Enterprise M-Series Servers — Extended
System Control Facility (XSCF) Command Line
XSCF> showhardconf
SPARC Enterprise M4000 ;
+ Serial:BCF0712005; Operator_Panel_Switch:Locked;
+ Power_Supply_System:Single; SCF-ID:XSCF#0;
+ System_Power:On; System_Phase:Cabinet Power On;
Domain#0 Domain_Status:Running;
MBU_A Status:Normal; Ver:0101h; Serial:BF064202NR;
+ FRU-Part-Number:541-0894-02 ;
+ Memory_Size:16 GB;
CPUM#0-CHIP#0 Status:Normal; Ver:0201h; Serial:PP0631P640 ;
+ FRU-Part-Number:CA06761-D104 A3 ;
+ Freq:2.150 GHz; Type:16;
+ Core:2; Strand:2;
Snip
Power_Status:On; AC:200 V;
XSCF>
XSCF provides a command line interface to manage and
control the M-Series system via remote SSH or serial console
© 2011 Oracle Corporation – Proprietary and Confidential
38
SPARC Enterprise M-Series Servers — Extended System
Control Facility (XSCF) Web Console
XSCF provides a web-based interface to manage and control
the M-Series system via SSL
© 2011 Oracle Corporation – Proprietary and Confidential
39
Best Practices for Dynamic Reconfiguration with DB
[Diagram: two domain views (Domain #0) connected through the crossbar (XB)]
• Place all CMU and IOU in a single domain
– If multi-domain is required, ensure CMU/IOUs that could be moved between domains have no I/O devices, for faster DR configuration
• Configure CMUs as Uni-XSB
– Quad-XSB is not best practice, as it increases dynamic reconfiguration steps
© 2011 Oracle Corporation – Proprietary and Confidential
40
Removing System Boards Live
For Reassignment or Replacement
• No reboot required: modest changes to DB CPU count are OK with an instance restart (see the sketch below)
• Dynamic Intimate Shared Memory allows adds/removes/deletes for Oracle memory
– But take care in Oracle memory sizing to ensure no instance restart is required
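A hedged sketch of the DR flow from the XSCF; the XSB number 01-0 and domain ID 0 are illustrative (see the addboard(8) and deleteboard(8) references for exact options):

XSCF> deleteboard -c disconnect 01-0    (release the XSB from the running domain)
XSCF> addboard -c configure -d 0 01-0   (add or reassign the board to domain 0)
XSCF> showboards -a                     (verify XSB assignment and status)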
© 2011 Oracle Corporation – Proprietary and Confidential
41
Adding System Boards Live
For Reassignment or Upgrades
[Diagram: Domains 0–2 across CMU#0–CMU#3 – a CMU mounted with SPARC64 VII only, a CMU mounted with SPARC64 VI only, and two CMUs of mixed CPU configuration]
• No reboot required: modest changes to DB CPU count are OK without an instance restart
• Dynamic Intimate Shared Memory allows adds/removes/deletes for Oracle memory
– But take care in Oracle memory sizing to ensure no instance restart is required
• Mixed memory and CPUs allowed
– Best practice is to use the same memory type within each SB
© 2011 Oracle Corporation – Proprietary and Confidential
42
Agenda
• Introduction: systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
43
RAC for Non-Stop Operation
For Small and Medium Configurations: M5000
[Diagram: two-node RAC example – two M5000 servers with memory coherence between instances, connected to shared storage and SAN]
• Straightforward design with No Single Point of Failure (NSPF)
• Most of the server, and all of the SAN and storage, is hot swap
• But RAC is needed to ensure NSPF during CPU/memory swaps
© 2011 Oracle Corporation – Proprietary and Confidential
44
SGA, ISM and Non-Stop Operation
• Intimate Shared Memory –
Performance Benefits
– Locked – no swap, mutexes
– Saves kernel CPU, memory
resources
– Single cache for all Oracle
processes, IPC
• But cannot be resized
– So care with Dynamic
Reconfiguration
[Diagram: Oracle shared processes attached to a single locked (ISM) shared memory segment]
© 2011 Oracle Corporation – Proprietary and Confidential
45
Dynamic Intimate Shared Memory
• Gives nearly all performance
benefits of ISM
– Also helps NUMA MPO
• But CAN be resized
– Much greater flexibility with
Dynamic Reconfiguration
– Allows dynamic resizing of the SGA
• If no DR or dynamic SGA
sizing needed, use ISM.
[Diagram: Oracle shared processes attached to a locked (DISM) shared memory segment that can be resized]
© 2011 Oracle Corporation – Proprietary and Confidential
46
No Reboot or Instance Restart!
Oracle Database
+ RAC
+ ASM
+ M-Series RAS
+ XSCF & Dynamic Reconfiguration
+ Solaris Optimizations (e.g. DISM)
NON-STOP OPERATIONS
• Set SGA_MAX_SIZE carefully – it is only read at Oracle instance restart! (see the sketch below)
• IBM requires a reboot, not just an instance restart!
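For illustration, a hedged sketch of leaving DISM headroom (the values are arbitrary): on Oracle Solaris, Oracle uses DISM when SGA_MAX_SIZE is set larger than the running SGA size, so the SGA can grow online up to that cap.

SQL> ALTER SYSTEM SET sga_target = 24G SCOPE=BOTH;     -- working size, resizable online via DISM
SQL> ALTER SYSTEM SET sga_max_size = 48G SCOPE=SPFILE; -- static cap, read only at instance restart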
© 2011 Oracle Corporation – Proprietary and Confidential
47
Configuration Rule Best Practices
• For mid-range systems, configure each server as a RAC node instance
– Each node defines the availability granularity level
• For high-end systems, use M9000-32 or RAC
multiples of this system for best availability and
performance
• Use Dynamic Intimate Shared Memory (DISM) if
Dynamic Reconfiguration will be used
© 2011 Oracle Corporation – Proprietary and Confidential
48
Agenda
• Introduction: systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
49
Do you have a Database I/O Bottleneck?
Using ADDM / AWR / Statspack
• Statspack: free PL/SQL package, downloadable since Oracle 8.1.7
• AWR since 10g (a report sketch follows the excerpt below)
• Use SWAT to determine what is causing waits
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
---------------------------- ---------- -------- ---- ------ ----------
db file sequential read 19,858,182 72,997 4 41.0 User I/O
CPU time 55,805 31.4
log file sync 3,840,570 33,452 9 18.8 Commit
log file parallel write 3,356,001 12,749 4 7.2 System I/O
db file scattered read 3,672,892 10,018 3 5.6 User I/O
-------------------------------------------------------------
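For reference, the standard way to pull such a report in an AWR-licensed environment (Statspack users run spreport.sql instead):

SQL> -- generate an AWR report; prompts for snapshot IDs and output format
SQL> @?/rdbms/admin/awrrpt.sql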
© 2011 Oracle Corporation – Proprietary and Confidential
50
Database I/O Bottlenecks: Wait Events
• Typical I/O wait types, foreground
– db file sequential read: disk to database buffer cache wait
– db file scattered read: wait for multi-block read into buffer cache
– read by other session: another session waiting for block above
– direct path read: read bypassing buffer cache directly into PGA
• Typical I/O wait types, background
– log file parallel write: write log data (typically to NVRAM) from LGWR
– db file parallel write: write to tables async from DBWR
– log file sequential read: to build archive log, DataGuard
– Log archive I/O, RMAN, etc.
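A quick sketch for spotting the heaviest I/O waits directly from the standard v$ views, without a full report:

SQL> -- top User I/O and System I/O waits since instance startup
SQL> SELECT event, total_waits,
  2         ROUND(time_waited_micro/1e6) AS time_waited_sec
  3    FROM v$system_event
  4   WHERE wait_class IN ('User I/O', 'System I/O')
  5   ORDER BY time_waited_micro DESC;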
© 2011 Oracle Corporation – Proprietary and Confidential
51
Typical Storage Bottlenecks
• Maximum IOPS delivered
– Talked about the most, but least
important for enterprise Apps
– Really measures concurrency
• Maximum data rate delivered
– Really measures channel and disk bandwidth
• Shortest service time delivered
– Usually most important for databases
• All are dependent on the I/O workload
– Read-write mix
– Transfer/block size
– ‘Sequentiality’/randomness
[Diagram: demand vs. supply for IOPS, MB/sec and service time in milliseconds]
© 2011 Oracle Corporation – Proprietary and Confidential
52
Storage I/O Interconnect Template (Small/Medium)
[Diagram: two server I/O trees on PCIe x8, each with an FC HBA, Flash and a 10GE/IB NIC in slots 0–4; dual 8 Gb FC paths through two 24-port FC switches to controllers A and B of a 6180 FC array with CSM2 expansion trays; "RDAC" multipath I/O]
© 2011 Oracle Corporation – Proprietary and Confidential
53
Storage I/O Interconnect Template (Large/XLarge)
[Diagram: two BASE I/O server trees on PCIe x8 (slots 6 & 7 unused), each with an FC HBA, SAS and a 10GE/IB NIC; dual 8 Gb FC paths through two 24-port FC switches to controllers A and B of a 6780 FC array with CSM2 expansion trays; "RDAC" multipath I/O]
© 2011 Oracle Corporation – Proprietary and Confidential
54
What is Oracle ASM?
[Diagram: with ASM, the Oracle Database runs on ASM file system & volume management over the operating system and hardware, replacing the separate logical volume manager and file system layers of the traditional stack]
• With Oracle 10g/11g, ASM provides
– The management simplicity of a file system
– Performance equal to raw disks
– The cluster file system required for RAC
– Reduced storage product and management costs (a query sketch follows below)
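For example, a quick health check of the disk groups ASM presents to the database, using the standard v$ views:

SQL> -- capacity, state and redundancy type of each ASM disk group
SQL> SELECT name, state, type, total_mb, free_mb
  2    FROM v$asm_diskgroup;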
© 2011 Oracle Corporation – Proprietary and Confidential
55
Mission Critical Requires “Always Online”
ASM Rebalancing
• Automatic online rebalance whenever the storage configuration changes (a sketch follows below)
[Diagram: adding a disk to a disk group triggers an automatic rebalance of data across all disks]
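A minimal sketch of triggering and watching a rebalance; the disk group name, device path and power level are illustrative:

SQL> ALTER DISKGROUP data ADD DISK '/dev/rdsk/c4t2d0s6' REBALANCE POWER 8;
SQL> -- monitor the online rebalance; the operation stays transparent to the database
SQL> SELECT operation, state, power, est_minutes FROM v$asm_operation;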
© 2011 Oracle Corporation – Proprietary and Confidential
56
OSB Architecture Overview
© 2011 Oracle Corporation – Proprietary and Confidential
57
Backup I/O Interconnect Template (Large example)
[Diagram: database server BASE I/O trees on PCIe x8 (slots 6 & 7 unused) with FC HBAs, SAS and QDR IB; IPoIB-CM over 40 Gb through two 36-port QDR IB switches to T3-1 media servers, each with FC HBAs and a QDR IB link driving the tape drives]
© 2011 Oracle Corporation – Proprietary and Confidential
58
Agenda
• Introduction: systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
59
Four FlashFire Deployment Practices
• 11gR2 Database Flash Cache – Single Node
• 11gR2 Database Flash Cache – RAC
• Use of Flash Disk Groups (ASM recommended)
– Proven to work very well with previous Oracle Database versions
– Single instance only
• Combination of Flash Cache and Flash Disk Groups
– Single instance only
© 2011 Oracle Corporation – Proprietary and Confidential
60
11gR2 Database Flash Cache
• Acts as extension of SGA
buffer cache
• Reduces physical read I/Os
– Converts to logical I/O in DB
• Principally accelerates read
intensive workloads
[Diagram: the Database Flash Cache sits between the buffer cache and storage; many I/Os are satisfied from the flash cache, and few I/Os reach the disk storage. A parameter sketch follows below]
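Enabling the 11gR2 Database Flash Cache is a two-parameter change via the documented db_flash_cache_file and db_flash_cache_size parameters; the device path and size below are purely illustrative:

SQL> -- point the flash cache at an F5100/F20 device and size it (takes effect at restart)
SQL> ALTER SYSTEM SET db_flash_cache_file = '/dev/rdsk/c1t4d0s6' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET db_flash_cache_size = 96G SCOPE=SPFILE;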
© 2011 Oracle Corporation – Proprietary and Confidential
61
Flash Cache Acceleration
• 5x better transaction times
• 5x better transaction rates
• 3x better power efficiency than HDD
© 2011 Oracle Corporation – Proprietary and Confidential
62
Flash Disk Group Configuration
[Diagram, logical view: M-Series server connected over 8Gb FC x 8 and dual SAS x4 links to mirrored F5100 Flash Arrays]
• ASM Normal Redundancy (Flash Modules mirrored); a sketch follows below
• Failure groups across SAS domains
– Across chassis (shown) is even better
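A minimal sketch of the normal-redundancy layout described above, with one failure group per SAS domain; the names and device paths are hypothetical:

SQL> CREATE DISKGROUP flash_dg NORMAL REDUNDANCY
  2    FAILGROUP sas_dom1 DISK '/dev/rdsk/c5t0d0s6', '/dev/rdsk/c5t0d1s6'
  3    FAILGROUP sas_dom2 DISK '/dev/rdsk/c6t0d0s6', '/dev/rdsk/c6t0d1s6';
SQL> -- mirrored extents always land in different failure groups (different SAS domains)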
© 2011 Oracle Corporation – Proprietary and Confidential
63
Oracle’s Sun Storage F5100 Flash Array — Database
Accelerator and Flash Cache Target
• Large production database system performance increased with
database on Flash
– A production database of over 200 million objects was placed on Flash and yielded 5x the performance of the existing system, on both Oracle 10gR2 and Oracle 11gR2
• In addition, the array was used as a target for Flash Cache with Oracle 11gR2
– Increases system efficiency even further
• 2x to 5x speedup with the database on hard disk drives
• 3x further speedup with the database on Flash, on top of what was already achieved by moving the database to Flash in the first place!
• Focuses on two important metrics
– Database response time, driven by I/O latency
– Database bandwidth (throughput)
Oracle Differentiator
© 2011 Oracle Corporation – Proprietary and Confidential
64
Oracle FlashFire Technology Reduces Database Latency
Increased System Efficiency: Accelerating a Large Production Oracle Database – Improving I/O Response Time by 100x
[Chart: the F5100 Flash Array improves I/O performance; use of Flash provides better response time]
© 2011 Oracle Corporation – Proprietary and Confidential
65
Oracle FlashFire Technology Increases Database Throughput
Increased System Efficiency: Accelerating a Large Production Oracle Database – Increasing Data I/O Bandwidth by 10x
[Chart: the F5100 Flash Array increases database productivity; use of Flash yields higher MB/s]
© 2011 Oracle Corporation – Proprietary and Confidential
66
Oracle’s Sun Storage F5100 Flash Array – Adding Flash
Cache Target and Dynamically Changing Settings
© 2011 Oracle Corporation – Proprietary and Confidential
67
Agenda
• Introduction: systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
68
Next Steps
• Core Content in the Solution
– Systems Site (main), Systems Site (solutions)
– Main pages for SPARC, Flash, Disk Storage, Unified Storage, Oracle 11g, OEM Ops Center, Oracle Solaris and Oracle Solaris Cluster, and the StorageTek Workload Analysis Tool (SWAT)
• Oracle Customer Successes
– For all Servers (for Oracle database and M-Series, see Kookmin
Bank, ETSA Utilities, Ricoh Company Ltd., StubHub and others)
• Services Information
– Advanced Customer Services for Servers and Storage
Contact your local Oracle sales office for an
assessment of how we can help your organization
© 2011 Oracle Corporation – Proprietary and Confidential
69
Resources (Additional)
Select technical content
• M-Series Architecture Paper
• High Availability Using M-Series paper
• External Benchmarks
• Oracle Enterprise Manager Ops Center:
Changing the Economics of Datacenter Operations
• Sun Systems Handbook
• Brief info on Oracle database Flash Recovery Area
• StorageTek Workload Analysis Tool
• Dynamic SGA Tuning of Oracle Database on Oracle Solaris with DISM
• Oracle Optimized Solution for Oracle Database: Storage Best Practices
• Oracle Optimized Solution for Oracle Secure Backup
• Oracle Solaris: Internal and OTN
• Oracle Solaris Cluster: Internal and External
© 2011 Oracle Corporation – Proprietary and Confidential
70
© 2011 Oracle Corporation – Proprietary and Confidential
71
© 2011 Oracle Corporation – Proprietary and Confidential
72
Appendix
© 2011 Oracle Corporation – Proprietary and Confidential
73
Required Substantiation for Benchmarks
1) TPC-H, QphH and $/QphH are trademarks of the Transaction Processing Performance Council (TPC). More info at www.tpc.org. TPC-H@3000GB as of 3/22/2011. Sun SPARC Enterprise M9000 server: 386,478.3 QphH@3000GB, $19.25/QphH@3000GB, available 09/20/2011 (world record non-clustered TPC-H 3TB result). Sun SPARC Enterprise M9000 server: 198,907.5 QphH@3000GB, $16.58/QphH@3000GB, available 12/09/2010. IBM POWER 595 Model 9119-FHA: 156,537.3 QphH@3000GB, $20.60/QphH@3000GB, available 11/24/2009 (best IBM TPC-H 3000GB performance (QphH) and price/performance ($/QphH) result).
2) SPEC and SPECjAppServer are registered trademarks of the Standard Performance Evaluation Corporation. Results as of March 7, 2011. Source: www.spec.org. SPECjAppServer2004. App tier: Sun SPARC Enterprise T5440 cluster (20 chips, 160 cores), 28,648.74 SPECjAppServer2004 JOPS@Standard; DB tier: Sun SPARC Enterprise M9000. App tier: HP BL870c cluster (68 chips, 136 cores), 28,463.03 SPECjAppServer2004 JOPS@Standard; DB tier: HP Superdome 9000. App tier: IBM HS21 cluster (32 chips, 128 cores), 22,634.13 SPECjAppServer2004 JOPS@Standard; DB tier: IBM p595.
3) Oracle's PeopleSoft Payroll NA 9.0. Sun SPARC Enterprise M4000 (4x 2.53GHz SPARC64): 43.78 min; IBM Z990 (6 gen1): 91.70 min; HP rx6600 (4x 1.6GHz Itanium2): 68.07 min. Oracle's PeopleSoft Payroll NA 9.0. Sun SPARC Enterprise M5000 (8x 2.53GHz SPARC64 VII): 50.11 min; IBM z10 (9 gen1): 58.96 min; HP rx7640 (8x 1.6GHz Itanium2): 96.17 min. www.oracle.com/apps_benchmark/html/white-papers-peoplesoft.html
4) Oracle's PeopleSoft Financials 9.0. SPARC T3-1 (1x 1.65GHz SPARC-T3), Oracle's SPARC Enterprise M5000 (8x 2.53GHz SPARC64): 38.66 min. http://www.oracle.com/us/solutions/benchmark/apps-benchmark/ora-fin-d-i-t-l-oracle-m4k-286901.pdf
5) Oracle Essbase: www.oracle.com/solutions/mid/oracle-hyperion-enterprise.html, as of 3/3/2011.
6) Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 3/6/2011.
7) Oracle's PeopleSoft Campus 9.0. For more information, please see http://www.oracle.com/us/solutions/benchmark/apps-benchmark/ps9-campus-9-ora-sun-sparc-solaris-166427.pdf
© 2011 Oracle Corporation – Proprietary and Confidential
74
M-Series ZERO Downtime Database Availability
Achieving Resiliency
• Sustain operations after
failure of
– Processor or memory chip,
system board
– System backplane
– Power source, supply or fan
– Network and storage connection
– Service processor
– Storage system components
• Fault and electrical isolation
between domains
• Guaranteed data path integrity –
network to disk
• Automatic self healing
– Service processor
– Storage system components
– Recover and retry failed instructions
– SRAM registers
– Correct double-bit memory failures
– Storage and network I/O path
• Oracle RAC and cluster support
© 2011 Oracle Corporation – Proprietary and Confidential
75
M-Series Scaling, Expansion and Upgrading
Deployment Longevity
• Oracle Solaris – proven scaling
– Over 144 CPUs since 2004
• Single system scaling to
– 4TB in a single NUMA memory
image
– 64 sockets, 256 cores, 512
simultaneous threads
– 737 GB/sec system
interconnect bandwidth
– 776 MB of cache
• Ease of upgrades
– 12+ year SPARC binary
compatibility guarantee
– 3rd in-chassis hardware
upgrade and counting
– Live reallocation of CPU,
memory and I/O resources
– Support for legacy software
stack on new hardware
• Broad industry support
– 3rd party applications
– 3rd party SAN and network
solutions
© 2011 Oracle Corporation – Proprietary and Confidential

Optimize solution for oracle db technical presentation

  • 1.
  • 2.
    <Insert Picture Here> OracleOptimized Solution for Oracle Database Mission Critical Systems Environments Technical Architecture Presentation Systems Solutions and Business Planning Group
  • 3.
    3 The following isintended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.
  • 4.
    4 Agenda • Introduction: Systemsfor Oracle Database • Key architectural components – Oracle M-Series – Oracle Solaris – Oracle FlashFire – Oracle Storage • Systems for Oracle Database – Using dynamic domains for non-stop database operations – Configuring high scale solutions on SPARC – Configuring storage for Oracle databases – Using FlashFire to reduce transaction times • Summary — for more information © 2011 Oracle Corporation – Proprietary and Confidential
  • 5.
    5 Small M5000 Pre-Sized Configurations Medium M5000 X-Large M9000 Large M8000  OracleFlash F5100 Storage Arrays  Flash Acceleration & Disk Storage For Mission Critical Oracle Environments* Network (VLAN) SAN Oracle Software Oracle Storage 6000 (pictured right center) Oracle Flash F20 PCI Cards (Not pictured) Oracle ZFS Storage Appliance (pictured bottom right) © 2011 Oracle Corporation – Proprietary and Confidential – Do Not Distribute *Mission Critical Solution for New and Legacy Oracle (9i/10g/11g) Databases which support various Business Critical Applications Oracle Optimized Solution for Oracle Database Enterprise Solutions for Business Critical Environments Oracle RAC/Oracle DataGuard/Oracle Solaris Cluster
  • 6.
    6 Oracle Optimized Solutionfor Oracle Database Value Proposition • Simple, live scaling across entire system – processors, memory, operating system, I/O, etc. • World Record Performance – PeopleSoft, TPC-H, App Server/11gR2, JD Edwards Workload Scaling • Predictive Self Healing – continue operations even in light of CPU, memory or I/O failures • Extensive non-disruptive service and upgrades Non-Stop Database Operation • Complete solution provider • One deployment organization • Single support organization Simplification Investment Protection • Extended system life — in-place upgrades vs. forklift strategy for competitors • Broad support for legacy software and hardware • Leading Total Cost of Ownership © 2011 Oracle Corporation – Proprietary and Confidential
  • 7.
    7 Agenda • Introduction: systemsfor Oracle Database • Key architectural components – Oracle M-Series – Oracle Solaris – Oracle FlashFire – Oracle Storage • Systems for Oracle Database – Using dynamic domains for non-stop database operations – Configuring high scale solutions on SPARC – Configuring storage for Oracle databases – Using FlashFire to reduce transaction times • Summary — for more information © 2011 Oracle Corporation – Proprietary and Confidential
  • 8.
    8 M8000 M5000 M9000 Sun Systems forOracle Database Infrastructure Over 20 Years of Joint Mission Critical Deployments Enterprise class platforms • Reliability, availability, serviceability, and security • Highly scalable (vertical, horizontal) • Flash optimized for business critical database performance acceleration © 2011 Oracle Corporation – Proprietary and Confidential
  • 9.
    9 Oracle SPARC EnterpriseM5000 Example Physical and Logical View • Almost everything redundant • However, Memory/CPU changes require system (node) outage © 2011 Oracle Corporation – Proprietary and Confidential
  • 10.
    10 SPARC Enterprise M5000Example Basic Specifications and Configuration • Typical configuration 2 SBs – XSB's with IOUs • Single domain recommended – Two domains: one Uni-XSB per each OK SPARC ENTERPRISE M5000 SERVER Enclosure • 10 rack units SPARC64 VI Processors • 2.15 GHz • 5 MB L2 cache • Up to 8 dual-core chips SPARC64 VII/VII+ Processors • 2.4 GHz with 5 MB L2 cache • 2.53 GHz with 5.5 MB L2 cache • 2.66 GHz with 11MB L2 cache* • Up to 8 quad-core chips Memory • Up to 512 GB • 64 DIMM slots Internal I/O Slots • 8 PCI Express • 2 PCI eXtended External I/O Chassis • Up to 4 units Internal Storage • Serial attached SCSI • Up to 4 hard drives Dynamic Domains • Up to 4 * For 11MB cache support, must have new MOBO_B (SC+ chip.) © 2011 Oracle Corporation – Proprietary and Confidential
  • 11.
    11 SPARC Enterprise M9000-32Example Specifications and Physical View SPARC ENTERPRISE M9000-32 SERVER Enclosure • One cabinet SPARC64 VI Processors • 2.28GHz with 5 MB L2 cache • 2.4 GHz with 6 MB L2 cache • Up to 32 dual-core chips SPARC64 VII/VII+ Processors • 2.52 GHz with 6 MB L2 cache • 2.88 GHz with 6 MB L2 cache • 3.0 GHz with 12 MB L2 cache* • Up to 32 quad-core chips Memory • Up to 2 TB • 256 DIMM slots Internal I/O Slots • 64 PCI Express External I/O Chassis • Up to 16 units Internal Storage • Serial Attached SCSI • Up to 32 drives Dynamic Domains • Up to 24 * For 12MB cache support, must have new CMU_C (SC+ chip.) © 2011 Oracle Corporation – Proprietary and Confidential
  • 12.
    12 Modes and MixedConfiguration of CPUs Domain 0 CMU#0 CMU#1 CMU#2 CMU#3 CMU mounted with VII/VII+ only CMU mounted with VI only CMU of mixed CPU configuration CMU of mixed CPU configuration : SPARC64 VII/VII+ processor : SPARC64 VI processor Domain 2 Domain 1 © 2011 Oracle Corporation – Proprietary and Confidential
  • 13.
    13 Implications of CPUMode and Dynamic Reconfiguration • To verify domain mode > on XSCF: showdomainmode > on domain: prtdiag • Set mode to compatible if there is a possibly of adding SPARC64 VI cpus to a domain that only has SPARC64 VII/VII+ Domain CPU Configuration Value of cpumode Current CPU Operational Mode CPU Configuration that can be added by DR Operation SPARC64 VII auto SPARC64 VII enhanced mode SPARC64 VII or VII+ SPARC64 VII/VII+ compatible SPARC64 VI compatibility mode Any CPU SPARC64 VI/VII/VII+ auto or compatible SPARC64 VI compatibility mode Any CPU SPARC64 VI Auto or compatible SPARC64 VI compatibility mode Any CPU © 2011 Oracle Corporation – Proprietary and Confidential
  • 14.
    14 Mixing VII+ ProcessorsWith VI or VII Processors • To achieve the 11MB or 12MB L2$ capacity, two conditions must be met: – All four processors on the system board must be SPARC64 VII+. None of the four can be either SPARC64 VI or SPARC64 VII. – The motherboard on the M4000/M5000 must be at least version MOBO_B, and the CMU on the M8000/M9000 must be at least version CMU_C • The new MOBO_B and CMU_C have the new SC+ chip, which will provide L2$ addressing up to 12MB. • When SPARC64 VII+ is set to half of its L2$, a message notifying this event, will be displayed and logged. © 2011 Oracle Corporation – Proprietary and Confidential
  • 15.
    15 Oracle SPARC EnterpriseM9000-32 Example RAS and Logical View 1–8 CMU/IOU Pairs • 8 CMUs max • CMU and IOU hard paired • Everything hot swap: CMU, IOU, XB, XSCF, Clock, etc. © 2011 Oracle Corporation – Proprietary and Confidential
  • 16.
    16 SPARC Enterprise M9000-32Example • 8 Uni-XSBs max – Quad-XSB shown not recommended • Grouped as single hard domain • Dynamic reconfiguration used to give fine grain upgrades and service 1–8 CMU/IOU Pairs © 2011 Oracle Corporation – Proprietary and Confidential
  • 17.
    17 Oracle Database 11gon M-series and Solaris Record-breaking Performance SPARC Enterprise M9000 (2.88 GHz) Supports database tier of SPECjAppServer2004 benchmark and beats IBM p595 with DB2 and HP Superdome 9000. Delivers breakthrough performance of 28,648.74 SPECjAppServer2004 JOPS@Standard on the SPECjAppServer2004 benchmark. with SPARC Enterprise T5440 servers at the application tier(2) . World Record SPARC Enterprise M9000 (3.0 GHz) Top TWO results for non-cluster Oracle Database 11g Decision Support result on TPC-H benchmark with performance of 386,478.3 QphH@3000GB. Beats POWER6-based IBM p595 with Sybase IQ by 2.5x(1) . SPARC Enterprise M4000/M5000 (2.53 GHz) M4000, running Oracle PeopleSoft N.A. Payroll 240K employees and Oracle Database 11g, accelerated by the Sun Storage F5100 Flash Array is 2.1x faster than IBM. М5000, running 500K employees, processed payroll 18% faster than IBM Z10-class mainframe with a list price of over $6M and defeated HP Itanium-based system(3) New World Record SPARC Enterprise M5000 (2.66 GHz) M5000 server configured with Oracle's Sun Storage F5100 Flash Array Ands running Oracle Database 11g Release 2 software supported a world record result on Oracle PeopleSoft Enterprise Financials 9.0 (4) . SPARC Enterprise M4000 (2.53 GHz) World Record World Record Best database hardware for Oracle PeopleSoft Enterprise Campus solutions. Oracle Solaris with Oracle Database 11g boosted by Sun Flash Accelerator F20 card delivered up to 40% improvement on batch jobs compared to Itanium-based HP rx6600 solution(7) . New World Record SPARC Enterprise M5000 (2.53 GHz) Posts the new world record on Oracle Hyperion Essbase ASO benchmark. Essbase is a component of Oracle Fusion Middleware that uses Oracle Database 11g to manage over one billion data items(5) . World Record SPARC Enterprise M3000 (2.53 GHz) M3000 server running at the database tier, enabled Oracle's SPARC T3-1 server running Oracle JD Edwards EnterpriseOne to post a record result of 5,000 users, with 0.523 seconds of average transaction time. This result beats IBM POWER7 result by 25%(6) . New World Record Results as of 03/22/11. Footnotes and required benchmark disclosures on slide 73. © 2011 Oracle Corporation – Proprietary and Confidential
  • 18.
    18 Agenda • Introduction: systemsfor Oracle Database • Key architectural components – Oracle M-Series – Oracle Solaris – Oracle FlashFire – Oracle Storage • Systems for Oracle Database – Using dynamic domains for non-stop database operations – Configuring high scale solutions on SPARC – Configuring storage for Oracle databases – Using FlashFire to reduce transaction times • Summary — for more information © 2011 Oracle Corporation – Proprietary and Confidential
  • 19.
    19 • Oracle isthe leader in Unix system scaling – best able to handle your workload growth – Family scales to 2x the number of sockets/cores/threads than competition 64/256/512 (M9000-64) – Oracle Solaris is the only OS that scales to 512 threads today and has been doing so for years • Every other OS will need a major update and shakeout time to handle the thread counts coming with the next generation of high core/high thread counts – Flexible and easy – just add boards • Vs. competition’s more complex pre-installation of boards and additional nodes – Dynamic reconfiguration lets you easily add new resources to operating database • Competitors’ offers are more complex, limiting and requires extra software costs Workload Scaling Deployment Longevity, Legacy Support Oracle Differentiator © 2011 Oracle Corporation – Proprietary and Confidential
  • 20.
    20 Investment Protection —TCO Support for Legacy Technologies • Oracle also preserves your other investments – 10+ year guaranteed binary compatibility – save time and money, load and run old applications without recompiling – Run legacy software on new hardware via Oracle Solaris Containers • Oracle Solaris 8 or 9, Oracle Database 9i,10g,11g, and older custom codes – Broad compatibility with installed non-Oracle technologies • SANs and networks • Applications • Management tools • ...many more Oracle Differentiator © 2011 Oracle Corporation – Proprietary and Confidential
  • 21.
    21 Links to competitivedetails in reference section. Non-Stop Database Operations Predictive Self Healing • Mitigating risk with unique features – Can operate after memory and chip failures – Guaranteed data path integrity – network to disk – Provides fault and electrical isolation between domains • Predictive Self Healing – detects and corrects multitude of system failures without service disruption, retries failed instructions – Includes processor or system boards, memory, all levels of cache, backplane, power source, power supply, fans, network and storage connections, service processor, storage components, etc. Oracle Differentiator © 2011 Oracle Corporation – Proprietary and Confidential
  • 22.
    22 • Simplify managementwith unique online service and upgrade features – Live HW addition or replacement (no reboot required) of system boards* and memory*, I/O modules • HP requires performance trade off for HA • IBM does not support live system hardware upgrades (hot swappable CPU/Memory boards) – Live expansion of operating database instance after adding CPUs, memory, I/O channels, storage capacity • Online service and upgrades also include – Repair and replacement of virtually all components – Database, firmware, microcode and Oracle Solaris updates – Live migration of databases between Oracle Solaris Containers Non-Stop Database Operations (cont.) Online Service and Upgrades *Hot swap system and memory boards in M8000/M9000 only.Oracle Differentiator © 2011 Oracle Corporation – Proprietary and Confidential
  • 23.
    23 Non-Disruptive Service, Repairand Upgrades Eliminate Planned Downtime • Live hardware expansion or replacement – System boards (CPUs and memory) – Power supplies and fans – Network and storage connections – Service processor – Storage system components • Live growth of an operating environment – CPUs, memory, I/O channels, storage capacity • Live upgrades of – Oracle Solaris – Firmware and microcode – Database • Live migration of Databases between Oracle Solaris Containers © 2011 Oracle Corporation – Proprietary and Confidential
  • 24.
    24 Investment Protection (andAvailability) Deployment Longevity, Legacy Support • Oracle saves you money and downtime by providing a longer system life – Designed for in-system upgrades (3 and counting for M-Series) vs. forklift replacements required by competitors • Oracle upgrades can be 1/5 the cost of IBM* – Add new processor speeds and generations to existing vs. 100% replacement required by IBM – New processors support existing I/O cards vs. replacement sometimes required by competitors – Extend system life by adding FlashFire to boost database performance by up 2–4x – In-system upgrades can be done non-disruptively, forklift replacements can’t *Links to competitive details in reference section.Oracle Differentiator © 2011 Oracle Corporation – Proprietary and Confidential
  • 25.
    25 Agenda • Introduction: systemsfor Oracle Database • Key architectural components – Oracle M-Series – Oracle Solaris – Oracle FlashFire – Oracle Storage • Systems for Oracle Database – Using dynamic domains for non-stop database operations – Configuring high scale solutions on SPARC – Configuring storage for Oracle databases – Using FlashFire to reduce transaction times • Summary — for more information © 2011 Oracle Corporation – Proprietary and Confidential
  • 26.
    26 Oracle’s FlashFire Technology Oracle’sFlash Accelerator F20 PCIe Card Oracle’s Storage F5100 Flash Array Oracle’s Flash Module © 2011 Oracle Corporation – Proprietary and Confidential
  • 27.
    27 Quick Intro toFlashFire • Based on Oracle Flash Modules – ‘Different’ than standard SSDs of others – Optimized for database acceleration • Unique RAS, environmental and performance characteristics Oracle’s Storage F5100 Flash Array © 2011 Oracle Corporation – Proprietary and Confidential
  • 28.
    28 World Record FlashPerformance Storage Performance Council SPC-1C • Oracle F5100 driven by M5000 vs. IBM EXP12s driven by P575 – Nearly 7x better performance • Delivered this in half the space of IBM – 2.7x better access density (IOPS/GB) – 2.5x better service times (better LRT and max recorded) – 3.9x better price/performance – 31% better $/GB SPC-1C, SPC-1C IOPS, and SPC-1C LRT are trademarks of Storage Performance Council (SPC). See http://www.storageperformance.org for more information. Sun Storage F5100 Flash Array SPC-1C submission identifier C00010 results of 300,873.47 SPC-1C IOPS over a total ASU capacity of 1374.390 GB using unprotected data protection, a SPC-1C LRT of 0.33 milliseconds, a 100% load over all ASU response time of 2.63 milliseconds and a total TSC price (including three-year maintenance) of $151,381. This compares with IBM System Storage EXP12S SPC-1C/E Submission identifier E00001 results of 45,000.20 SPC-1C IOPS over a total ASU capacity of 547.61 GB using unprotected data protection level, a SPC-1C LRT of 0.46 milliseconds, a 100% load over all ASU response time of 6.95 milliseconds and a total TSC price (including three-year maintenance) of $87,468.The Sun Storage F5100 Flash Array is a 1RU (1.75") array. The IBM System Storage EXP12S is a 2RU (3.5") array. © 2011 Oracle Corporation – Proprietary and Confidential
29
World Record FlashFire Price/Performance
Storage Performance Council SPC-1C
• Oracle F20 driven by X4270 M2 vs. IBM EXP12s driven by P570
– 9x better price/performance
• In the same space (2 RU for Oracle, including our workload server!)
– 6x better access density (IOPS/GB)
– 60% better performance
• At 1/5th the TSC price
– 50% better $/GB
SPC-1C, SPC-1C IOPS, and SPC-1C LRT are trademarks of Storage Performance Council (SPC). See http://www.storageperformance.org for more information. Sun Flash Accelerator F20 PCIe Card SPC-1C submission identifier C00011 results of 72,521.11 SPC-1C IOPS over a total ASU capacity of 147.413 GB using unprotected data protection, and a total TSC price (not including three-year maintenance) of $15,553.55. This compares with IBM System Storage EXP12S SPC-1C/E submission identifier E00001 results of 45,000.20 SPC-1C IOPS over a total ASU capacity of 547.61 GB using unprotected data protection and a total TSC price (including three-year maintenance) of $87,468. The Sun Fire X4270 M2 server with Sun Flash Accelerator F20 PCIe cards is a 2RU (3.5") server, while the IBM System Storage EXP12S is a 2RU (3.5") array.
© 2011 Oracle Corporation – Proprietary and Confidential
30
Agenda
• Introduction: systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
31
Oracle Storage 6000 Series
• Best value: 3 of the top 10 SPC-2 results
• Best in class data-rate
– 5.6 SPC-2 GB/sec for 6780
– 1.2 SPC-2 GB/sec for 6180
• Outstanding service times
– 160 μs SPC-1 write LRT for 6780
– 340 μs SPC-1 write LRT for 6180
© 2011 Oracle Corporation – Proprietary and Confidential
32
Sun Storage 6000: Data Rate Performance
• The SPC-2 benchmark results below show both the competitive advantage and generational improvement of the Sun Storage 6780 array configured with 8Gb fibre channel (FC) host interfaces and RAID 5 and RAID 6 data-protection schemes
– The Sun Storage 6780 delivered the best price/performance of any top ten SPC-2 performer. All systems that performed better had at least 6x higher tested storage configuration (TSC) prices
– The Sun Storage 6780 array delivered 58% better SPC-2 price/performance than the IBM DS5300 in both RAID 5 and RAID 6 configurations
– The Sun Storage 6780 array delivered nearly identical performance for both RAID 5 and RAID 6 configurations, showing only 1.6% less performance using double-parity data protection (RAID 6) vs. single parity (RAID 5)
– The Sun Storage 6780 array delivered 4x more SPC-2 MB/sec than the previous generation of Oracle's StorageTek 6540 array
– The Sun Storage 6780 array provides 1.7x better SPC-2 price/performance than the previous-generation StorageTek 6540 array
© 2011 Oracle Corporation – Proprietary and Confidential
33
Sun Storage 6000: Transactional Performance
• The SPC-1 benchmark results below show both the competitive advantage and generational improvement of the Sun Storage 6780 array configured with 8Gb fibre channel (FC) host interfaces
– The Sun Storage 6780 array delivered 2.5x better performance, 3x better price/performance, and over 2x better response times than the EMC CLARiiON CX3 Model 40
– The Sun Storage 6780 array delivered over 2x better SPC-1 price/performance than the IBM DS5300 on the SPC-1 benchmark
– The Sun Storage 6780 array delivered 34% more SPC-1 IOPS than the previous generation of Oracle's StorageTek 6540 array
– The Sun Storage 6780 array delivered a SPC-1 LRT of 1.78 milliseconds, which is 2.7x better than the StorageTek 6540 array's SPC-1 LRT of 4.82 milliseconds
© 2011 Oracle Corporation – Proprietary and Confidential
34
Sun Storage 6000: Best Practices
• Use the SAME strategy throughout for all disk storage
– Stripe And Mirror Everything
• Use ASM to manage the LUNs presented by the 6000 series
• For high insert rate OLTP, ensure redo logs have separate:
– Paths, including HBAs, switch ports and controller ports
– LUNs — make sure SAME storage is not mixed with other tables
– Sharing controllers is OK with read-heavy workloads: the 6x80 provides good cache management
© 2011 Oracle Corporation – Proprietary and Confidential
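[Editor's illustration] As a concrete sketch of the redo-separation practice above — disk group name and disk paths are hypothetical examples, not part of this solution's configuration — a dedicated ASM disk group for redo is created in the ASM instance, and the online logs are then placed there from the database instance:

    -- In the ASM instance: a redo-only disk group on its own LUNs/paths.
    -- EXTERNAL REDUNDANCY assumes the 6000-series array provides RAID protection.
    CREATE DISKGROUP redo_dg EXTERNAL REDUNDANCY
      DISK '/dev/rdsk/c4t60d0s6',
           '/dev/rdsk/c5t60d0s6';

    -- In the database instance: place an online redo log group in it.
    ALTER DATABASE ADD LOGFILE GROUP 4 ('+REDO_DG') SIZE 512M;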
35
Agenda
• Introduction: systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
36
SPARC Enterprise M-Series Servers
Dynamic Reconfiguration, Investment Protection, Choice
• Dynamic reconfiguration
– Add or remove system boards without Oracle instance downtime
• Database server size can be increased without a forklift approach
• M-Series extended system control facility (XSCF) management
– Access the system remotely
• Securely via SSH or SSL
– Dynamic Reconfiguration
• Add CPU boards
• Add memory boards
• Add I/O boards
Oracle Differentiator
© 2011 Oracle Corporation – Proprietary and Confidential
37
SPARC Enterprise M-Series Servers — Extended System Control Facility (XSCF) Command Line
XSCF> showhardconf
SPARC Enterprise M4000;
    + Serial:BCF0712005; Operator_Panel_Switch:Locked;
    + Power_Supply_System:Single; SCF-ID:XSCF#0;
    + System_Power:On; System_Phase:Cabinet Power On;
    Domain#0 Domain_Status:Running;
    MBU_A Status:Normal; Ver:0101h; Serial:BF064202NR;
        + FRU-Part-Number:541-0894-02;
        + Memory_Size:16 GB;
        CPUM#0-CHIP#0 Status:Normal; Ver:0201h; Serial:PP0631P640;
            + FRU-Part-Number:CA06761-D104 A3;
            + Freq:2.150 GHz; Type:16;
            + Core:2; Strand:2;
    <snip>
    Power_Status:On; AC:200 V;
XSCF>
XSCF provides a command line interface to manage and control the M-Series system via remote SSH or serial console
© 2011 Oracle Corporation – Proprietary and Confidential
38
SPARC Enterprise M-Series Servers — Extended System Control Facility (XSCF) Web Console
XSCF provides a web-based interface to manage and control the M-Series system via SSL
© 2011 Oracle Corporation – Proprietary and Confidential
39
Best Practices for Dynamic Reconfiguration with DB
[Diagram: crossbar (XB) connecting boards within Domain #0]
• Place all MCUs and IOUs in a single domain
– If multiple domains are required, ensure MCUs/IOUs that could be moved between domains have no I/O devices, for faster DR configuration
• Configure MCUs as Uni_XSB
– Quad_XSB is not a best practice, as it increases the number of dynamic reconfiguration steps
© 2011 Oracle Corporation – Proprietary and Confidential
40
Removing System Boards Live
For Reassignment or Replacement
• No reboot required: modest changes to DB CPU OK with instance restart
• Dynamic Intimate Shared Memory allows adds/removes/deletes for Oracle memory
– But take care in Oracle memory sizing to ensure no instance restart is required
© 2011 Oracle Corporation – Proprietary and Confidential
41
Adding System Boards Live
For Reassignment or Upgrades
[Diagram: CMU#0–CMU#3 across Domains 0–2 — one CMU with SPARC64 VII only, one with SPARC64 VI only, and two with mixed CPU configurations]
• No reboot required: modest changes to DB CPU OK without instance restart
• Dynamic Intimate Shared Memory allows adds/removes/deletes for Oracle memory
– But take care in Oracle memory sizing to ensure no instance restart is required
• Mixed memory and CPUs allowed
– Best practice is to have the same memory type within each SB
© 2011 Oracle Corporation – Proprietary and Confidential
42
Agenda
• Introduction: systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
43
RAC for Non-Stop Operation
For Small and Medium Configurations
[Diagram: two-node M5000 RAC example — shared storage and SAN, memory coherence between nodes]
• Straightforward design with No Single Point of Failure (NSPF)
• Most of the server, and all of the SAN and storage, is hot-swappable
• But RAC is needed to ensure NSPF for CPU/memory swaps
© 2011 Oracle Corporation – Proprietary and Confidential
44
SGA, ISM and Non-Stop Operation
• Intimate Shared Memory — performance benefits
– Locked: no swapping, no mutexes
– Saves kernel CPU and memory resources
– Single cache for all Oracle processes, IPC
• But cannot be resized
– So take care with Dynamic Reconfiguration
[Diagram: shared processes attached to locked memory]
© 2011 Oracle Corporation – Proprietary and Confidential
45
Dynamic Intimate Shared Memory
• Gives nearly all the performance benefits of ISM
– Also helps NUMA MPO
• But CAN be resized
– Much greater flexibility with Dynamic Reconfiguration
– Allows dynamic resizing of the SGA
• If no DR or dynamic SGA sizing is needed, use ISM.
[Diagram: shared processes attached to locked memory, with resize capability]
© 2011 Oracle Corporation – Proprietary and Confidential
46
No Reboot or Instance Restart!
Oracle Database + RAC + ASM
+ M-Series RAS
+ XSCF & Dynamic Reconfiguration
+ Solaris Optimizations (i.e., DISM)
= NON-STOP OPERATIONS
• Set SGA_MAX_SIZE carefully — it is only read at Oracle instance restart!
• IBM requires a full reboot, not just an instance restart!
© 2011 Oracle Corporation – Proprietary and Confidential
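[Editor's illustration] To make the SGA_MAX_SIZE guidance concrete, here is a minimal sketch (the sizes are illustrative assumptions, not sizing guidance). On Oracle Solaris, Oracle uses DISM rather than ISM when SGA_MAX_SIZE is set larger than the current SGA size, which leaves headroom to grow the SGA after boards are added:

    -- SGA_MAX_SIZE is only read at instance restart, so set it with headroom.
    ALTER SYSTEM SET sga_max_size = 64G SCOPE = SPFILE;
    ALTER SYSTEM SET sga_target   = 32G SCOPE = BOTH;   -- current SGA size

    -- Later, after Dynamic Reconfiguration adds memory boards, grow the SGA
    -- online, with no reboot or instance restart:
    ALTER SYSTEM SET sga_target = 48G SCOPE = BOTH;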
47
Configuration Rule Best Practices
• For mid-range systems, configure each server as a RAC node instance
– Each node defines the availability granularity level
• For high-end systems, use the M9000-32, or RAC multiples of this system, for best availability and performance
• Use Dynamic Intimate Shared Memory (DISM) if Dynamic Reconfiguration will be used
© 2011 Oracle Corporation – Proprietary and Confidential
48
Agenda
• Introduction: systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
49
Do You Have a Database I/O Bottleneck?
Using ADDM / AWR / Statspack
• Statspack: free PL/SQL package, downloadable since Oracle 8.1.7
• AWR: since 10g
• Use SWAT to determine what is causing the waits.

Top 5 Timed Events
                                                      Avg wait  %Total Call
Event                          Waits      Time (s)        (ms)         Time  Wait Class
-----------------------------  ----------  --------  ---------  -----------  ----------
db file sequential read        19,858,182    72,997          4         41.0  User I/O
CPU time                                      55,805                    31.4
log file sync                   3,840,570    33,452          9         18.8  Commit
log file parallel write         3,356,001    12,749          4          7.2  System I/O
db file scattered read          3,672,892    10,018          3          5.6  User I/O
© 2011 Oracle Corporation – Proprietary and Confidential
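[Editor's illustration] If a full Statspack/AWR report is not at hand, a rough version of the same "top events" view can be pulled straight from the dynamic performance views. A minimal sketch (10g or later, since it relies on WAIT_CLASS):

    -- Top non-idle wait events since instance startup.
    -- TIME_WAITED is reported in centiseconds, hence the /100.
    SELECT *
    FROM  (SELECT event,
                  total_waits,
                  time_waited / 100 AS time_waited_sec,
                  wait_class
           FROM   v$system_event
           WHERE  wait_class <> 'Idle'
           ORDER  BY time_waited DESC)
    WHERE ROWNUM <= 5;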
50
Database I/O Bottlenecks: Wait Events
• Typical I/O wait types, foreground
– db file sequential read: disk to database buffer cache wait
– db file scattered read: wait for a multi-block read into the buffer cache
– read by other session: another session waiting for a block above
– direct path read: read bypassing the buffer cache, directly into the PGA
• Typical I/O wait types, background
– log file parallel write: write of log data (typically to NVRAM) from LGWR
– db file parallel write: asynchronous writes to data files from DBWR
– log file sequential read: reads to build an archive log, DataGuard
– Log archive I/O, RMAN, etc.
© 2011 Oracle Corporation – Proprietary and Confidential
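[Editor's illustration] Once the dominant wait event is known, the next question is which datafiles are driving it. A minimal sketch of a per-file check (the millisecond arithmetic assumes READTIM is in centiseconds, as documented for v$filestat):

    -- Datafiles ranked by cumulative read time; high avg_read_ms files are
    -- candidates for Flash placement.
    SELECT d.name,
           f.phyrds,
           f.phywrts,
           ROUND(f.readtim * 10 / NULLIF(f.phyrds, 0), 1) AS avg_read_ms
    FROM   v$filestat f
           JOIN v$datafile d ON d.file# = f.file#
    ORDER  BY f.readtim DESC;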
51
Typical Storage Bottlenecks
• Maximum IOPS delivered
– Talked about the most, but least important for enterprise apps
– Really measures concurrency
• Maximum data rate delivered
– Really measures channel and disk bandwidth
• Shortest service time delivered
– Usually the most important for databases
• All are dependent on the I/O workload
– Read-write mix
– Transfer/block size
– 'Sequentiality'/randomness
[Diagram: demand vs. supply — IOPS, MB/sec, milliseconds]
© 2011 Oracle Corporation – Proprietary and Confidential
52
Storage I/O Interconnect Template (Small/Medium)
[Diagram: server PCIe x8 slots 0–4 populated with FC HBAs, Flash, and 10GE/IB NICs, connected through dual 24-port 8Gb FC switches to 6180 FC array controllers A and B with CSM2 expansion trays; "RDAC" multipath I/O]
© 2011 Oracle Corporation – Proprietary and Confidential
53
Storage I/O Interconnect Template (Large/XLarge)
[Diagram: server PCIe x8 slots 0–5 (slots 6 & 7 unused) populated with FC HBAs, SAS, and 10GE/IB NICs plus base I/O, connected through dual 24-port 8Gb FC switches to 6780 FC array controllers A and B with CSM2 expansion trays; "RDAC" multipath I/O]
© 2011 Oracle Corporation – Proprietary and Confidential
54
What is Oracle ASM?
[Diagram: traditional stack (Oracle Database → File System → Logical Volume Manager → Operating System → Hardware) vs. ASM stack (Oracle Database → ASM File System & Volume Management → Operating System → Hardware)]
• With Oracle 10g/11g, ASM provides
– The management simplicity of a file system
– Performance equal to raw disks
– The cluster file system required for RAC
– Reduced storage product and management costs
© 2011 Oracle Corporation – Proprietary and Confidential
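[Editor's illustration] In practice, pointing the database at a disk group is nearly all the file management ASM requires. A minimal sketch, where '+DATA' is an assumed disk group name:

    ALTER SYSTEM SET db_create_file_dest = '+DATA';

    -- Datafiles created from here on are automatically named, placed and
    -- striped across the disk group; no volume manager or file system needed.
    CREATE TABLESPACE app_data DATAFILE SIZE 10G;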
55
Mission Critical Requires "Always Online"
ASM Re-Balancing
• Automatic online rebalance whenever the storage configuration changes
[Diagram: disk added to a disk group, followed by an automatic rebalance across all disks]
© 2011 Oracle Corporation – Proprietary and Confidential
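[Editor's illustration] A minimal sketch of what triggers and monitors such a rebalance (disk group name, disk path and power level are illustrative):

    -- Adding a disk starts an automatic online rebalance; POWER controls
    -- how aggressively extents are moved (higher = faster, more I/O).
    ALTER DISKGROUP data
      ADD DISK '/dev/rdsk/c6t4d0s6'
      REBALANCE POWER 4;

    -- Watch progress from the ASM instance.
    SELECT operation, state, power, est_minutes
    FROM   v$asm_operation;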
56
OSB Architecture Overview
© 2011 Oracle Corporation – Proprietary and Confidential
57
Backup I/O Interconnect Template (Large Example)
[Diagram: database server PCIe x8 slots 0–5 (slots 6 & 7 unused) with FC HBAs, SAS and QDR IB plus base I/O, connected over IPoIB-CM through dual 36-port 40Gb QDR IB switches to T3-1 media servers, which drive tape drives via FC HBAs]
© 2011 Oracle Corporation – Proprietary and Confidential
58
Agenda
• Introduction: systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
59
Four FlashFire Deployment Practices
• 11gR2 Database Flash Cache – single node
• 11gR2 Database Flash Cache – RAC
• Use of Flash disk groups (ASM recommended)
– Proven to work very well with previous Oracle Database versions
– Single instance only
• Combination of Flash Cache and Flash disk groups
– Single instance only
© 2011 Oracle Corporation – Proprietary and Confidential
60
11gR2 Database Flash Cache
• Acts as an extension of the SGA buffer cache
• Reduces physical read I/Os
– Converts them to logical I/O in the DB
• Principally accelerates read-intensive workloads
[Diagram: buffer cache alone driving many I/Os to storage vs. buffer cache plus Database Flash Cache driving few I/Os to storage]
© 2011 Oracle Corporation – Proprietary and Confidential
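[Editor's illustration] A minimal sketch of enabling the flash cache (the device path is an assumed aggregation of F20/F5100 flash modules; db_flash_cache_file takes effect only at instance restart):

    ALTER SYSTEM SET db_flash_cache_file = '/dev/md/rdsk/d100' SCOPE = SPFILE;
    ALTER SYSTEM SET db_flash_cache_size = 96G SCOPE = SPFILE;

    -- After the restart, the cache can be disabled and re-enabled dynamically
    -- by setting the size to 0 and back to its original value; other dynamic
    -- size changes are not supported.
    ALTER SYSTEM SET db_flash_cache_size = 0;
    ALTER SYSTEM SET db_flash_cache_size = 96G;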
61
Flash Cache Acceleration
[Chart: flash cache on vs. off]
• 5x better transaction times
• 5x better transaction rates
• 3x better power efficiency than HDD
© 2011 Oracle Corporation – Proprietary and Confidential
62
Flash Disk Group Configuration
[Diagram (logical view): M-Series server connected via 8Gb FC x8 and SAS x4 links to mirrored F5100 Flash Arrays]
• ASM normal redundancy (flash modules mirrored)
• Failure groups across SAS domains
– Across chassis (shown) is even better
© 2011 Oracle Corporation – Proprietary and Confidential
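[Editor's illustration] A minimal sketch of such a disk group (disk group name, failgroup names and paths are illustrative): each failure group maps to one SAS domain or chassis, so ASM mirrors every extent across that fault boundary.

    CREATE DISKGROUP flash_dg NORMAL REDUNDANCY
      FAILGROUP sas_domain_a DISK '/dev/rdsk/c7t0d0s6',
                                  '/dev/rdsk/c7t1d0s6'
      FAILGROUP sas_domain_b DISK '/dev/rdsk/c8t0d0s6',
                                  '/dev/rdsk/c8t1d0s6';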
63
Oracle's Sun Storage F5100 Flash Array — Database Accelerator and Flash Cache Target
• Large production database system performance increased with the database on Flash
– A production database of over 200 million objects was placed on Flash and yielded a 5x performance improvement over the existing system on both
• Oracle 10gR2 and Oracle 11gR2
• In addition, used as a target for Flash Cache with Oracle 11gR2
– Increases system efficiency even further
• 2x to 5x speedup with the database on hard disk drives
• 3x additional speedup with the database on Flash, beyond the gains from moving the database to Flash in the first place!
• Focuses on two important metrics
– Database response times due to I/O latency, and database bandwidth throughput
Oracle Differentiator
© 2011 Oracle Corporation – Proprietary and Confidential
64
F5100 Flash Array Improves I/O Performance
Oracle FlashFire Technology Reduces Database Latency
[Chart: accelerating a large production Oracle database — improving I/O response time by 100x; use of Flash provides better response time and increased system efficiency]
© 2011 Oracle Corporation – Proprietary and Confidential
65
F5100 Flash Array Increases Database Productivity
Oracle FlashFire Technology Increases Database Throughput
[Chart: accelerating a large production Oracle database — increasing data I/O bandwidth by 10x; use of Flash yields higher MB/s and increased system efficiency]
© 2011 Oracle Corporation – Proprietary and Confidential
66
Oracle's Sun Storage F5100 Flash Array – Adding Flash Cache Target and Dynamically Changing Settings
© 2011 Oracle Corporation – Proprietary and Confidential
67
Agenda
• Introduction: systems for Oracle Database
• Key architectural components
– Oracle M-Series
– Oracle Solaris
– Oracle FlashFire
– Oracle Storage
• Systems for Oracle Database
– Using dynamic domains for non-stop database operations
– Configuring high scale solutions on SPARC
– Configuring storage for Oracle databases
– Using FlashFire to reduce transaction times
• Summary — for more information
© 2011 Oracle Corporation – Proprietary and Confidential
68
Next Steps
• Core content in the solution
– Systems Site (main), Systems Site (solutions)
– Main pages for SPARC, Flash, Disk Storage, Unified Storage, Oracle 11g, OEM Ops Center, Oracle Solaris and Oracle Solaris Cluster, StorageTek Workload Analysis Tool (SWAT)
• Oracle customer successes
– For all servers (for Oracle Database and M-Series, see Kookmin Bank, ETSA Utilities, Ricoh Company Ltd., StubHub and others)
• Services information
– Advanced Customer Services for servers and storage
Contact your local Oracle sales office for an assessment of how we can help your organization
© 2011 Oracle Corporation – Proprietary and Confidential
69
Resources (Additional)
Select technical content
• M-Series Architecture Paper
• High Availability Using M-Series paper
• External Benchmarks
• Oracle Enterprise Manager Ops Center: Changing the Economics of Datacenter Operations
• Sun Systems Handbook
• Brief info on Oracle database Flash Recovery Area
• StorageTek Workload Analysis Tool
• Dynamic SGA Tuning of Oracle Database on Oracle Solaris with DISM
• Oracle Optimized Solution for Oracle Database: Storage Best Practices
• Oracle Optimized Solution for Oracle Secure Backup
• Oracle Solaris: Internal and OTN
• Oracle Solaris Cluster: Internal and External
© 2011 Oracle Corporation – Proprietary and Confidential
70
© 2011 Oracle Corporation – Proprietary and Confidential
71
© 2011 Oracle Corporation – Proprietary and Confidential
72
<Insert Picture Here>
Appendix
© 2011 Oracle Corporation – Proprietary and Confidential
73
Required Substantiation for Benchmarks
1) TPC-H, QphH, $/QphH tm of Transaction Processing Performance Council (TPC). More info: www.tpc.org. TPC-H@3000GB as of 3/22/2011. Sun SPARC Enterprise M9000 server: 386,478.3 QphH@3000GB, $19.25/QphH@3000GB, available 09/20/2011 (world record non-clustered TPC-H 3TB result). Sun SPARC Enterprise M9000 server: 198,907.5 QphH@3000GB, $16.58/QphH@3000GB, available 12/09/2010. IBM POWER 595 Model 9119-FHA: 156,537.3 QphH@3000GB, $20.60/QphH@3000GB, available 11/24/2009 (best IBM TPC-H 3000GB performance (QphH) and price/performance ($/QphH) result).
2) SPEC, SPECjAppServer are registered trademarks of Standard Performance Evaluation Corporation. Results as of March 7, 2011. Source: www.spec.org. SPECjAppServer2004. App. tier: Sun SPARC Enterprise T5440 cluster (20 chips, 160 cores), 28,648.74 SPECjAppServer2004 JOPS@Standard; DB tier: Sun SPARC Enterprise M9000. App. tier: HP BL870c cluster (68 chips, 136 cores), 28,463.03 SPECjAppServer2004 JOPS@Standard; DB tier: HP Superdome 9000. App. tier: IBM HS21 cluster (32 chips, 128 cores), 22,634.13 SPECjAppServer2004 JOPS@Standard; DB tier: IBM p595.
3) Oracle's PeopleSoft Payroll NA 9.0. Sun SPARC Enterprise M4000 (4x 2.53GHz SPARC64): 43.78 min; IBM Z990 (6 gen1): 91.70 min; HP rx6600 (4x 1.6GHz Itanium2): 68.07 min. Oracle's PeopleSoft Payroll NA 9.0. Sun SPARC Enterprise M5000 (8x 2.53GHz SPARC64 VII): 50.11 min; IBM z10 (9 gen1): 58.96 min; HP rx7640 (8x 1.6GHz Itanium2): 96.17 min. www.oracle.com/apps_benchmark/html/white-papers-peoplesoft.html
4) Oracle's PeopleSoft Financials 9.0. SPARC T3-1 (1x 1.65GHz SPARC-T3), Oracle's SPARC Enterprise M5000 (8x 2.53GHz SPARC64): 38.66 min. http://www.oracle.com/us/solutions/benchmark/apps-benchmark/ora-fin-d-i-t-l-oracle-m4k-286901.pdf
5) Oracle Essbase: www.oracle.com/solutions/mid/oracle-hyperion-enterprise.html, as of 3/3/2011.
6) Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 3/6/2011.
7) Oracle's PeopleSoft Campus 9.0. For more information, please see http://www.oracle.com/us/solutions/benchmark/apps-benchmark/ps9-campus-9-ora-sun-sparc-solaris-166427.pdf
© 2011 Oracle Corporation – Proprietary and Confidential
74
M-Series ZERO Downtime Database Availability
Achieving Resiliency
• Sustain operations after failure of
– Processor or memory chip, system board
– System backplane
– Power source, supply or fan
– Network and storage connection
– Service processor
– Storage system components
• Fault and electrical isolation between domains
• Guaranteed data path integrity – network to disk
• Automatic self healing of
– Service processor
– Storage system components
– Recover and retry failed instructions
– SRAM registers
– Correct double bit memory failures
– Storage and network I/O path
• Oracle RAC and cluster support
© 2011 Oracle Corporation – Proprietary and Confidential
75
M-Series Scaling, Expansion and Upgrading
Deployment Longevity
• Oracle Solaris – proven scaling
– Over 144 CPUs since 2004
• Single system scaling to
– 4TB in a single NUMA memory image
– 64 sockets, 256 cores, 512 simultaneous threads
– 737 GB/sec system interconnect bandwidth
– 776 MB of cache
• Ease of upgrades
– 12+ year SPARC binary compatibility guarantee
– 3rd in-chassis hardware upgrade and counting
– Live reallocation of CPU, memory and I/O resources
– Support for legacy software stack on new hardware
• Broad industry support
– 3rd party applications
– 3rd party SAN and network solutions
© 2011 Oracle Corporation – Proprietary and Confidential

Editor's Notes

  • #3 Systems for Mission Critical Database Solution developed and managed by: Ken Kutzer, Sean R Walsh, Larry McIntosh, Randal Sagrillo Of Systems Solutions and Business Planning Group, Oracle Hardware BU
  • #5 This presentation is designed to give the audience some background on the Systems for Oracle Database: what they are, and Oracle's relevant technical differentiation as applied to systems for the database, both as background on the unique technologies Oracle offers and as some of the underlying evidence of their superiority, and finally the recommended practices that allow deployments to achieve high levels of non-stop operation, scaling, investment protection and simplicity.
  • #6 Enterprise Mission Critical: reliability, availability, serviceability, and security. Highly scalable (vertical, horizontal). Flash optimized for business critical database performance acceleration. COMPLETE: solutions infrastructure is the pathway to achieving next generation performance by taking a "holistic view" of the IT environment, from app to disk, systems with software, enterprise services and support. DBMS application & platform performance from optimization at the "best" place, for ease of deployment and management, and for best price/performance and flexibility for changing business needs. Remove customer pain with coordinated patching & upgrades. Provide integrated offerings: Database + Applications + Servers + Flash + Disk + OS + Software Infrastructure + services + and more…. The proven performance and reliability of SPARC systems relies on hardware features and the strong integration of the operating system. The Solaris operating system is a 20 year evolution with Oracle. There is a broad range of unique features designed to lower costs in the datacenter by preventing unplanned downtime, optimizing performance, and preventing security breaches. For example, unplanned downtime costs an average of $42K per hour or about 3.6% of revenue (16% for financial services). The average security breach cost $6.6M in 2008.
  • #9 Innovation matters and that is what the SPARC Enterprise products deliver, making them #1 in the industry. Our focus here is the Large Mission Critical Oracle Database deployments, thus the M-Series is the starting point in the Sun + Oracle portfolio. M-Series = SMP, Large Mission critical, Global Single instance, DSS-OLTP… T-Series = middle tier, Application tier, web tier, RDBMS… The Solaris operating system, an integral part of the system, demonstrates leadership as the #1 OS shipped on servers and the #1 OS for Oracle deployments. This isn’t hard to believe once you look at the Solaris OS support. Solaris has the #1 application portfolio with 8x more ISVs supported than IBM AIX and 2x more than HP-UX. All of this results in the leading performance in key enterprise benchmarks, including OLTP (tpc-c), BIEE (Oracle BIEE), SAP ERP (sap sd 2-tier), Web(specweb), Mail(specmail), Peoplesoft Payroll(peoplesoft payroll), CRM
  • #14 This is mainly a heads up for customers that have existing systems
  • #15 While new systems will have the SC+, existing customers may want to upgrade their motherboards on the M4000/M5000 so they can fully utilize the 11MB of L2$.
  • #18 Key Points: IBM will continue to tell your customers that IBM POWER7 is much faster than SPARC. They may do so by asking what CPU is current and what Oracle is offering. Avoid head-to-head CPU comparisons. Faster CPUs do not mean better customer performance or more operational value (lower acquisition and ownership costs over the life of the solution). Many think that IBM will win all benchmarks, but that is not true (above). An integrated stack provides proven, world-record-winning value.
  • #20 Socket, thread and core counts based on public web site information for P-Series AIX and HP Superdome HP-UX products. * Source: IBM AIX 6.1 datasheet, April 2010 > Linux supports up to 256 threads, HP-UX supports up to 256 threads. ** Future Superdome2 configs = 16-socket in 2010, then will support 32-socket in 2011; no other configurations officially announced. Details on the extra reconfiguration steps and restrictions on IBM upgrades, and the extra licensing required, may be found here: http://my.oracle.com/portal/pls/myo/ats_urldeeplink.loaditem?p_masterthingid=84734658&p_siteid=1
  • #21 Sun offered binary compatibility for over ten years, IBM started in ~2004 (it is rumored that AIX 7 may break that). Support for legacy Solaris and databases are core feature of Solaris Containers.
  • #22 Unique to M-Series are Memory Mirroring, Cache Degraded Mode and dual SP. Of course M-series also supports instruction retry. Core retirement, memory page retirement, redundant power supplies, cooling and many, many others. Data on competition based on internal analysis from information publicly available on IBM and HP products
  • #23 IBM - Related to replacement, is online adding. IBM talks about hot-node add (and removal, although let’s focus on adding since that is done online while removal usually involves the previous process as the node is shut down). If you want to add any resources to a POWER server, you have to make sure you are running either a p 570/560, Power 770/780. Those are the only models that support this feature, and that is because IBM does not hot-add a cellboard, it either activates resources that are already housed in the system (capacity on demand, which we have as well) or you must add an entire node. Now if you already have a full complement of nodes in your 770/780 and you didn’t already have the max of 4 cellboards pre-installed in each node, you’ll have to follow Live Partition Mobility (LPM) procedure if you want to keep your workloads running. IBM assumes you do, and this is why you just use COD or add another node. While adding a node is straightforward enough, there are some gotchas here about how you must reserve rackspace if you intend to grow your system (or move things around if you change your mind later) and cabling can be complex depending on the configuration. Also don’t forget that IBM recommends you migrate running workloads first because “changing the hardware configuration or the operational state of electronic equipment may cause unforeseen impacts to the system status or running applications”, per IBM* *Source p.15: http://www.ibm.com/developerworks/wikis/download/attachments/53871900/Implementing+CEC+Concurrent+Maintenance.pdf?version=1 HP - HP Superdome UGUY board (service processor, system clocks, power monitor, etc.) is listed as a known Single Point Of Failure. There is no mention of this being explicitly addressed in Superdome2. Source: HP-UX 11iv3 Dynamic nPartitions: Features and Configuration Recommendations, Feb 2008
  • #25 The value of M-Series investment protection becomes evident when you only need to upgrade the CPUs and not even all the CPU boards if you choose to keep some workloads on the existing environment. In this example, the IBM p5 570 configuration and M5000 start at similar performance levels. IBM forces an interim upgrade to POWER6, since there is no official MES path to POWER7 from POWER5/5+ — this means customers would give up their MES upgrade support and return credits with a straight box-swap path from P5 to P7. The end configuration is similar in capacity and system specs to an M5000. If we consider benchmark performance, even multiple M5000’s would still cost less on a system and support basis. *Sun SPARC M5000 with SPARC 64 VI 2.15 GHz 8-socket/16-core = $123,800 + 4x Sun SPARC M5000 boards with SPARC64 VII 2.53GHz/2ch/8co = $23,400 each IBM P5 570 = 4-socket/8-core 2.2GHz in 2 CEC drawers = $325,650 + $3,800 HMC = $329,450 IBM p 570 4-socket/8-core 4.2GHz in 2 CEC drawers = $451,639 minus the following re-usable components: -$8,908 I/O -$2920 rack -$3800 HMC Total subtractions = -$15,628 IBM Power 770 4-socket/32-core 3.1GHz in 2 CEC drawers = $298,753, minus the following re-usable components: -$1,398 I/O -$2920 rack -$1830 HMC Total subtractions = -$6,148 IBM P5 570 = 28k SAPS 2.2GHz/8-socket (14k) IBM P6 570 = 40k SAPS 4.7GHz/8-socket (20k) IBM Power 770 = No result (only Power 780 3.8GHz 8-socket result) M5000 = No result (only M9000) Hardware feature comparison based on publically available product documentation. Up to 2X performance boost by add Flash to a system — data available in public blog here — http://blogs.sun.com/BestPerf/entry/oracle_flash_cache_sga_caching
  • #29 The SPC-1C storage-industry benchmark demonstrates the performance of a storage product while it performs the typical functions of business-critical applications. These are characterized predominately by random read and write I/O operations. Examples include OLTP, database operations, and mail-server implementations. SPC-1C utilizes an identical workload as SPC-1, but limits results to storage configurations consisting of one or more HBAs/Controllers, and one of the following storage device configurations: - One, two, or four storage devices in a standalone configuration. An external enclosure may be used, but only to provide power and/or connectivity for the storage devices. - A small storage subsystem configured in no larger than a 4U enclosure profile (1 4U, 2 2U, 4 1U, etc.). As SPC-1C does not require data protection, it better matches characteristics of Flash Cache deployments.
  • #30 Service times are about the same
  • #32 SPC-2 consists of three distinct workloads designed to demonstrate the performance of a storage subsystem during the execution of business critical applications that require the large-scale, sequential movement of data. Those applications are characterized predominately by large I/Os organized into one or more concurrent sequential patterns. A description of each of the three SPC-2 workloads is listed below as well as examples of applications characterized by each workload. * Large File Processing: Applications in a wide range of fields, which require simple sequential process of one or more large files such as scientific computing and large- scale financial processing. * Large Database Queries: Applications that involve scans or joins of large relational tables, such as those performed for data mining or business intelligence. * Video on Demand: Applications that provide individualized video entertainment to a community of subscribers by drawing from a digital film library.
  • #37 Oracle offers tremendous ROI relative to this very unique feature. It leverages Oracle Dynamic SGA and Solaris DISM, a 5th generation feature on SPARC Enterprise Servers. It offers mixed processor speed support: the Sun M-Series supports mixed speed and mixed generation CPUs, allows customers to implement rolling CPU upgrades without impacting Oracle instance uptime, and delivers greater ROI because customers are not required to upgrade all CPUs. The Extended System Control Facility (XSCF) is detailed further on the next slide — commencing with both a command line mode and a browser mode of operation — it is very secure to use with both SSH or SSL remote access.
  • #38 There is an Extended System Control Facility (XSCF) that can be accessed very securely, remotely via SSH or serial console, and can be utilized in command line mode for operational management of the M-Series system. The XSCF has many different commands one can use to carve up the system with the dynamic support one needs. One can add and remove CPU or memory boards in the various domains set up on the M-Series systems. In addition, XSCF will monitor the complex and report failures that may occur, such as memory DIMM failures. The following is an example of the M-Series dealing with an uncorrectable fault on a memory DIMM: the memory is configured offline, and the problem is reported and tracked so that the memory can be replaced later, with service coming in while the system still runs and the DB staying up as a goal, provided the correct memory management parameter settings are specified. Here is a real life event that occurred in the past; even though it occurred on an M5K, the data captured and the overall interface on the M8K/M9K are the same, but with the ability to further isolate the domains as well as offer further redundant components for the customer:
Error Log — show detail log
Date: Apr 21 13:55:41 PDT 2010
Code: 6000e000-a61a0000-0200140d00000000
Status: Warning
Occurred: Apr 21 13:55:39.233 PDT 2010
FRU: /MBU_B/MEMB#1/MEM#1B
Message: Permanent memory error
Diagnostic Code: 00010e00 00000000 0000000000000000 00000000 00000000 0000000000000000 00000000 00000000 00000000
UUID: 8ae80a53-d656-44f0-bb00-92a94ad152af
MSG-ID: SCF-8001-4X
Diagnostic message, using the XSCF command line:
XSCF> fmdump -m
MSG-ID: SCF-8001-4X, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Wed Apr 21 13:55:07 PDT 2010
PLATFORM: SPARC Enterprise M5000, CSN: BCF073705U, HOSTNAME: electro-sp
SOURCE: sde, REV: 1.16
EVENT-ID: a562918d-9db5-4e58-be87-0de63a1be1f1
DESC: The number of uncorrectable and correctable errors on a single DIMM exceeds an acceptable threshold. This fault is detected while running POST. Refer to http://www.sun.com/msg/SCF-8001-4X for more information.
AUTO-RESPONSE: The memory associated with the memory bank containing the errors is deconfigured.
IMPACT: POST is restarted after the memory associated with the memory bank has been deconfigured.
REC-ACTION: Schedule a repair action to replace the affected Field Replaceable Unit (FRU), the identity of which can be determined using fmdump -v -u EVENT_ID. Please consult the detail section of the knowledge article for additional information.
XSCF> fmdump -v -u a562918d-9db5-4e58-be87-0de63a1be1f1
TIME UUID MSG-ID
Apr 21 13:55:07.3115 a562918d-9db5-4e58-be87-0de63a1be1f1 SCF-8001-4X
100% fault.chassis.SPARC-Enterprise.memory.bank.err
Problem in: hc:///chassis=0/cmu=0/mem=14
Affects: hc:///chassis=0/cmu=0/mem=14
FRU: hc://:product-id=SPARC Enterprise M5000:chassis-id=BCF073705U:server-id=electro-sp:serial=43225725:part=M393T2950CZ3-CD5:revision=3343/component=/MBU_B/MEMB#1/MEM#1B
Location: /MBU_B/MEMB#1/MEM#1B
The best reference for this type of detail is the XSCF User's Guide.
  • #39 Oracle’s Sun M-Series Servers have a very nice browser user interface (BUI) that is supported via SSL access. One can accomplish many of the things that they can in command line mode but with the simplicity of a browser session. One has full control to power on and off domains, dynamically reconfigure the system and check error conditions that XSCF has acted upon by looking at the system logs.
  • #44 In these database solutions, we assume the databases are consolidated and any RAC nodes are active-active. That is, databases are assignable to each RAC node and managed by EM for CPU and memory resource allocation. Or that a single large DB workload is readily partition-able across RAC nodes. If not and database performance is key, customers will need to move to larger solution configurations (Large or Extra Large.) As such, having memory coherence on Gigabit Ethernet is not an issue for performance. RAC has outstanding shared storage abilities between RAC nodes, especially when using ASM to manage HDD based LUNS and Flash. So using redundant HBAs paths, FC switches and array controllers provide for no single point of failure in this solution. But the granularity availability can only be as small as a single RAC node for Small and Medium configurations.
  • #45 Key Points: Virtually all systems that use Oracle, implement Shared Memory services for Oracle performance. And virtually all are based on System V shared memory. Key advantage is eliminating paging among the average 200-300 Oracle processes in deployments of this size. However Shared Memory makes it difficult for customer to use Dynamic SGA and Dynamic Reconfiguration.
  • #46 Key points: So Oracle enhanced ISM and made it Dynamic. While IBM Dynamic LPARs support dynamic memory and hence Dynamic SGA, IBM dynamic sizing does not work for “pinned” pages.
  • #47 Key Points: Oracle Database on Oracle hardware provides more value in relevant, differentiated availability than the competition. IBM requires a full reboot, not just an instance restart, when doing even CPU or memory board swaps. With careful selection of SGA_MAX_SIZE in init.ora, you can even eliminate Oracle instance restarts on Oracle hardware. But if you need to change SGA_MAX_SIZE, the database must be restarted to re-read init.ora. This level of availability is due to the sum of integrated features in the entire stack: application to disk.
  • #50 Statspack is a set of performance monitoring and reporting utilities, provided by Oracle starting from Oracle 8i, which help determine how the DB is performing. This is a Statspack report showing the Top 5 Timed Events. The following describes each event and whether Flash may help it:
– db file sequential read: caused by single-block reads of a table or index by the Oracle Database, generally an index read. The time spent waiting on this event can be greatly reduced by moving the indexes to Flash.
– CPU time: the amount of time the Oracle database spent processing SQL statements, parsing statements, or managing the buffer cache. Generally best reduced by tuning the SQL statements and procedures or increasing the server's CPU resources; generally not helped by Flash.
– log file sync: caused by waiting for LGWR (LoG Writer), which writes the redo log buffers to the online redo log files, posted after a session performs a commit. Can be tuned by reducing the number of commits. Placing the redo logs on Flash can also help; see the next item.
– log file parallel write: caused by waiting for writes of the redo records to the redo log files. Can be greatly alleviated by using Flash for all copies of the redo logs.
– db file scattered read: caused by multi-block reads of a table or index, generally a full table scan of the data tables. The time spent waiting on this event can be greatly reduced by moving some of the data files to Flash.
Other events not shown in this particular report to look for:
– log file single write: caused by waiting for writes of the redo records to the redo log files. Can be helped by using Flash for some or all copies of the redo logs.
– free buffer wait: occurs when a session needs a free buffer and cannot find one. A slow DBWR (DataBase Writer, which writes data from the SGA to the Oracle database files) that cannot quickly flush dirty blocks from the buffer cache can cause this. Moving the files receiving the majority of the writes to Flash can help. If this wait is not caused by poor I/O write capacity, you can tune your instance by increasing the buffer cache.
– control file parallel write: caused by waiting on writes to the control files. Moving the control files onto Flash can help alleviate this wait.
– buffer busy waits: primarily caused by contention for a block that is being used in a non-sharable way (so that a read/write cannot be performed until the process using it is complete). Increasing the speed of the disk system by using Flash can alleviate this.
– direct path read: caused by reads that skip the database buffer. If many sorts and hashes are taking place, this can be caused by slow access to the TEMP space. Moving the TEMP space to Flash can reduce this event.
– direct path write: caused by writes that skip the database buffer. If many sorts and hashes are taking place, this can be caused by slow access to the TEMP space. Moving the TEMP space to solid state disks can help reduce this event.
Further Notes – ADDM and AWR were introduced with Oracle 10g.
ADDM and AWR are extra options to the Enterprise Manager tool, partly in response to the growing complexity and cost of managing and tuning large databases. While AWR/ADDM was a significant enhancement to Statspack, even the best database tuning cannot fix slow disk subsystems; later, SWAT is introduced as another tool to help in this area. The Automatic Database Diagnostic Monitor (ADDM) analyzes data in the Automatic Workload Repository (AWR) to identify potential performance bottlenecks. For each identified issue it locates the root cause and provides recommendations for correcting the problem. An ADDM analysis task is performed, and its findings and recommendations stored in the database, every time an AWR snapshot is taken, provided the STATISTICS_LEVEL parameter is set to TYPICAL or ALL. The ADDM analysis includes: CPU load; memory usage; I/O usage; resource intensive SQL; resource intensive PL/SQL and Java; RAC issues; application issues; database configuration issues; concurrency issues; object contention. AWR features: the Automatic Workload Repository (AWR) collects, processes and maintains performance statistics for problem detection and self-tuning purposes. The AWR collects performance statistics including: wait events used to identify performance problems; time model statistics indicating the amount of DB time associated with a process, from the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views; Active Session History (ASH) statistics from the V$ACTIVE_SESSION_HISTORY view; some system and session statistics from the V$SYSSTAT and V$SESSTAT views; object usage statistics; resource intensive SQL statements. The repository is a source of information for several other Oracle features including: Automatic Database Diagnostic Monitor, SQL Tuning Advisor, Undo Advisor, Segment Advisor.
  • #54 This assumes IOU N and N+1 will be part of the same domain. Each possible domain needs a Base I/O card in one of its IOUs. With multiple IOUs per domain, it is possible to reduce Base I/O cards and domain-specific HDDs.
  • #55 I/O is spread evenly across all available disk drives to prevent hot spots and maximize performance. ASM eliminates the need for over provisioning and maximizes storage resource utilization facilitating database consolidation. Benefit of ASM: - Performance of RAW with the benefits of a LVM. Performs automatic online redistribution after the incremental addition or removal of storage capacity without impacting performance. (Extent mirroring, not disk mirroring). Maintains redundant copies of data to provide high availability, or leverage 3rd party RAID functionality. - Supports Oracle Database 10g as well as Oracle Real Application Clusters (RAC). Capable of leveraging 3rd party multi-pathing technologies. For simplicity and easier migration to ASM, an Oracle Database 10g Release 2 database can contain ASM and non-ASM files. Any new files can be created as ASM files whilst existing files can also be migrated to ASM. RMAN commands enable non-ASM managed files to be relocated to an ASM disk group. Oracle Enterprise Manager can be used to manage ASM disk and file management activities. ASM reduces Oracle Database cost and complexity without compromising performance or availability.
  • #56 Only move data proportional to storage added No need for manual I/O tuning: hot spots rebalanced.
  • #57 This shows how the Oracle Optimized Solution for Oracle Database fits into and can be protected as part of an overall Oracle Optimized Solution for Oracle Secure Backup
  • #58 IB is recommended over 10GE for the backup network as it provides better efficiency and lower cost per port (assuming media servers and database servers within four racks of each other using copper IB cables). The protocol used for the backup is IPoIB. This also makes it easier to add a ZFSSA IB connection to the OSB solution. Media server to tape drive connections use fibre-channel distance rules. Depending on SLAs, the media servers and tape drives used by the Optimized Solution for Database can be shared for backup of other infrastructure as part of the Optimized Solution for OSB.
  • #59 This presentation is designed to give the audience some background on the Systems for Oracle Database: what they are, and Oracle's relevant technical differentiation as applied to systems for the database, both as background on the unique technologies Oracle offers and as some of the underlying evidence of their superiority, and finally the recommended practices that allow deployments to achieve high levels of non-stop operation, scaling, investment protection and simplicity.
  • #60 Note today that if your deployment will need Flash based on AWR/Statspack, and it is RAC (Small/Medium), you will need to be on 11gR2 to run Database Flash Cache.
  • #61 Setting up Database Flash Cache is deceptively simple: add two simple statements in init.ora db_flash_cache_file=/dev/xx/xx where /dev/xx/xx is the aggregation of the F20 or F5100 FMods. Recommend using ASM or SVM. SVM has performed better in benchmarks to aggregate the meta-device with apparent shorter service times using striping, but is more I/O rate limited than ASM. So if many concurrent PGA sessions, you may find ASM faster. Both are good choices with any raw device being acceptable. Do not use file systems to aggregate the flash cache file as performance will suffer compared to raw devices. The second parameter, db_flash_cache_size= nn is the size of the flash cache in GB. When SGA buffer cache blocks are evicted from SGA and placed on Flash Cache, single instance DB’s will use about 100 bytes for pointers to each 8KB block (default oracle Block Size) For RAC nodes, each SGA will need to hold about 200 bytes for each cache buffer block. Also note that while the buffer cache is global and can be read across nodes RAC, the local flash cache misses are NOT read across nodes: only within the local node.
  • #62 This example was used to determine the limits of 11gR2 Database Flash Cache performance. As such, this test was run using a Read Only (Query) workload. Ultimately, this meant DB file Sequential read was eliminated as a wait event.
  • #63 Key Points: "db file sequential read" is still the key indicator. Accelerates write- as well as read-intensive workloads: indexes, hot tables, Flash Reco, etc. Requires that you manage it vs. the database (i.e., SFC). Not for RAC now: RAC needs shared storage; the SAS F5100 is 'sharable,' but SATA Flash Modules are not. Requires at least 2:1 Flash to storage with mirroring, 3:1 for ASM High Redundancy (triple mirroring). Not recommended for logs: NVRAM is faster than Flash, especially on the 6780.
  • #64 StorageTek Workload Analysis Tool (SWAT) charts follow to describe why this can occur, based upon the metrics and data gathered during actual runs.
  • #65 This StorageTek Workload Analysis Tool (SWAT) chart shows a comparison of response times for HDDs versus FlashFire. It was produced from a live database supporting over 200 million objects, with the same activity and the I/O response times acquired for each run brought together on the chart. The blue color shows I/O response times for the HDDs and the red color shows I/O response times for Oracle's Sun FlashFire technology. Conclusion — one can easily see how SWAT has revealed that the response time is at times over 100 times faster for FlashFire than for HDDs! BTW — SWAT now has a great feature which will provide information on given storage to reveal whether the storage being used is a good candidate for Flash. Further information on SWAT can be found at http://sun.systemnews.com/articles/133/3/storage/21524
  • #66 This StorageTek Workload Analysis Tool (SWAT) chart shows a comparison of MB/s for HDDs versus FlashFire. It was produced from a live database supporting over 200 million objects, with the same activity and the I/O bandwidth acquired for each run brought together on the chart. The blue color shows MB/s for the HDDs and the red color shows MB/s for Oracle's Sun FlashFire technology. Conclusion — one can easily see how SWAT has revealed that the MB/s is over 10 times greater for FlashFire than for HDDs! BTW — SWAT now has a great feature which will provide information on given storage to reveal whether the storage being used is a good candidate for Flash. Further information on SWAT can be found at http://sun.systemnews.com/articles/133/3/storage/21524
  • #67 Two new parameters are used with setting up the flash cache. The alter command can be used to dynamically set the size of the flash cache initially. In other words one can turn this on or off as desired dynamically without shutting down the database. NOTE: you can use alter system command to set db_flash_cache_size to zero to disable the flash cache. You can also use alter system command to set the flash cache back to its original size to re-enable it. However, dynamically changing the size of the flash cache once set is not supported.