Oracle Clusterware and Private Network Considerations - Practical Performance Management for Oracle RAC

  1. Oracle Clusterware and Private Network Considerations - Practical Performance Management for Oracle RAC<br />November 12, 2009<br />1<br />Guenadi Nedkov Jilevski<br />
  2. Agenda<br />Oracle RAC Fundamentals and Infrastructure.<br />Analysis of Cache Fusion Impact on RAC.<br />Private Interconnect Considerations.<br />Aggregation.<br />Common Problems and Symptoms - from Cache Fusion wait events and statistics.<br />Diagnostics and Problem Troubleshooting.<br />Q and A<br />November 12, 2009<br />2<br />
  3. Oracle RAC Fundamentals and Infrastructure<br />Oracle RAC Architecture<br />November 12, 2009<br />3<br />
  4. Oracle RAC Fundamentals and Infrastructure<br />Function and Processes of Global Enqueue Services (GES) and Global Cache Services (GCS)<br />November 12, 2009<br />4<br />
  5. Oracle RAC Fundamentals and Infrastructure<br />Global Buffer Cache<br />November 12, 2009<br />5<br />
  6. Analyzing Cache Fusion Impact on RAC<br />The cost of block access and cache coherency is represented by:<br />Global Cache Services statistics<br />Global Cache Services wait events<br />The response time for Cache Fusion transfers is determined by:<br />Overhead of the physical interconnect components<br />IPC protocol<br />GCS protocol<br />The response time is generally not affected by disk I/O, except for the occasional log write performed when sending a dirty buffer to another instance in a write-read or write-write situation.<br />November 12, 2009<br />6<br />
  7. Analyzing Cache Fusion Impact on RAC<br />Typical Latencies for RAC Operations<br />November 12, 2009<br />7<br /><ul><li>CR block request time = build time + flush time + send time</li><li>Current block request time = pin time + flush time + send time</li><li>Latencies from V$SYSSTAT</li><li>Other latencies may be seen in V$SEGMENT_STATISTICS</li></ul>
  8. Analyzing Cache Fusion Impact on RAC<br />Wait Events for RAC<br />Wait events help to analyze what sessions are waiting for.<br />Wait times are attributed to events that reflect the outcome of a request:<br />Placeholders while waiting – wait_time = 0<br />Placeholders after waiting – wait_time != 0<br />Global cache waits are summarized in a broader category called the Cluster Wait Class.<br />These wait events are used in ADDM to enable Cache Fusion diagnostics.<br />November 12, 2009<br />8<br />
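The two latency decompositions on the slide above can be sketched as trivial helpers. The component timings below are hypothetical illustrative values in milliseconds; in practice they come from GCS statistics in V$SYSSTAT or an AWR report.

```python
# Sketch of the CR and current block request time decompositions.
# All component times here are made-up example values in milliseconds.

def cr_block_request_time(build_ms, flush_ms, send_ms):
    """CR block request time = build time + flush time + send time."""
    return build_ms + flush_ms + send_ms

def current_block_request_time(pin_ms, flush_ms, send_ms):
    """Current block request time = pin time + flush time + send time."""
    return pin_ms + flush_ms + send_ms

# Example: a transfer dominated by the log flush on the serving instance.
print(cr_block_request_time(0.5, 1.25, 0.25))       # 2.0
print(current_block_request_time(0.5, 1.25, 0.25))  # 2.0
```

The point of the decomposition is diagnostic: a large flush component points at log writer latency on the serving instance, while a large send component points at the interconnect or at LMS scheduling.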
  9. Analyzing Cache Fusion Impact on RAC<br />Wait Events Views<br />November 12, 2009<br />9<br />
  10. Analyzing Cache Fusion Impact on RAC<br />November 12, 2009<br />10<br />Global Cache Wait Events: Overview<br />
  11. Analyzing Cache Fusion Impact on RAC<br />November 12, 2009<br />11<br />2-way Block Request: Example<br />
  12. Analyzing Cache Fusion Impact on RAC<br />November 12, 2009<br />12<br />3-way Block Request: Example<br />
  13. Analyzing Cache Fusion Impact on RAC<br />November 12, 2009<br />13<br />2-way Grant: Example<br />
  14. Analyzing Cache Fusion Impact on RAC<br />Enqueues are synchronous.<br />Enqueues are global resources in RAC.<br />The most frequent waits are for:<br />TX – row lock waits or ITL waits<br />TM – Table Manipulation enqueue<br />TA – Transaction Recovery enqueue<br />SQ – Sequence Generation enqueue<br />HW – High Watermark enqueue<br />US – Undo Segment enqueue, used to manage undo segment extension<br />These waits may constitute a serious serialization point.<br />November 12, 2009<br />14<br />Global Enqueue Waits: Overview<br />
  15. Analyzing Cache Fusion Impact on RAC<br />Use V$SYSSTAT to characterize the workload.<br />Use V$SESSTAT to monitor important sessions.<br />V$SEGMENT_STATISTICS includes RAC statistics.<br />RAC-relevant statistics groups are:<br />Global Cache Service statistics<br />Global Enqueue Service statistics<br />Statistics for messages sent<br />V$ENQUEUE_STATISTICS determines the enqueues with the highest impact.<br />V$INSTANCE_CACHE_TRANSFER breaks down GCS statistics into block classes.<br />November 12, 2009<br />15<br />Session and System Statistics<br />
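Characterizing the workload from V$SYSSTAT typically means turning raw counters into average latencies. A minimal sketch, assuming counters sampled over an interval and copied into a dictionary; Oracle time statistics are kept in centiseconds, so multiplying by 10 yields milliseconds. The counter values here are invented for illustration.

```python
# Sketch: deriving average global cache receive latencies from
# V$SYSSTAT-style counters. Time statistics are in centiseconds (1 cs = 10 ms).

def avg_receive_ms(receive_time_cs, blocks_received):
    """Average receive latency in ms = 10 * time(cs) / blocks received."""
    if blocks_received == 0:
        return 0.0
    return 10.0 * receive_time_cs / blocks_received

# Hypothetical counters sampled over an interval:
stats = {
    "gc cr block receive time": 1500,      # centiseconds
    "gc cr blocks received": 30000,
    "gc current block receive time": 900,  # centiseconds
    "gc current blocks received": 20000,
}

cr_ms = avg_receive_ms(stats["gc cr block receive time"],
                       stats["gc cr blocks received"])
cur_ms = avg_receive_ms(stats["gc current block receive time"],
                        stats["gc current blocks received"])
print(f"avg gc cr block receive time: {cr_ms:.2f} ms")        # 0.50 ms
print(f"avg gc current block receive time: {cur_ms:.2f} ms")  # 0.45 ms
```

Averages in the sub-millisecond to low-millisecond range are consistent with the typical GbE latencies on the earlier slide; values far above that warrant the interconnect and LMS checks that follow.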
  16. Private Interconnect Considerations<br />November 12, 2009<br />16<br />IPC Configuration<br />
  17. Private Interconnect Considerations<br />November 12, 2009<br />17<br />Infrastructure Network Packet Processing<br />
  18. Private Interconnect Considerations<br />November 12, 2009<br />18<br />Network Packet Processing: Layers, Queues and Buffers<br />
  19. Private Interconnect Considerations<br />The network between the nodes of a RAC cluster must be private.<br />NICs must have the same name across all nodes in the RAC cluster.<br />Supported links: GbE, InfiniBand<br />Supported transport protocols: UDP, RDS<br />Use multiple or dual-ported NICs for redundancy (HA), load balancing, load spreading and increased bandwidth via NIC bonding/aggregation.<br />Large (jumbo) frames for GbE are recommended if the global cache workload requires them.<br />Bandwidth requirements depend on several factors (e.g. buffer cache size, number of CPUs per node, access patterns) and cannot be predicted precisely for every application.<br />For OLTP, 1 Gb/sec is usually sufficient for performance and scalability.<br />DSS/DW systems should be designed with &gt; 1 Gb/sec capacity.<br />November 12, 2009<br />19<br />Infrastructure: Private Interconnect<br />
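A rough back-of-the-envelope sizing sketch for the bandwidth point above: data traffic is dominated by block transfers at the database block size, plus small GCS/GES messages. The per-second rates and the ~200-byte average message size below are assumptions for illustration, not measured values.

```python
# Rough estimate of interconnect utilization from global cache rates.
# Rates and the average message size are illustrative assumptions.

def interconnect_bytes_per_sec(blocks_per_sec, block_size=8192,
                               msgs_per_sec=0, avg_msg_bytes=200):
    """Traffic ~= block transfers * block size + messages * avg msg size."""
    return blocks_per_sec * block_size + msgs_per_sec * avg_msg_bytes

# Hypothetical OLTP workload: 5,000 block transfers/sec, 20,000 messages/sec.
bps = interconnect_bytes_per_sec(5000, 8192, 20000, 200)
print(f"{bps * 8 / 1e9:.2f} Gbit/s")  # ~0.36 Gbit/s: fits within 1 GbE
```

Note this ignores protocol overhead and retransmissions, and it is an average: bursts, reconfiguration traffic and instance recovery all need headroom, which is one reason DSS/DW designs should budget more than 1 Gb/sec.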
  20. Private Interconnect Considerations<br />Important settings:<br />Negotiated top bit rate and full duplex mode<br />NIC ring buffers<br />Ethernet flow control settings<br />CPU(s) receiving network interrupts<br />Verify your setup:<br />CVU performs checks<br />Load testing eliminates the potential for problems<br />AWR and ADDM give estimates of link utilization<br />Buffer overflows, congested links and flow control can have severe consequences for performance.<br />Block access latencies increase when CPU(s) are busy and run queues are long.<br />Immediate LMS scheduling is critical for predictable block access latencies when CPU is &gt; 80% busy.<br />Fewer and busier LMS processes may be more efficient; monitor their CPU utilization.<br />Caveat: 1 LMS can be good for runtime performance but may impact cluster reconfiguration and instance recovery time.<br />The default is good for most requirements; the gcs_server_processes init parameter overrides the default.<br />Higher priority for LMS is the default; the implementation is platform-specific.<br />November 12, 2009<br />20<br />Infrastructure: IPC Configuration and Operating System<br />
  21. Private Interconnect Considerations<br />The interconnect should be a dedicated, non-routable subnet mapped to a single dedicated, non-shared VLAN.<br />If VLANs are ‘trunked’, the interconnect VLAN traffic should not exceed the access switch layer.<br />Minimize the impact of Spanning Tree events.<br />Monitor the switch(es) for congestion.<br />Avoid QoS definitions that may negatively impact interconnect performance.<br />NIC driver dependent – DEFAULTS GENERALLY SATISFACTORY:<br />Confirm flow control: rx=on, tx=off<br />Confirm full bit rate (1000) for the NICs<br />Confirm full duplex auto-negotiate<br />Ensure NIC names/slots are identical on all nodes<br />Configure interconnect NICs on the fastest PCI bus<br />Ensure compatible switch settings:<br />802.3ad on NICs = 802.3ad on switch ports<br />MTU=9000 on NICs = MTU=9000 on switch ports<br />FAILURE TO CONFIGURE THE NICS AND SWITCHES CORRECTLY WILL RESULT IN SEVERE PERFORMANCE DEGRADATION AND NODE FENCING.<br />November 12, 2009<br />21<br />The Interconnects, VLANs and NIC Settings<br />
  22. Private Interconnect Considerations<br />November 12, 2009<br />22<br />
  23. Aggregation<br />Cisco EtherChannel based on 802.3ad<br />AIX EtherChannel<br />HP-UX Auto Port Aggregation<br />Sun Trunking, IPMP, GLD<br />Linux bonding (only certain modes)<br />Windows NIC teaming<br />Aggregation methods:<br />Load balance/failover/load spreading<br />Spread on sends/serialize on receives<br />Active/Standby<br />Oracle interconnect requirements:<br />Both send- and receive-side load balancing<br />NIC and switch port failure detection<br />November 12, 2009<br />23<br />
  24. Common Problems and Symptoms<br />gc [current][cr] block lost: This event shows block losses during transfers. High values indicate IPC or downstream network problems; the ‘request retry’ event is likely to be seen as well.<br />global cache blocks corrupt: This statistic shows whether any blocks were corrupted during transfers. If high values are returned, there is probably an IPC, network or hardware problem.<br />global cache open s and global cache open x: The initial access of a particular data block by an instance generates these events. The duration of the wait should be short, and the completion of the wait is most likely followed by a read from disk. This wait results from requested blocks not being cached in any instance in the cluster database. Pre-load heavily used tables into the buffer caches.<br />global cache null to s and global cache null to x: These events are generated by inter-instance block pings across the network, i.e. two instances exchanging the same block back and forth. Reduce the number of rows per block to eliminate the need for block swapping between two instances in the RAC cluster.<br />global cache cr request: This event is generated when an instance has requested a consistent read data block and the block has not yet arrived at the requesting instance. This is a placeholder event; look for other gc events.<br />gc buffer busy: This event can be associated with disk I/O contention, for example slow disk I/O due to a rogue query. Slow concurrent scans can cause buffer cache contention. However, note that there can be multiple symptoms for the same cause; it can be seen together with the ‘db file scattered read’ event. Global cache access and serialization contribute to this event. Serialization is likely to be due to log flush time on another node or immediate block transfers.<br />November 12, 2009<br />24<br />Wait events worth investigating<br />
  25. Common Problems and Symptoms<br />congested: The events that contain ‘congested’ suggest CPU or LMS saturation, long-running queries, swapping, or network configuration issues. Maintain a global view and remember that symptom and cause can be on different instances.<br />busy: The events that contain ‘busy’ indicate contention. They need investigation by drilling down into either the SQL with the highest cluster wait time or the segment statistics with the highest block transfers. Also look at objects with the highest number of block transfers and global serialization.<br />gc [current/cr] [2/3]-way – Increase private interconnect bandwidth and decrease private interconnect latency.<br />gc [current/cr] grant 2-way – Increase private interconnect bandwidth and decrease private interconnect latency.<br />gc [current/cr] [block/grant] congested – The block or grant was received eventually, but with a delay caused by intensive CPU consumption, lack of memory, LMS overload due to queued work, paging or swapping. This is worth investigating, as it provides room for improvement. We will look at this later.<br />gc [current/cr] block busy – Received but not sent immediately due to high concurrency or contention: the block is busy and cannot be sent immediately for a variety of Oracle-side reasons.<br />gc current grant busy – The grant is received, but delayed due to many shared block images or load.<br />gc [current/cr] [failure/retry] – Failure means the block image could not be received; retry means the problem recovered and the block image was eventually received after a retry. Investigate IPC or downstream network problems.<br />November 12, 2009<br />25<br />Wait events worth investigating<br />
  26. Diagnostics and Problem Determination<br />Tune for a single instance first.<br />Then tune for RAC:<br />Instance recovery<br />Interconnect traffic<br />Points of serialization can be exacerbated<br />RAC reactive tuning tools:<br />Specific wait events<br />System and enqueue statistics<br />Enterprise Manager performance pages<br />AWR and ASH reports<br />RAC proactive tools:<br />AWR snapshots<br />ADDM reports<br />November 12, 2009<br />26<br />
  27. Diagnostics and Problem Determination<br />Application tuning is often the most beneficial:<br />Resizing and tuning the buffer cache.<br />Reducing long full-table scans in OLTP systems.<br />Using Automatic Segment Space Management.<br />Increasing sequence caches.<br />Using partitioning to reduce inter-instance traffic.<br />Avoiding unnecessary parsing.<br />Minimizing locking usage.<br />Removing unselective indexes.<br />Configuring the interconnect properly.<br />November 12, 2009<br />27<br />Most common RAC tuning tips<br />
  28. Diagnostics and Problem Determination<br />November 12, 2009<br />28<br />
  29. Oracle Clusterware and Private Network Considerations - Practical Performance Management for Oracle RAC<br />November 12, 2009<br />29<br />Questions<br />&<br />Answers<br />
