
Oracle Real Application Clusters System Tests With Demo

Published on 22 Feb 2014 at OTNYathra 2014, Mumbai & Bangalore, India



  1. Imagination at work. Ajith Narayanan, Technical Lead - Oracle ERP, GE Healthcare, Bangalore, India. Bangalore, Feb 27th 2014. Oracle Real Application Clusters System Tests With Demo
  2. Who am I? Ajith Narayanan, Technical Lead - Oracle ERP Configuration Management, GE Healthcare, Bangalore, India. 10 years of Oracle Apps & Oracle DBA experience.
     Blogger: http://oracledbascriptsfromajith.blogspot.com
     Website Chair (2011-2013): http://www.oracleracsig.org - Oracle RAC SIG
     Member: OAUG, AIOUG, RAC SIG
  3. Agenda
     Real Application Clusters Testing Objectives
     Oracle Technologies Used For Tests
     • Test 1: Planned Node Reboot
     • Test 2: Unplanned Node Failure of OCR Master
     • Test 3: Restart Failed Node
     • Test 4: Restart All Nodes Same Time
     • Test 5: Unplanned Instance Failure
     • Test 6: Planned Instance Termination
     • Test 7: Clusterware and Fencing
     • Test 8: Service Failover
     • Test 9: Public Network Failure
     • Test 10: Interconnect Network Failure
     Sample Cluster Callout Script
     Q & A
  4. Real Application Clusters Testing Objectives
     • To verify that the system has been installed and configured correctly: check that nothing is broken
     • To verify that basic functionality still works in a specific environment and for a specific workload
     • To make sure that the system will achieve its objectives, in particular its high availability and performance objectives
  5. Oracle Technologies Used For Tests
     • Fast Application Notification (FAN) - Notification mechanism that alerts applications to service-level changes of the database
     • Fast Connection Failover (FCF) - Utilizes FAN events to enable database clients to proactively react to down events by quickly failing over connections to surviving database instances
     • Transparent Application Failover (TAF) - Allows connections to be automatically re-established to a surviving database instance if the instance servicing the initial connection fails; however, in-flight insert, update and delete transactions are rolled back (12c overcomes this problem)
     • Runtime Connection Load Balancing (RCLB) - Provides intelligence about the current service level of the database instances to application connection pools. This increases application performance by routing work to the least loaded instances (dynamic workload balancing)
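     As a companion to the TAF bullet, here is a minimal tnsnames.ora sketch of a TAF-enabled connect descriptor; the SCAN host rac-scan is a placeholder, and the service name svctest matches the service created in Test 8:

         SVCTEST =
           (DESCRIPTION =
             (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
             (CONNECT_DATA =
               (SERVICE_NAME = svctest)
               (FAILOVER_MODE =
                 (TYPE = SELECT)    # in-flight SELECTs resume on the surviving instance
                 (METHOD = BASIC)   # reconnect only at failover time
                 (RETRIES = 20)
                 (DELAY = 5)
               )
             )
           )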
  6. Test 1: Planned Node Reboot
     Procedure
     • Start client workload and identify the instance with the most client connections (see the query below)
     • Reboot the node where the most loaded instance is running
     • For AIX, HPUX, Windows: "shutdown -r"; for Linux: "shutdown -r now"; for Solaris: "reboot"
     Expected Results
     1. The instances and other Clusterware resources on that node go offline (see the 'SERVER' field of "crsctl stat res -t" output)
     2. The node VIP fails over to a surviving node and will show a state of "INTERMEDIATE" with state details of "FAILED_OVER"
     3. The SCAN VIP(s) that were running on the rebooted node will fail over to surviving nodes
     4. The SCAN Listener(s) running on that node will fail over to a surviving node
     5. Instance recovery is performed by another instance
     6. Services are moved to available instances
     7. Client connections are moved / reconnected to surviving instances. With TAF configured, select statements should continue; active DML will be aborted
     8. After the database reconfiguration, surviving instances continue processing their workload
     Measures
     1. Time to detect node or instance failure and time to complete instance recovery (the alert log helps!)
     2. Time to restore client activity to the same level
     3. Time before the failed instance is restarted automatically by Clusterware and is accepting new connections
     4. Successful failover of the SCAN VIP(s) and SCAN Listener(s)
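     A quick way to find the instance carrying the most client sessions for the first procedure step; a sketch to run from any instance as a DBA user:

         SQL> select inst_id, count(*) session_count
                from gv$session
               where type = 'USER'
               group by inst_id
               order by 2 desc;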
  7. Test 2: Unplanned Node Failure of OCR Master
     Procedure
     • Start client workload and identify the node that is the OCR master using the following grep command from any of the nodes:
       grep -i "OCR MASTER" $GI_HOME/log/<node_name>/crsd/crsd.l*
       NOTE: Windows users must manually review the $GI_HOME/log/<node_name>/crsd/crsd.l* logs to determine the OCR master
     • Power off the node that is the OCR master
     Expected Results
     1. The instances and other Clusterware resources on that node go offline (see the 'SERVER' field of "crsctl stat res -t" output)
     2. The node VIP fails over to a surviving node and will show a state of "INTERMEDIATE" with state_details of "FAILED_OVER"
     3. The SCAN VIP(s) that were running on the failed node will fail over to surviving nodes
     4. The SCAN Listener(s) running on that node will fail over to a surviving node
     5. Instance recovery is performed by another instance
     6. Services are moved to available instances
     7. Client connections are moved / reconnected to surviving instances. With TAF configured, select statements should continue; active DML will be aborted
     8. After the database reconfiguration, surviving instances continue processing their workload
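     Once the node is powered off, the same grep from the procedure can be rerun on a surviving node to confirm that a new OCR master was elected; a sketch reusing the deck's own command:

         grep -i "OCR MASTER" $GI_HOME/log/`hostname -s`/crsd/crsd.l* | tail -1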
  8. Test 3: Restart Failed Node
     Procedure
     • ajithpathiyil2:/home/oracle[RAC1]$ srvctl start instance -d RAC -i RAC1
     Expected Results
     1. On clusters having 3 or fewer nodes, one of the SCAN VIPs and Listeners will be relocated to the restarted node when Oracle Clusterware starts
     2. The VIP will migrate back to the restarted node
     3. Services that had failed over as a result of the node failure will NOT automatically be relocated back (see the relocation sketch below)
     4. Failed resources (ASM, listener, instance, etc.) will be restarted by the Clusterware
     Measures
     1. Time for all resources to become available again; check with "crsctl stat res -t"
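     Because failed-over services are not moved back automatically, they can be relocated manually once the instance is up; a sketch using the svctest service from Test 8, assuming it failed over to RAC2:

         srvctl relocate service -d RAC -s svctest -i RAC2 -t RAC1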
  9. Test 4: Restart All Nodes Same Time
     Procedure
     • Issue a reboot on all nodes at the same time
     • For AIX, HPUX, Windows: "shutdown -r"; for Linux: "shutdown -r now"; for Solaris: "reboot"
     Expected Results
     1. All nodes, instances and resources are restarted without problems
     Measures
     1. Time for all resources to become available again; check with "crsctl stat res -t"
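     A minimal sketch for issuing the reboot everywhere at once, assuming Linux nodes named rac1 and rac2 (placeholders) and passwordless ssh as root:

         for node in rac1 rac2; do
           ssh root@$node 'shutdown -r now' &   # fire the reboots in parallel
         done
         wait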
  10. Test 5: Unplanned Instance Failure
      Procedure
      • Start client workload, identify the single database instance with the most client connections and abnormally terminate that instance:
        For AIX, HPUX, Linux, Solaris: obtain the PID of the pmon process of the database instance:
          # ps -ef | grep pmon
        Kill the pmon process:
          # kill -9 <pmon pid>
        For Windows: obtain the thread ID of the pmon thread of the database instance by running:
          SQL> select b.name, p.spid from v$bgprocess b, v$process p where b.paddr=p.addr and b.name='PMON';
        Run orakill to kill the thread:
          cmd> orakill <SID> <Thread ID>
  11. Test 5: Unplanned Instance Failure (contd..)
      Expected Results
      1. One of the other instances performs instance recovery
      2. Services are moved to available instances, if a preferred instance failed
      3. Client connections are moved / reconnected to surviving instances (procedure and timings will depend on client types and configuration)
      4. After a short freeze, surviving instances continue processing the workload
      5. The failed instance will be restarted by Oracle Clusterware, unless this feature has been disabled
      Measures
      1. Time to detect instance failure
      2. Time to complete instance recovery; check the alert log of the recovering instance
      3. Time to restore client activity to the same level (assuming remaining nodes have sufficient capacity to run the workload)
      4. Duration of the database freeze during failover
      5. Time before the failed instance is restarted automatically by Oracle Clusterware and is accepting new connections
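      For measure 2, the recovery window can be read straight from the recovering instance's alert log; a sketch, where the diag path is an assumption for a database named RAC with RAC2 doing the recovery:

          grep -E "Beginning instance recovery|Completed instance recovery" \
              $ORACLE_BASE/diag/rdbms/rac/RAC2/trace/alert_RAC2.log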
  12. Test 6: Planned Instance Termination
      Procedure
      • Issue a 'shutdown abort'
      Expected Results
      1. One other instance performs instance recovery
      2. Services are moved to available instances, if a preferred instance failed
      3. Client connections are moved / reconnected to surviving instances (procedure and timings will depend on client types and configuration)
      4. The instance will NOT be automatically restarted by Oracle Clusterware, because the shutdown was user-invoked
      Measures
      1. Time to detect instance failure
      2. Time to complete instance recovery; check the alert log of the recovering instance
      3. Time to restore client activity to the same level (assuming remaining nodes have sufficient capacity to run the workload)
      4. Confirm that the instance is NOT restarted by Oracle Clusterware after the user-invoked shutdown
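      The same planned termination can be driven through the clusterware-aware tooling instead of SQL*Plus; a sketch using the deck's RAC database and RAC1 instance names:

          srvctl stop instance -d RAC -i RAC1 -o abort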
  13. Test 7: Clusterware and Fencing
      Node fencing is a general concept used by computer clusters to forcefully remove a malfunctioning node from the cluster. This preventive technique is a necessary measure to make sure no I/O from the malfunctioning node can occur, thus preventing data corruption and guaranteeing cluster integrity.
      Procedure
      • Start with a normal, running cluster with the database instances up and running
      • Monitor the Clusterware logfiles on each node. On each node, start a new window and run the following commands. The network heartbeats are associated with a timeout called misscount, set to 30 seconds as of 11g Release 1:
        ajithpathiyil1:/home/oracle[+ASM1]$ crsctl get css misscount
        30
        ajithpathiyil1:/home/oracle[+ASM1]$ oifcfg getif
        bond0 192.168.78.51 global public
        bond1 10.10.0.0 global cluster_interconnect
        ajithpathiyil1:]$ tail -f /u01/grid/oracle/product/11.2.0/grid_1/log/ajithpathiyil2/crsd/crsd.l*
        ajithpathiyil1:]$ tail -f /u01/grid/oracle/product/11.2.0/grid_1/log/`hostname -s`/cssd/ocssd.log
      • Down the private interconnect interface on one node:
        ajithpathiyil2:]$ ifconfig eth1 down
  14. Test 7: Clusterware and Fencing (contd..)
      Expected Results
      Following this command, watch the logfiles you began monitoring in the procedure above. You should see errors in those logfiles and eventually (it could literally take a minute or two) you will observe one node reboot itself. If you used ifconfig to trigger the failure, the node will rejoin the cluster and the instance should start automatically.
      Alert Log
      [cssd(2864)]CRS-1612:Network communication with node rac1 (1) missing for 50% of timeout interval. Removal of this node from cluster in 14.920 seconds
      ...
      [cssd(2864)]CRS-1610:Network communication with node rac1 (1) missing for 90% of timeout interval. Removal of this node from cluster in 2.900 seconds
      [cssd(2864)]CRS-1609:This node is unable to communicate with other nodes in the cluster and is going down to preserve cluster integrity
      More debugging information is written to the ocssd.bin process log file:
      [CSSD][1119164736](:CSSNM00008:)clssnmCheckDskInfo: Aborting local node to avoid splitbrain. Cohort of 1 nodes with leader 2, rac2, is smaller than cohort of 1 nodes led by node 1, rac1, based on map type 2
      [CSSD][1119164736]###################################
      [CSSD][1119164736]clssscExit: CSSD aborting from thread clssnmRcfgMgrThread
      [CSSD][1119164736]###################################
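      To finish the test, the downed interface can be restored and the rejoin watched; a sketch, assuming eth1 was downed with ifconfig as in the procedure:

          ifconfig eth1 up       # restore the private interconnect interface
          crsctl stat res -t     # watch the node rejoin and its resources come back online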
  15. Test 8: Service Failover
      Procedure
      Create a service:
      ajithpathiyil2:/home/oracle[RAC1]$ srvctl add service -d RAC -s svctest -r RAC1 -a RAC2 -P BASIC
      ajithpathiyil2:/home/oracle[RAC1]$ srvctl start service -d RAC -s svctest
      ajithpathiyil2:/home/oracle[RAC1]$ srvctl status service -d RAC -s svctest
      Service svctest is running on instance(s) RAC1
      ajithpathiyil2:/home/oracle[RAC1]$
      Warning! You should never directly change the SERVICE_NAMES init parameter on a RAC database. This parameter is maintained automatically by the Clusterware.
      Abort the instance the service is running on:
      SQL> show user
      USER is "SYS"
      SQL> select instance_name from v$instance;
      INSTANCE_NAME
      ----------------
      RAC1
      SQL> shutdown abort;
      ORACLE instance shut down.
      SQL>
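      After the abort, the service should have failed over to the available instance given with -a when it was created; a sketch continuing the example, with the expected output line as an assumption:

          srvctl status service -d RAC -s svctest
          # expected: Service svctest is running on instance(s) RAC2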
  16. Test 9: Public Network Failure
      Procedure
      • Unplug all network cables for the public network
        NOTE: It is recommended NOT to use ifconfig to down the interface, as this may leave the address still plumbed to the interface, leading to unexpected results.
      Expected Results
      Check with "crsctl stat res -t"
      1. The ora.*.network and listener resources will go offline for the node.
      2. SCAN VIPs and SCAN listeners running on the node will fail over to a surviving node.
      ajithpathiyil2:/home/oracle[grid]$ srvctl status scan
      SCAN VIP scan1 is enabled
      SCAN VIP scan1 is running on node ajithpathiyil2
      ajithpathiyil2:/home/oracle[grid]$ srvctl status scan_listener
      SCAN Listener LISTENER_SCAN1 is enabled
      SCAN listener LISTENER_SCAN1 is running on node ajithpathiyil2
      ajithpathiyil2:/home/oracle[grid]$
  17. Test 9: Public Network Failure (contd..)
      3. The VIP for the node will fail over to a surviving node.
      4. The database instance will remain up but will be unregistered with the remote listeners.
      5. Database services will fail over to one of the other available nodes.
      6. If TAF is configured, clients should fail over to an available instance.
      NODE VERSION=1.0 host=ajithpathiyil2 incarn=0 status=nodedown reason=public_nw_down timestamp=30-Aug-2009 01:56:12 reported=Sun Jan 30 01:56:13 CDT 2013
      NODE VERSION=1.0 host=ajithpathiyil2 incarn=147028525 status=nodedown reason=member_leave timestamp=30-Aug-2009 01:57:19 reported=Sun Aug 30 01:57:20 CDT 2013
      Measures
      1. Time to detect the network failure and relocate resources.
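      The two NODE lines above are FAN events as logged by a server-side callout. While the cables are unplugged, they can be watched live in the log written by the sample callout script at the end of this deck:

          tail -f $ORACLE_HOME/racg/usrco/`hostname`_uptime.log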
  18. Test 10: Interconnect Network Failure
      Procedure
      • Unplug all network cables for the interconnect network
        NOTE: It is recommended NOT to use ifconfig to down the interface, as this may leave the address still plumbed to the interface, leading to unexpected results
      Expected Results
      For 11.2.0.2 and above:
      1. CSSD will detect the split-brain situation and perform one of the following:
      2. In a two-node cluster, the node with the lowest node number will survive.
      3. In a multiple-node cluster, the largest sub-cluster will survive.
      4. On the node(s) being evicted, a graceful shutdown of Oracle Clusterware will be attempted. All I/O capable client processes will be terminated and all resources will be cleaned up. If process termination and/or resource cleanup does not complete successfully, the node will be rebooted.
      5. Assuming that the above has completed successfully, OHASD will attempt to restart the stack. In this case the stack will be restarted once the network connectivity of the private interconnect network has been restored. Review the following logs:
        $GI_HOME/log/<nodename>/alert<nodename>.log
        $GI_HOME/log/<nodename>/cssd/ocssd.log
  19. Test 10: Interconnect Network Failure (contd..)
      Measures
      For 11.2.0.2 and above:
      1. Oracle Clusterware will shut down gracefully; should the graceful shutdown fail (due to I/O-capable processes not terminating or resource cleanup not completing), the node will be rebooted
      2. Assuming the graceful shutdown of Oracle Clusterware succeeded, OHASD will restart the stack once network connectivity for the private interconnect has been restored
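      Once the interconnect is plugged back in, a quick way to confirm that OHASD has brought the full stack back on every node; a sketch:

          crsctl check cluster -all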
  20. Sample Cluster Callout Script
      #!/bin/ksh
      #
      # Author: Ajith Narayanan
      # http://oracledbascriptsfromajith.blogspot.com
      # Version 1.0
      # This callout script is extended to report/mail the affected WebLogic
      # services when any Oracle cluster event occurs.
      #
      umask 022
      FAN_LOGFILE=$ORACLE_HOME/racg/usrco/`hostname`_uptime.log
      EVENTLINE=$ORACLE_HOME/racg/usrco/`hostname`_eventline.log
      EVENTLINE_MID=$ORACLE_HOME/racg/usrco/`hostname`_eventline_mid.log
      MAIL_CONT=$ORACLE_HOME/racg/usrco/`hostname`_mail.log
      WEBLOGIC_DS=$ORACLE_HOME/racg/usrco/weblogic_ds

      # Append the raw FAN event (passed as arguments) with a timestamp,
      # then isolate the latest event line
      echo $* "reported="`date` >> $FAN_LOGFILE
      tail -1 $FAN_LOGFILE > $EVENTLINE

      # Transpose the space-separated event attributes so each
      # name=value pair lands on its own line
      awk '{ for (f = 1; f <= NF; f++) { a[NR, f] = $f } }
           NF > nf { nf = NF }
           END { for (f = 1; f <= nf; f++)
                   for (r = 1; r <= NR; r++)
                     printf a[r, f] (r==NR ? RS : FS) }' $EVENTLINE > $EVENTLINE_MID

      # Pull the individual attributes out of the transposed event
      SER=`grep "service=" $EVENTLINE_MID | awk -F= '{print $2}'`
      DB=`grep "database=" $EVENTLINE_MID | awk -F= '{print $2}'`
      INST=`grep "instance=" $EVENTLINE_MID | awk -F= '{print $2}'`
      HOST=`grep "host=" $EVENTLINE_MID | awk -F= '{print $2}'`
  21. Sample Cluster Callout Script (contd..)
      STAT=`grep "status=" $EVENTLINE_MID | awk -F= '{print $2}'`

      # Proceed only if every attribute could be parsed out of the event
      if [ -n "$SER" ] && [ -n "$DB" ] && [ -n "$INST" ] && [ -n "$HOST" ] && [ -n "$STAT" ]; then
        if [ "$STAT" = nodedown ]; then
          cat $EVENTLINE_MID > $MAIL_CONT
          echo "**============================SERVICES AFFECTED===============================**" >> $MAIL_CONT
          grep -i "${DB}_" $WEBLOGIC_DS >> $MAIL_CONT
        elif [ "$STAT" = up ]; then
          cat $EVENTLINE_MID > $MAIL_CONT
          echo "**============================SERVICES RESTORED===============================**" >> $MAIL_CONT
          grep -i "${DB}_" $WEBLOGIC_DS | grep "SERVICE_NAME=$SER" >> $MAIL_CONT
        else
          cat $EVENTLINE_MID > $MAIL_CONT
          echo "**============================SERVICES AFFECTED===============================**" >> $MAIL_CONT
          grep -i "${DB}_" $WEBLOGIC_DS | grep "SERVICE_NAME=$SER" >> $MAIL_CONT
        fi
        # Mail the event details plus the affected WebLogic data sources
        cat $MAIL_CONT | /bin/mail -s "Cluster $STAT event: $DB $INST $SER $HOST" ajithpathiyil@gmail.com
      fi
      rm -f $EVENTLINE $EVENTLINE_MID $MAIL_CONT
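      As a deployment note: server-side FAN callouts are ordinary executables placed in the Grid home's racg/usrco directory on every node, and they must be executable. A minimal install sketch, reusing the Grid home path from Test 7; the script name rac_callout.sh is an assumption:

          cp rac_callout.sh /u01/grid/oracle/product/11.2.0/grid_1/racg/usrco/
          chmod 755 /u01/grid/oracle/product/11.2.0/grid_1/racg/usrco/rac_callout.sh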
  22. Q & A
  23. References
      • Oracle RAC Assurance Team: Oracle RACCheck
      Thank you! Contact me: ajithpathiyil@gmail.com
