BEST PRACTICES OF ORACLE 10G/11G CLUSTERWARE: CONFIGURATION, ADMINISTRATION AND TROUBLESHOOTING
Kai Yu, Dell Oracle Solutions Engineering, Dell Inc.

This article aims to provide DBAs with a practical understanding and best practices of Oracle Clusterware configuration that go beyond the traditional installation steps. It discusses the public and private interconnect network and shared storage configuration, and shares experiences and tips for troubleshooting clusterware issues such as node eviction, along with the diagnostic tools that help with root cause analysis. It also covers clusterware administration methods as well as new 11g clusterware features such as cloning clusterware and adding a node to an existing cluster.

ORACLE CLUSTERWARE ARCHITECTURE
The Oracle Real Application Clusters (RAC) architecture provides great benefits to applications:
•  High availability: it automatically fails over database connections to another cluster node in the event of a database node failure or planned node maintenance.
•  Scalability: the Oracle RAC architecture can be scaled out to meet the growth of application workloads by adding nodes to the cluster.
Oracle Clusterware serves as the foundation for the Oracle RAC database. It provides a set of additional processes running on each cluster server (node) that allow the cluster nodes to communicate with each other, so that these nodes work together as if they were one server serving applications and end users. On each cluster node, Oracle Clusterware is configured to manage the Oracle processes known as cluster resources, such as databases, database instances, ASM instances, listeners, database services and virtual IP (VIP) services. Oracle Clusterware requires shared storage to store its two components: the voting disk for node membership and the Oracle Cluster Registry (OCR) for cluster configuration information. A private interconnect network is required to carry the network heartbeat among the cluster nodes. Oracle Clusterware consists of several process components which provide event monitoring, high availability features, process monitoring and group membership of the cluster. Figure 1 illustrates the architecture of Oracle Clusterware and how it is used in the Oracle RAC database environment. The following sections examine each of the components of the clusterware and their configuration.

HARDWARE CONFIGURATION FOR ORACLE CLUSTERWARE
As shown in figure 2, a typical cluster environment consists of one or more servers. In addition to the public network that applications use to access the servers, all the servers in the cluster are also connected by a second network called the interconnect network. The interconnect is a private network accessible only to the servers in the cluster and is connected by a private network switch. This interconnect network carries the most important heartbeat communication among the servers in the cluster. A redundant private interconnect configuration is recommended for a production cluster database environment.
Figure 1 Oracle Clusterware and Oracle RAC Architecture
Figure 2 Hardware Configuration of Oracle Clusterware

The cluster environment also requires shared storage that is connected to all the cluster servers. The shared storage can be SAN (Storage Area Network) or NAS (Network Attached Storage). Figure 2 shows an example of a SAN storage configuration. To achieve high availability and IO load balancing, redundant switches are recommended; the connections among the servers, the SAN storage switches and the shared storage are in a butterfly shape as shown in figure 2. Depending on the type of
the SAN storage, different IO cards are installed in the servers. For example, for Fibre Channel (FC) storage, HBA (Host Bus Adapter) cards are installed in the servers, and Fibre Channel switches and Fibre Channel cables are used to connect the servers with the FC storage. For iSCSI storage, regular network cards, Gigabit Ethernet switches and regular network cables are used to connect the servers with the storage.

ORACLE CLUSTERWARE COMPONENTS AND PROCESS ARCHITECTURE
Oracle Clusterware stores its two configuration files in the shared storage: a voting disk and the Oracle Cluster Registry (OCR).
•  The voting disk stores the cluster membership information, such as which RAC instances are members of the cluster.
•  The Oracle Cluster Registry (OCR) stores and manages information about the cluster resources managed by Oracle Clusterware, such as Oracle RAC databases, database instances, listeners, VIPs, services and applications. A multiplexed OCR is recommended to ensure high availability.
Oracle Clusterware consists of the following components that facilitate cluster operations. These components run as processes on Linux/Unix or as services on Windows:
•  Cluster Ready Services (CRS) manages cluster resources such as databases, instances, services, listeners, virtual IP (VIP) addresses and application addresses. It reads the resource configuration information from the OCR. It also starts, stops and monitors these resources and generates events when their status changes. Processes on Linux:
   root   9204     1 0 06:03 ?  00:00:00 /bin/sh /etc/init.d/init.crsd run
   root  10058  9204 0 06:03 ?  00:01:23 /crs/product/11.1.0/crs/bin/crsd.bin reboot
•  Cluster Synchronization Services (CSSD) manages node membership by checking the network heartbeats and the voting disk to detect the failure of any cluster node. It also provides group membership services and notifies the members of the cluster about membership status changes. Processes on Linux:
   root   9198     1 0 06:03 ?  00:00:22 /bin/sh /etc/init.d/init.cssd fatal
   root  10095  9198 0 06:03 ?  00:00:00 /bin/sh /etc/init.d/init.cssd oprocd
   root  10114  9198 0 06:03 ?  00:00:00 /bin/sh /etc/init.d/init.cssd oclsomon
   root  10151  9198 0 06:03 ?  00:00:00 /bin/sh /etc/init.d/init.cssd daemon
   oracle 10566 10151 0 06:03 ? 00:00:40 /crs/product/11.1.0/crs/bin/ocssd.bin
•  Event Management (EVM) publishes the events that are created by Oracle Clusterware using Oracle Notification Services (ONS). It communicates with CSS and CRS. Processes on Linux:
   root   9196     1 0 06:03 ?  00:00:00 /bin/sh /etc/init.d/init.evmd run
   root  10059  9196 0 06:03 ?  00:00:00 /bin/su -l oracle -c sh -c ulimit -c unlimited; cd /crs/product/11.1.0/crs/log/kblade2/evmd; exec /crs/product/11.1.0/crs/bin/evmd
   oracle 10060 10059 0 06:03 ?  00:00:02 /crs/product/11.1.0/crs/bin/evmd.bin
•  Oracle Notification Services (ONS) provides a publish-and-subscribe service for communicating Fast Application Notification (FAN) events. Processes on Linux:
   oracle 12063     1 0 06:06 ?  00:00:00 /crs/oracle/product/11.1.0/crs/opmn/bin/ons -d
   oracle 12064 12063 0 06:06 ?  00:00:00 /crs/oracle/product/11.1.0/crs/opmn/bin/ons -d
•  Oracle Process Monitor Daemon (OPROCD) is locked in memory and monitors the cluster. OPROCD provides I/O fencing. Starting with 10.2.0.4, it replaces the hangcheck-timer module on Linux. If OPROCD fails, Oracle Clusterware reboots the node. Processes on Linux:
   root   9198     1 0 06:03 ?  00:00:22 /bin/sh /etc/init.d/init.cssd fatal
   root  10095  9198 0 06:03 ?  00:00:00 /bin/sh /etc/init.d/init.cssd oprocd
   root  10465 10095 0 06:03 ?  00:00:00 /crs/product/11.1.0/crs/bin/oprocd run -t 1000 -m 500 -f
•  RACG extends the clusterware to support Oracle-specific requirements and complex resources. Processes on Linux:
   oracle 12039 1 0 06:06 ?  00:00:00 /opt/oracle/product/11.1.0/asm/bin/racgimon daemon ora.kblade2.ASM2.asm
   oracle 12125 1 0 06:06 ?  00:00:06 /opt/oracle/product/11.1.0/db_1/bin/racgimon startd test1db

SHARED STORAGE CONFIGURATION FOR ORACLE CLUSTERWARE
STORAGE REQUIREMENT
The two most important clusterware components, the voting disk and the OCR, must be stored in shared storage which is accessible to every node in the cluster. The shared storage can be block devices, raw devices, a clustered file system such as Oracle Cluster File System (OCFS or OCFS2), or a network file system (NFS) from a certified network attached storage (NAS) device. To verify whether a NAS device is certified for Oracle, check the Oracle Storage Compatibility Program list at the following website: http://www.oracle.com/technology/deploy/availability/htdocs/oscp.html. Since the OCR and voting disks play a crucial role in the clusterware configuration, a minimum of three voting disks and two mirrored copies of the OCR are recommended to ensure high availability. If a single copy of the OCR and voting disk is used, an external mirroring RAID configuration in the shared storage should be used to provide the redundancy. A cluster can have up to 32 voting disks.

PHYSICAL CONNECTIONS TO SHARED SAN STORAGE
To ensure high availability and scalability of the IO loads, fully redundant active-active IO paths are recommended to connect the server nodes and the shared storage. For SAN (Storage Area Network) storage such as Fibre Channel or iSCSI storage, these redundant paths include three components:
•  HBA cards/NIC cards: two HBA (Host Bus Adapter) cards are installed in each cluster server node for a Fibre Channel storage connection; multiple NIC cards are dedicated to iSCSI storage connections.
•  Storage switches: two Fibre Channel switches for Fibre Channel storage, or regular Gigabit Ethernet switches for iSCSI storage.
•  A shared storage array with two storage processors.
The connections among these three components are in a butterfly shape. As examples, figure 3 and figure 4 show how servers are connected with a Fibre Channel storage and an iSCSI storage respectively.
Figure 3 Connection of clustered servers with EMC Fibre Channel storage

In figure 3, additional IO paths are introduced through the redundant switches. This redundant configuration provides high availability and IO load balancing. For example, if an HBA, a switch or a storage controller fails, the IO path fails over to the remaining HBA, switch or storage controller. During normal operation, these active-active HBAs, switches and storage processors share the IO loads. For detailed information about how to configure FC storage as the shared storage for Oracle Clusterware and the Oracle database, refer to [6] in the reference list.
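Before relying on this failover behavior, it helps to confirm that the operating system actually sees both HBAs and that each reports an online link. A minimal check on Linux, assuming the standard sysfs fc_host attributes exposed by common FC HBA drivers (paths and host numbers vary by system), might look like:

# List the FC HBAs visible to the OS and their link state
for host in /sys/class/fc_host/host*; do
    echo "$host: $(cat $host/port_name) $(cat $host/port_state)"
done
# Both HBAs should report Online; a path stuck in Linkdown points to a
# cabling, switch zoning or HBA problem rather than a storage fault.

A similar sanity check for iSCSI is simply to confirm that each NIC dedicated to storage can reach its switch and that all the expected /dev/sd* devices appear in /proc/partitions.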
Figure 4 Connections of cluster servers with EqualLogic iSCSI storage

In figure 4, two servers connect with EqualLogic iSCSI storage through two Gigabit Ethernet switches. Each server uses multiple NIC cards to connect to two redundant iSCSI switches, each of which also connects to two redundant storage control modules. In the event of a NIC card, switch or control module failure, the IO path automatically fails over to the other NIC card, switch or control module. For detailed information about how to configure EqualLogic storage as the shared storage for Oracle Clusterware and the Oracle database, refer to [6] in the reference list.

MULTIPATH DEVICES OF THE SHARED STORAGE
With multiple IO paths established at the hardware level, operating systems such as Linux, as well as some third-party storage vendors, offer IO multipathing device drivers which combine the multiple IO paths into one virtual IO path and provide IO path failover and IO bandwidth aggregation.

One commonly used multipath driver is the native Linux Device Mapper on Enterprise Linux (RHEL5 or OEL5). The Device Mapper (DM) is installed through RPMs from the OS CD. Verify the installation of the RPMs by running this command:
$ rpm -qa | grep device-mapper
device-mapper-1.02.28-2.el5
device-mapper-event-1.02.28-2.el5
device-mapper-multipath-0.4.7-23.el5
device-mapper-1.02.28-2.el5

The following example shows how to use the Linux native Device Mapper to implement multipathing for shared iSCSI storage. After the iSCSI shared storage is configured, /proc/partitions lists the following iSCSI storage devices: sdb, sdc, sdd, sde, sdf, sdg. Identify the unique SCSI id for each device:
[root@kblade1 sbin]# /sbin/scsi_id -gus /block/sdb
36090a028e093fc906099540639aa2149
[root@kblade1 sbin]# /sbin/scsi_id -gus /block/sde
36090a028e093fc906099540639aa2149
[root@kblade1 sbin]# /sbin/scsi_id -gus /block/sdc
36090a028e093dc7c6099140639aae1c7
[root@kblade1 sbin]# /sbin/scsi_id -gus /block/sdf
36090a028e093dc7c6099140639aae1c7
[root@kblade1 sbin]# /sbin/scsi_id -gus /block/sdg
36090a028e093cc896099340639aac104
[root@kblade1 sbin]# /sbin/scsi_id -gus /block/sdd
36090a028e093cc896099340639aac104
The results show that sdb and sde are two paths to the same storage volume, as are sdc and sdf, and sdd and sdg. Each of the duplicated devices corresponds to one IO path, starting from one of the two NIC cards installed in the server. The next step is to configure the multipath driver to combine these IO paths into one virtual IO path by adding the following entries to the multipathing configuration file /etc/multipath.conf:
multipaths {
    multipath {
        wwid    36090a028e093dc7c6099140639aae1c7   #<--- for sdc and sdf
        alias   ocr1
    }
    multipath {
        wwid    36090a028e093fc906099540639aa2149   #<--- for sdb and sde
        alias   votingdisk1
    }
    multipath {
        wwid    36090a028e093cc896099340639aac104   #<--- for sdd and sdg
        alias   data1
    }
}
Then restart the multipath daemon and verify the alias names:
[root@kblade1 etc]# service multipathd restart
Stopping multipathd daemon:    [FAILED]
Starting multipathd daemon:    [  OK  ]
List the multipath devices by running:
[root@kblade1 etc]# multipath -ll
Verify the multipathing devices that are created:
[root@kblade1 etc]# ls -lt /dev/mapper/*
brw-rw---- 1 root disk 253, 10 Feb 18 02:02 /dev/mapper/data1
brw-rw---- 1 root disk 253,  8 Feb 18 02:02 /dev/mapper/votingdisk1
brw-rw---- 1 root disk 253,  9 Feb 18 02:02 /dev/mapper/ocr1
These multipathing devices are available for the voting disk and OCR as well as for ASM diskgroups.

Besides the native Linux Device Mapper shown above, some third-party vendors also offer multipath drivers, for example the EMC PowerPath driver for EMC Fibre Channel storage. With the EMC PowerPath driver, multiple IO paths share the I/O workload through an intelligent multipath load balancing feature, and the automatic failover feature ensures high availability in the event of a failure.
To install the EMC PowerPath and naviagent software on Linux, load two Linux RPMs:
rpm -ivh EMCpower.LINUX-5.1.2.00.00-021.rhel5.x86_64.rpm
rpm -ivh naviagentcli-6.24.2.5.0-1.noarch.rpm
Then start the naviagent daemon and the EMC PowerPath daemon:
service naviagent start
service PowerPath start
You will see the EMC pseudo devices such as:
[root@lkim3 software]# more /proc/partitions | grep emc
 120     0  419430400 emcpowera
 120    16    2097152 emcpowerb
 120    32  419430400 emcpowerc
 120    48  419430400 emcpowerd
 120    64  419430400 emcpowere
Use the powermt utility to check the mapping of the EMC pseudo device emcpowerc and its IO paths:
[root@lkim3 software]# powermt display dev=emcpowerc
Pseudo name=emcpowerc
CLARiiON ID=APM00083100777 [kim_proc]
Logical device ID=6006016013CB22009616938A8CF1DD11 [LUN 2]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B    Array failover mode: 1
==============================================================================
---------------- Host ---------------   - Stor -  -- I/O Path -  -- Stats ---
###  HW Path              I/O Paths     Interf.   Mode    State  Q-IOs Errors
==============================================================================
   2 lpfc                 sdc           SP B1     active  alive      0      0
   2 lpfc                 sdh           SP A0     active  alive      0      0
This shows that the EMC pseudo device emcpowerc maps to the logical device of LUN 2 through two IO paths: sdc and sdh. /dev/emcpowerc is the pseudo device name of the shared storage, which can be used for the OCR, a voting disk or an ASM diskgroup.

BLOCK DEVICES VS RAW DEVICES
In the Linux world, raw devices have played an important role in Oracle Clusterware and Oracle RAC, as Oracle can access unstructured data on block devices by binding them to character raw devices. Starting with Linux kernel 2.6 (RHEL5/OEL5), support for raw devices has been deprecated in favor of block devices. For example, Red Hat Enterprise Linux 5 no longer offers the raw devices service. So, as a long-term solution, one should consider moving away from raw devices to block devices.
While 11g clusterware fully supports building the OCR and voting disks on block devices, for Oracle 10gR2 clusterware the Oracle Universal Installer (OUI) does not allow one to build the OCR and voting disks directly on block devices. If one needs to configure Oracle 10g RAC on RHEL 5 or OEL 5, the options are:
A. Use udev rules to set up the raw device mappings and permissions.
B. Configure Oracle 11g clusterware using block devices and then install Oracle 10g RAC on top of the Oracle 11g clusterware, using block devices for the database files. This is an Oracle certified and recommended solution.
Steps to implement option A:
1. Establish the bindings between the block devices and raw devices by editing a mapping rule file /etc/udev/rules.d/65-oracleraw.rules to add the raw binding rules, for example:
   # cat /etc/udev/rules.d/65-oracleraw.rules
   ACTION=="add", KERNEL=="emcpowera1", RUN+="/bin/raw /dev/raw/raw1 %N"
   ACTION=="add", KERNEL=="emcpowerb1", RUN+="/bin/raw /dev/raw/raw2 %N"
   ACTION=="add", KERNEL=="emcpowerc1", RUN+="/bin/raw /dev/raw/raw3 %N"
   ACTION=="add", KERNEL=="emcpowerd1", RUN+="/bin/raw /dev/raw/raw4 %N"
   ACTION=="add", KERNEL=="emcpowere1", RUN+="/bin/raw /dev/raw/raw5 %N"
   Then create a mapping rule file to set up the raw device ownership and permissions, /etc/udev/rules.d/89-raw_permissions.rules:
   # OCR
   KERNEL=="raw1", OWNER="root", GROUP="oinstall", MODE="640"
   KERNEL=="raw2", OWNER="root", GROUP="oinstall", MODE="640"
   # Voting disks
   KERNEL=="raw3", OWNER="oracle", GROUP="oinstall", MODE="640"
   KERNEL=="raw4", OWNER="oracle", GROUP="oinstall", MODE="640"
   KERNEL=="raw5", OWNER="oracle", GROUP="oinstall", MODE="640"
2. Start udev to apply the rules (on RHEL5/OEL5 this can be done with /sbin/start_udev).

For 11g RAC, and for Oracle 10g RAC with option B, the recommended solution is to use block devices directly. The only required step is to set the proper permissions and ownership of the block devices for the OCR and voting disks, as well as the ASM disks, in the /etc/rc.local file. For example, one can add the following lines to set the proper ownership and permissions of the block devices for OCRs, voting disks and ASM diskgroups:
# cat /etc/rc.local
# OCR disks 11gR1
chown root:oinstall /dev/mapper/ocr*
chmod 0640 /dev/mapper/ocr*
# Voting disks 11gR1
chown oracle:oinstall /dev/mapper/voting*
chmod 0640 /dev/mapper/voting*
# for ASM diskgroups
chown oracle:dba /dev/mapper/data*
chmod 0660 /dev/mapper/data*
On Linux versions with a kernel older than 2.6, such as RHEL 4.x and OEL 4.x, raw devices can still be used for 10g as well as 11g clusterware. 11g clusterware can also use block devices directly for its OCRs and voting disks.

NETWORK CONFIGURATION FOR ORACLE CLUSTERWARE
Prior to the installation of Oracle Clusterware, one important requirement is a proper network configuration. The clusterware requires three network IP addresses on each node in the cluster: a public IP, a virtual IP (VIP) and a private interconnect IP.

PUBLIC IP AND VIRTUAL IP CONFIGURATION
The public IP address is the public host name for the node. The virtual IP (VIP) is the public virtual IP address used by clients to connect to the database instances on the node. The advantage of having a VIP is that when the node fails, Oracle Clusterware fails over the VIP associated with that node to another node. If clients connect to the database through the host name (public IP) and the node dies, the clients have to wait for a TCP/IP timeout, which can take as long as 10 minutes, before they receive a connection failure error message. However, if the client database connection uses the VIP, the connection is failed over to another node when the node fails.
The following example shows how the VIP is failed over in case of a node failure. The cluster consists of two nodes: kblade1 and kblade2. During normal operation, kblade1's VIP, the other nodeapps components and database instance 1 run on their own node kblade1:
[oracle@kblade2 ~]$ srvctl status nodeapps -n kblade1
VIP is running on node: kblade1
GSD is running on node: kblade1
Listener is running on node: kblade1
ONS daemon is running on node: kblade1
In the event of a kblade1 node failure, kblade1-vip is failed over to kblade2:
[oracle@kblade2 ~]$ srvctl status nodeapps -n kblade1
VIP is running on node: kblade2
GSD is not running on node: kblade1
Listener is not running on node: kblade1
ONS daemon is not running on node: kblade1
If we ping kblade1-vip at the moment the kblade1 node is shut down, we can see how kblade1-vip fails over to kblade2:
[kai_yu@db ~]$ ping kblade1-vip
PING 155.16.9.171 (155.16.9.171) 56(84) bytes of data.
From 155.16.0.1 icmp_seq=9 Destination Host Unreachable
From 155.16.0.1 icmp_seq=9 Destination Host Unreachable
…..   (waiting for 2 seconds before being failed over)
64 bytes from 155.16.9.171: icmp_seq=32 ttl=64 time=2257 ms
64 bytes from 155.16.9.171: icmp_seq=33 ttl=64 time=1258 ms
(at this time kblade1-vip 155.16.9.171 has been failed over from kblade1 to kblade2 successfully)
After restarting kblade1, kblade1-vip moves back to kblade1:
[oracle@kblade2 ~]$ srvctl status nodeapps -n kblade1
VIP is running on node: kblade1
GSD is running on node: kblade1
Listener is running on node: kblade1
ONS daemon is running on node: kblade1

PRIVATE INTERCONNECT CONFIGURATION
As the private interconnect among the cluster nodes plays a key role in the stability and performance of Oracle RAC and Oracle Clusterware, the following best practices are recommended for the private interconnect configuration.

Fully redundant Ethernet interconnects: One purpose of the private interconnect is to carry the network heartbeat among the cluster nodes. If a node cannot send the network heartbeat within the misscount time, the node is evicted from the cluster and rebooted. To ensure the high availability of the private interconnect network, it is recommended to implement fully redundant Ethernet interconnects, which include two NIC cards per node and two interconnect switches. Figure 5 shows how the NIC cards are connected to the interconnect switches.

Figure 5: Fully Redundant Private Interconnect Configuration

It is recommended to use dedicated, non-routable switches for the private interconnect. Crossover cables are not supported for RAC. In addition to the redundant physical connections at the hardware level, it is also recommended to implement NIC teaming or bonding at the OS software level. NIC teaming bonds two physical network interfaces together to operate under a single logical IP address. It provides failover capability: in case of the failure of one NIC or one switch, the network traffic is automatically routed to the remaining NIC card or switch to ensure uninterrupted communication. The following example shows the configuration of a NIC bonding interface in Linux:
Edit the network interface scripts in /etc/sysconfig/network-scripts:

ifcfg-eth1:                 ifcfg-eth2:                 ifcfg-bond0:
DEVICE=eth1                 DEVICE=eth2                 DEVICE=bond0
USERCTL=no                  USERCTL=no                  IPADDR=192.168.9.52
ONBOOT=yes                  ONBOOT=yes                  NETMASK=255.255.255.0
MASTER=bond0                MASTER=bond0                ONBOOT=yes
SLAVE=yes                   SLAVE=yes                   BOOTPROTO=none
BOOTPROTO=none              BOOTPROTO=none              USERCTL=no
TYPE=Ethernet               TYPE=Ethernet

Then add the bonding options in /etc/modprobe.conf:
alias bond0 bonding
options bonding miimon=100 mode=1
Enable the bonding by restarting the network service:
$ service network restart
If you check the network configuration with ifconfig, you will see a bonding configuration like this:
[root@k52950-3-n2 etc]# ifconfig
bond0   Link encap:Ethernet  HWaddr 00:18:8B:4E:F0:10
        inet addr:192.168.9.52  Bcast:192.168.9.255  Mask:255.255.255.0
        inet6 addr: fe80::218:8bff:fe4e:f010/64 Scope:Link
        UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
        RX packets:328424549 errors:0 dropped:0 overruns:0 frame:0
        TX packets:379844228 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:256108585069 (238.5 GiB)  TX bytes:338540870589 (315.2 GiB)
eth1    Link encap:Ethernet  HWaddr 00:18:8B:4E:F0:10
        inet6 addr: fe80::218:8bff:fe4e:f010/64 Scope:Link
        UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
        RX packets:172822208 errors:0 dropped:0 overruns:0 frame:0
        TX packets:189922022 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:132373201459 (123.2 GiB)  TX bytes:169285841045 (157.6 GiB)
        Interrupt:5 Memory:d6000000-d6012100
eth2    Link encap:Ethernet  HWaddr 00:18:8B:4E:F0:10
        inet6 addr: fe80::218:8bff:fe4e:f010/64 Scope:Link
        UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
        RX packets:155602341 errors:0 dropped:0 overruns:0 frame:0
        TX packets:189922206 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:123735383610 (115.2 GiB)  TX bytes:169255029544 (157.6 GiB)
        Base address:0xece0 Memory:d5ee0000-d5f00000

Interconnect configuration best practices
The following best practices are recommended by Oracle for the private interconnect configuration; refer to [7] for details.
•  Set UDP send/receive buffers to the maximum.
•  Use the same interconnect for both Oracle Clusterware and Oracle RAC communication.
•  NIC settings for the interconnect:
   a. Flow control: set rx=on, tx=off for the NIC cards; tx/rx flow control should be turned on for the switch(es).
   b. Ensure the NIC names/slot order is identical on all nodes.
   c. Configure the interconnect NICs on the fastest PCI bus.
   d. Set jumbo frames MTU=9000 by adding the entry MTU='9000' in the ifcfg-eth1/ifcfg-eth2 files, with the same setting on the switches.
•  Recommended kernel parameters for networking, such as:
   Oracle 11gR1                           Oracle 10gR2
   net.core.rmem_default = 4194304        net.core.rmem_default = 4194304
   net.core.rmem_max = 4194304            net.core.rmem_max = 4194304
   net.core.wmem_default = 262144         net.core.wmem_default = 262144
   net.core.wmem_max = 262144             net.core.wmem_max = 262144
•  Network heartbeat misscount configuration: 30 seconds for 11g, 60 seconds for 10g (default); it should not be changed without the help of Oracle support.
•  Hangcheck-timer setting: set the proper values for 10g/11g:
   modprobe hangcheck-timer hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1

MANAGING ORACLE CLUSTERWARE
After Oracle Clusterware is deployed, it is the DBA's responsibility to manage it and ensure it works properly so that the Oracle RAC database built on top of the clusterware keeps functioning normally. The management responsibilities mainly include the management of the two important components of the clusterware: the voting disk and the OCR.

MANAGE VOTING DISK
The management tasks for voting disks include backing up, recovering, adding and moving voting disks. To back up the voting disks, first stop the clusterware on all the nodes, then locate the voting disks with this command:
[root@kblade3 bin]# ./crsctl query css votedisk
0.  0  /dev/mapper/votingdisk1p1
1.  0  /dev/mapper/votingdisk2p1
Use the dd command to back up a voting disk to a file, for example:
[root@kblade3 ~]# dd if=/dev/mapper/votingdisk1p1 of=/root/votingdisk_backup bs=4096
In case we need to restore a voting disk from a backup, use the dd command again:
[root@kblade3 ~]# dd if=/root/votingdisk_backup of=/dev/mapper/votingdisk1p1 bs=4096
To add voting disk /dev/mapper/votingdisk3p1, stop the clusterware on all nodes and run:
crsctl add css votedisk /dev/mapper/votingdisk3p1 -force
To delete voting disk /dev/mapper/votingdisk2p1:
crsctl delete css votedisk /dev/mapper/votingdisk2p1 -force
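Backing up each voting disk individually is easy to script. The following is a minimal sketch, assuming the clusterware has already been stopped on all nodes and that a backup directory /root/vdbackup exists (both assumptions for illustration); it simply loops over the paths reported by crsctl query css votedisk and copies each one with dd:

# Back up every voting disk listed by CSS (run as root, clusterware stopped)
mkdir -p /root/vdbackup
for vd in $(crsctl query css votedisk | grep '/dev/' | awk '{print $3}'); do
    dd if=$vd of=/root/vdbackup/$(basename $vd).bak bs=4096
done

The grep/awk filtering assumes the three-column output format shown above; adjust it if your crsctl version prints the voting disk paths differently.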
To move a voting disk to a new location, first add the new voting disk, then remove the old voting disk.

MANAGE ORACLE CLUSTER REGISTRY
Oracle provides three tools to manage the OCR: OCRCONFIG, OCRDUMP and OCRCHECK.

ADDING, REMOVING OR REPLACING OCR
In [4], Oracle recommends allocating two copies of the OCR during the clusterware installation, which are mirrored to each other and managed automatically by Oracle Clusterware. If the mirror copy was not configured during the clusterware installation, run the following command as the root user to add the mirror copy after the installation:
ocrconfig -replace ocrmirror <path of file or disk>
for example:
ocrconfig -replace ocrmirror /dev/mapper/ocr2p1
To change the OCR location, first check the OCR status using ocrcheck:
[root@kblade3 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     296940
         Used space (kbytes)      :      15680
         Available space (kbytes) :     281260
         ID                       : 1168636743
         Device/File Name         : /dev/mapper/ocr1p1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/mapper/ocr2p1
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded
Then verify the clusterware status:
[root@kblade3 bin]# ./crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
Then run the following command as the root user:
ocrconfig -replace ocr /u01/ocr/ocr1
or replace the mirrored OCR copy:
ocrconfig -replace ocrmirror /u01/ocr/ocr2
Run ocrconfig -repair on a node if its clusterware is shut down while the OCR is being replaced. This command allows the node to rejoin the cluster after it is restarted.

OCR BACKUP AND RECOVERY
Oracle Clusterware automatically backs up the OCR every four hours, at the end of the day and at the end of the week in the default location $CRS_HOME/cdata/cluster_name, for example:
[root@kblade3 kblade_cluster]# /crs/oracle/product/11.1.0/crs/bin/ocrconfig -showbackup
kblade3  2009/03/06 00:18:00  /crs/oracle/product/11.1.0/crs/cdata/kblade_cluster/backup00.ocr
kblade3  2009/03/05 20:17:59  /crs/oracle/product/11.1.0/crs/cdata/kblade_cluster/backup01.ocr
kblade3  2009/03/05 16:17:59  /crs/oracle/product/11.1.0/crs/cdata/kblade_cluster/backup02.ocr
kblade2  2009/03/04 18:34:58  /crs/oracle/product/11.1.0/crs/cdata/kblade_cluster/day.ocr
kblade7  2009/02/24 16:24:28  /crs/oracle/product/11.1.0/crs/cdata/kblade_cluster/week.ocr
A manual backup can also be taken using the command:
ocrconfig -manualbackup
To restore the OCR from a backup file, first identify the backup using the ocrconfig -showbackup command and stop the clusterware on all the cluster nodes, then perform the restore with the restore command:
ocrconfig -restore file_name
After the restore, restart CRS and do an OCR integrity check using cluvfy comp ocr.

OCR EXPORT AND IMPORT
Oracle provides another mechanism, OCR export and import, for backing up and restoring the OCR. To perform an OCR export, execute the command:
ocrconfig -export /home/oracle/ocr_export
To import the OCR export back into the OCR, first stop CRS on all the cluster nodes and run the import command:
ocrconfig -import /home/oracle/ocr_export
After the import, start CRS and check the OCR integrity with the command: cluvfy comp ocr

CLONE ORACLE CLUSTERWARE
The Oracle Clusterware configuration can be cloned from one cluster to another cluster or from one server to another server. This is called the clusterware cloning process. It can be used to build a new cluster or simply to extend an existing cluster to an additional node. The following example shows how to use this cloning process to add an additional node to the cluster by cloning the clusterware configuration to the new node and then adding the new node to the clusterware configuration.
Task: An existing 11g clusterware configuration includes two nodes, k52950-3-n1 and k52950-3-n2; we want to add a third node, k52950-3-n3, to the cluster by using the clusterware cloning method.
Step 1: Complete all the prerequisite conditions. On the third node, install the OS (RHEL 5.2), configure access to the shared storage for the OCRs and voting disks of the clusterware, configure the public network, private network and VIP on the third node, and configure ssh among all three nodes.
Step 2: Copy the CRS home from the source node k52950-3-n1 to the new node k52950-3-n3:
•  Shut down CRS on the source node.
•  On the source node, copy the CRS home to a backup and remove all the log files and trace files from the backup.
•  Copy the CRS backup to the new node and create the directory /opt/oracle/oraInventory on the new node.
•  Set the ownership of the Oracle inventory on the new node: chown oracle:oinstall /opt/oracle/oraInventory
•  Run preupdate.sh from $CRS_HOME/install on the new node.
Step 3: Run the CRS clone process on the new node.
[Screen shots: running the CRS clone process on the new node]
Execute /crs/oracle/product/11.1.0/crs/root.sh as root on the new node k52950-3-n3 as instructed.
Step 4: Run addNode.sh on the source node k52950-3-n1, as shown in the following screen shots:
[Screen shots of the addNode.sh session]
Start CRS on node 1 k52950-3-n1 and execute /crs/oracle/product/11.1.0/crs/install/rootaddnode.sh.
Then execute /crs/oracle/product/11.1.0/crs/root.sh on the new node k52950-3-n3:
[Screen shots of these script runs]
Restart CRS on node 2 k52950-3-n2:
[root@k52950-3-n2 bin]# ./crsctl start crs
Attempting to start Oracle Clusterware stack
The CRS stack will be started shortly
At this point the new node k52950-3-n3 has been added to the clusterware configuration successfully.

ORACLE CLUSTERWARE TROUBLESHOOTING
SPLIT BRAIN CONDITION AND IO FENCING MECHANISM IN ORACLE CLUSTERWARE
Oracle Clusterware provides mechanisms to monitor the cluster operation and detect some potential issues with the cluster. One particular scenario that needs to be prevented is called the split brain condition. A split brain condition occurs when a single cluster node has a failure that results in the reconfiguration of the cluster into multiple partitions, with each partition forming its own sub-cluster without knowledge of the existence of the others. This would lead to collisions and corruption of shared data, as each sub-cluster assumes ownership of the shared data [1]. For a cluster database like an Oracle RAC database, data corruption is a serious issue that has to be prevented at all times. Oracle Clusterware's solution to the split brain condition is to provide IO
fencing: if a cluster node fails, Oracle Clusterware ensures that the failed node is fenced off from all IO operations on the shared storage. One IO fencing method is called STOMITH, which stands for Shoot The Other Machine In The Head. In this method, once a potential split brain condition is detected, Oracle Clusterware automatically picks a cluster node as a victim to reboot to avoid data corruption. This process is called node eviction. DBAs and system administrators need to understand how this IO fencing mechanism works and learn how to troubleshoot clusterware problems. When they experience a cluster node reboot event, they need to be able to analyze the events and identify the root cause of the clusterware failure.

Oracle Clusterware uses two Cluster Synchronization Service (CSS) heartbeats, the network heartbeat (NHB) and the disk heartbeat (DHB), and two CSS misscount values associated with these heartbeats to detect potential split brain conditions. The network heartbeat crosses the private interconnect to establish and confirm valid node membership in the cluster. The disk heartbeat is between the cluster node and the voting disk on the shared storage. Each heartbeat has its own maximal value in seconds, called the CSS misscount, within which the heartbeat must be completed; otherwise a node eviction is triggered.

The CSS misscount for the network heartbeat has the following default values, depending on the version of Oracle Clusterware and the operating system:

   OS          10g (R1 & R2)    11g
   Linux            60           30
   Unix             30           30
   VMS              30           30
   Windows          30           30
Table 1: Network heartbeat CSS misscount values for 10g/11g clusterware

The CSS misscount for the disk heartbeat also varies with the version of Oracle Clusterware. For Oracle 10.2.1 and up, the default value is 200 seconds. Refer to [2] for details.

NODE EVICTION DIAGNOSIS CASE STUDY
When a node eviction occurs, Oracle Clusterware usually records error messages in various log files. These log files provide the evidence and the starting points for DBAs and system administrators to troubleshoot. The following case study illustrates a troubleshooting process based on a node eviction which occurred in an 11-node 10g RAC production database. The symptom was that node 7 of that cluster was automatically rebooted around 11:15 am. The troubleshooting started with examining the syslog file /var/log/messages, which showed the following error messages:
   Jul 23 11:15:23 racdb7 logger: Oracle clsomon failed with fatal status 12.
   Jul 23 11:15:23 racdb7 logger: Oracle CSSD failure 134.
   Jul 23 11:15:23 racdb7 logger: Oracle CRS failure.  Rebooting for cluster integrity.
Then the OCSSD log file at $CRS_HOME/log/<hostname>/cssd/ocssd.log was examined; it contained the following error messages, which showed that node 7's network heartbeat did not complete within the 60-second CSS misscount and triggered a node eviction event:
   [    CSSD]2008-07-23 11:14:49.150 [1199618400] >WARNING: clssnmPollingThread: node racdb7 (7) at 50% heartbeat fatal, eviction in 29.720 seconds
   ..
   clssnmPollingThread: node racdb7 (7) at 90% heartbeat fatal, eviction in 0.550 seconds
   …
   [    CSSD]2008-07-23 11:15:19.079 [1220598112] >TRACE: clssnmDoSyncUpdate: Terminating node 7, racdb7, misstime(60200) state(3)
This node eviction occurred only intermittently, about once every other week. The DBAs were not able to recreate the node eviction event, but they noticed that right before the node eviction, some private IP addresses were not pingable from other nodes. This clearly linked the root cause of the node eviction to the stability of the private interconnect. After working with network engineers, we identified that the network instability was related to a configuration in which both the public network and the private interconnect shared a single physical Cisco switch. The recommended solution was to configure separate switches dedicated to the private interconnect, as shown in figure 5. After implementing the recommendation, no more node evictions occurred.

CRS REBOOT TROUBLESHOOTING PROCEDURE
Besides the node evictions caused by the failure of the network heartbeat or disk heartbeat, other events may also cause a CRS node reboot. Oracle Clusterware provides several processes to monitor the operation of the clusterware. When certain conditions occur, to protect data integrity these monitoring processes may automatically kill the clusterware or even reboot the node, and leave some critical error messages in their log files. The following lists the roles of these clusterware processes in a server reboot and where their logs are located.
Three of the clusterware processes, OCSSD, OPROCD and OCLSOMON, can initiate a CRS reboot when they run into certain errors:
OCSSD (the CSS daemon) monitors inter-node health, such as the interconnect and the membership of the cluster nodes. Its log file is located at $CRS_HOME/log/<host>/cssd/ocssd.log
OPROCD (Oracle Process Monitor Daemon), introduced in 10.2.0.4, detects hardware and driver freezes that would result in node eviction, and then kills the node to prevent any IO from accessing the shared disk. Its log file is /etc/oracle/oprocd/<hostname>.oprocd.log
OCLSOMON monitors the CSS daemon for hangs or scheduling issues. It may reboot the node if it sees a potential hang. Its log file is $CRS_HOME/log/<host>/cssd/oclsomon/oclsmon.log
One of the most important log files is the syslog file; on Linux, the syslog file is /var/log/messages.
The CRS reboot troubleshooting procedure starts with reviewing the various log files to identify which of the three processes above contributed to the node reboot, and then isolates the root cause of that process reboot. The troubleshooting tree in figure 6 illustrates the CRS reboot troubleshooting flowchart. For further detailed troubleshooting information, refer to [3].

RAC DIAGNOSTIC TOOLS
Oracle provides several diagnostic tools to help troubleshoot CRS reboot events such as node evictions.
One of the tools is diagwait. It is very important to have the proper information in the log files for the diagnostic process. This information usually gets written into the logs right before the node eviction. However, at the time a node is evicted, the machine can be so heavily loaded that the OS does not get a chance to write all the necessary error messages into the log files before the node is evicted and rebooted. Diagwait is designed to delay the node reboot for a short time so that the OS can write all the necessary diagnostic data into the logs, while not increasing the probability of data corruption. To set up diagwait, perform the following steps as the root user:
1. Shut down CRS on all the cluster nodes by running the crsctl stop crs command on each node.
2. On one node, set diagwait by running the command: crsctl set css diagwait 13 -force (this sets a 13-second wait).
3. Restart the clusterware by running the command crsctl start crs on all the nodes.
To unset diagwait, just run the command: crsctl unset css diagwait
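Before and after changing a CSS setting such as diagwait, it is worth confirming what the cluster is actually using. A minimal check, assuming the 10g/11g crsctl syntax for reading CSS parameters (verify it against your clusterware version), is:

# Show the current CSS misscount and diagwait values (run from $CRS_HOME/bin)
./crsctl get css misscount
./crsctl get css diagwait
# Compare the reported misscount with the defaults in Table 1 before
# concluding that a heartbeat-related eviction fired prematurely.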
The second tool is the Oracle problem detection tool (IPD/OS), crfpack.zip. This tool is used to analyze OS and clusterware resource degradation and failures related to Oracle Clusterware and Oracle RAC issues, as it continuously tracks OS resource consumption and monitors cluster-wide operations. It can run in real-time mode as well as in historical mode. In real-time mode, it sends an alert if certain conditions are reached. Historical mode allows administrators to go back to the time right before a failure such as a node eviction occurred and play back the system resources and other data collected at that time. This tool works on Linux x86 with a kernel greater than 2.6.9. It can be downloaded at http://www.oracle.com/technology/products/database/clustering/index.html or http://download.oracle.com/otndocs/products/clustering/ipd/crfpack.zip
RAC-DDT and OSWatcher are two diagnostic tools that help collect information from each node leading up to the time of the reboot, using OS utilities such as netstat, iostat and vmstat. Refer to Metalink note 301138.1 for RAC-DDT and Metalink note 301137.1 for OSWatcher.
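When none of these tools is installed, even a crude sampling loop around the same OS utilities can preserve the minutes leading up to a reboot. The sketch below is a simplified stand-in for what RAC-DDT and OSWatcher collect, not part of either tool; the output directory and the 30-second interval are arbitrary choices:

# Sample CPU, IO and network statistics every 30 seconds into /var/log/osstats
mkdir -p /var/log/osstats
while true; do
    ts=$(date +%Y%m%d_%H%M%S)
    { vmstat 1 3; iostat -x 1 3; netstat -s; } > /var/log/osstats/$ts.log 2>&1
    sleep 30
done

Old samples should be purged periodically so that the collection itself does not fill the filesystem.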
Figure 6: CRS Reboot Troubleshooting Flowchart
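As a practical aid when walking the flowchart, the log files named in the procedure above can be collected in one pass right after a reboot, before they are recycled. This is a minimal sketch; the CRS_HOME value and the destination directory are assumptions to adapt to your environment:

# Gather the logs consulted in the CRS reboot flowchart into one directory
CRS_HOME=/crs/oracle/product/11.1.0/crs          # assumed clusterware home
HOST=$(hostname -s)
DEST=/tmp/crs_reboot_logs_$(date +%Y%m%d_%H%M%S)
mkdir -p $DEST
cp /var/log/messages* $DEST 2>/dev/null                              # syslog
cp $CRS_HOME/log/$HOST/cssd/ocssd.log $DEST 2>/dev/null              # OCSSD
cp /etc/oracle/oprocd/$HOST.oprocd.log $DEST 2>/dev/null             # OPROCD
cp $CRS_HOME/log/$HOST/cssd/oclsomon/oclsmon.log $DEST 2>/dev/null   # OCLSOMON
echo "Logs copied to $DEST"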
CONCLUSION
Oracle Clusterware is a critical component of the Oracle RAC database and the Oracle Grid infrastructure. Oracle Clusterware is built on a complex hardware technology stack which includes server hardware, HBAs/network cards, network switches, storage switches and storage. It also utilizes software components such as the OS and multipathing drivers. The best practices for configuring, managing and troubleshooting each of these components play a key role in the stability of an Oracle RAC and Oracle Grid system. It is in DBAs' and system engineers' best interest to understand and apply these best practices in their Oracle RAC environments.

SPEAKER'S BACKGROUND
Mr. Kai Yu is a senior system consultant in the Oracle Solutions Engineering team at Dell Inc., specializing in Oracle Grid Computing, Oracle RAC and Oracle E-Business Suite. He has been working with Oracle technology since 1995 and has worked as an Oracle DBA and Oracle Applications DBA in Dell IT and at other Fortune 500 companies as well as start-up companies. He is active in the IOUG as an IOUG Collaborate 09 committee member and was elected as the US event chair for the IOUG RAC SIG. He has authored many articles and presentations for Dell Power Solutions magazine, Oracle OpenWorld 06/07/08, Collaborate Technology Forums and the Oracle RAC SIG webcast. Kai has an M.S. in Computer Science from the University of Wyoming. Kai can be reached at (512) 725-0046 or by email at kai_yu@dell.com.

REFERENCES
[1] Oracle 10g Grid & Real Application Clusters: Oracle 10g Grid Computing with RAC, Mike Ault, Madhu Tumma
[2] Oracle Metalink Note 294430.1, CSS Timeout Computation in Oracle Clusterware
[3] Oracle Metalink Note 265769.1, Troubleshooting CRS Reboots
[4] Oracle Clusterware Administration and Deployment Guide 11g Release 1 (11.1), September 2007
[5] Deploying Oracle Database 11g R1 Enterprise Edition Real Application Clusters with Red Hat Enterprise Linux 5.1 and Oracle Enterprise Linux 5.1 on Dell PowerEdge Servers, Dell/EMC Storage, April 2008, Kai Yu, http://www.dell.com/downloads/global/solutions/11gr1_ee_rac_on_rhel5_1__and_OEL.pdf?c=us&cs=555&l=en&s=biz
[6] Dell | Oracle Supported Configurations: Oracle Enterprise Linux 5.2 Oracle 11g Enterprise Edition Deployment Guide, http://www.dell.com/content/topics/global.aspx/alliances/en/oracle_11g_oracle_ent_linux_4_1?c=us&cs=555&l=en&s=biz
[7] Oracle Real Application Clusters Internals, Oracle OpenWorld 2008 presentation 298713, Barb Lundhild
[8] Looking Under the Hood at Oracle Clusterware, Oracle OpenWorld 2008 presentation 299963, Murali Vallath
