Adding a Node to 11.2.0.2 Grid Infrastructure

  • 1. Adding a Node to 11.2.0.2 Grid Infrastructure
by Maclean Liu
liu.maclean@gmail.com
www.oracledatabase12g.com
  • 2. About Me
Email: liu.maclean@gmail.com
Blog: www.oracledatabase12g.com
Oracle Certified Database Administrator Master, 10g and 11g
Over 6 years of experience with Oracle DBA technology
Over 7 years of experience with Linux technology
Member, Independent Oracle Users Group
Member, All China Users Group
Presents on advanced Oracle topics: RAC, Data Guard, Performance Tuning and Oracle Internals
  • 3. In an earlier article I described the steps for adding a node to a 10g RAC cluster. In 11gR2, Oracle CRS was upgraded to Grid Infrastructure (GI), which makes it easier to manage CRS resources such as VIPs and ASM, but it also means that adding a node to an 11.2 GI cluster differs considerably from 10gR2. Here is an outline of the key points of ADD NODE in 11.2 GI.

I. Preparation

The preparation work must not be skipped. The prerequisites I listed for adding a node to a 10g RAC cluster still apply to 11.2 GI, but note the following:

1. Configure user equivalence not only for the oracle user but also for the grid user (the GI installation owner), unless you install both GI and the RDBMS as oracle, which is not recommended.

2. 11.2 GI introduces the octssd (Oracle Cluster Time Synchronization Service daemon) time synchronization service. If you plan to use octssd, it is advisable to disable the ntpd service, as follows:

# service ntpd stop
Shutting down ntpd: [ OK ]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.orig
# rm /var/run/ntpd.pid

3. Use the cluster verify utility to confirm that the new node meets the cluster's requirements:

cluvfy stage -pre nodeadd -n <NEW NODE>

For example:

su - grid
[grid@vrh1 ~]$ cluvfy stage -pre nodeadd -n vrh3
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "vrh1"
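The ntpd teardown above can be wrapped in a small pre-flight check. This is only a sketch of my own (the `ntp_leftovers` helper is not an Oracle tool); it inspects a directory you point it at, so it can be exercised against a scratch directory rather than the live /etc and /var/run:

```shell
#!/bin/sh
# Report any NTP leftovers that would keep ctssd from running in active mode:
# an ntp.conf still in place, or a stale ntpd pid file.
# Usage: ntp_leftovers <etc_dir> <run_dir>
ntp_leftovers() {
    etc_dir=$1; run_dir=$2; found=0
    [ -f "$etc_dir/ntp.conf" ] && { echo "leftover: $etc_dir/ntp.conf"; found=1; }
    [ -f "$run_dir/ntpd.pid" ] && { echo "leftover: $run_dir/ntpd.pid"; found=1; }
    [ "$found" -eq 0 ] && echo "clean: no NTP artifacts found"
    return $found
}
```

On a real node you would call `ntp_leftovers /etc /var/run` after `service ntpd stop` and `chkconfig ntpd off`, before running the cluvfy pre-check.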
  • 4. Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Node connectivity check passed
Checking CRS integrity...
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
The location "/g01/11.2.0/grid" is not shared but is present/creatable on all nodes
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
Node connectivity check passed
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "vrh3:/tmp"
Free disk space check passed for "vrh1:/tmp"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Run level check passed
Hard limits check failed for "maximum open file descriptors"
Check failed on nodes: vrh3
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
  • 5. Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81 (x86_64)"
Package existence check passed for "binutils-2.17.50.0.6 (x86_64)"
Package existence check passed for "gcc-4.1.2 (x86_64)"
Package existence check passed for "libaio-0.3.106 (x86_64)"
Package existence check passed for "glibc-2.5-24 (x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125 (x86_64)"
Package existence check passed for "glibc-common-2.5 (x86_64)"
Package existence check passed for "glibc-devel-2.5 (x86_64)"
Package existence check passed for "glibc-headers-2.5 (x86_64)"
Package existence check passed for "gcc-c++-4.1.2 (x86_64)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)"
Package existence check passed for "libgcc-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)"
Package existence check passed for "sysstat-7.0.2 (x86_64)"
Package existence check passed for "ksh-20060214 (x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: vrh3
File "/etc/resolv.conf" is not consistent across nodes
Pre-check for node addition was unsuccessful on all the nodes.

Generally speaking, if we are not using DNS for name resolution, the resolv.conf inconsistency can be ignored. In silent install mode, however, it may prevent the operation from completing; this is covered below.
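The resolv.conf check above essentially compares the domain and search entries of /etc/resolv.conf across nodes. As a rough local illustration of that comparison (my own helper, not part of cluvfy), the logic looks like this:

```shell
#!/bin/sh
# Compare the "domain" and "search" entries of two resolv.conf copies,
# mimicking the consistency portion of cluvfy's /etc/resolv.conf check.
# Usage: resolv_consistent <file_a> <file_b>
resolv_consistent() {
    a=$1; b=$2
    for key in domain search; do
        # Collect the value(s) for this key from each file.
        va=$(awk -v k="$key" '$1 == k {$1=""; print}' "$a")
        vb=$(awk -v k="$key" '$1 == k {$1=""; print}' "$b")
        if [ "$va" != "$vb" ]; then
            echo "inconsistent: $key"
            return 1
        fi
    done
    echo consistent
}
```

On a cluster the real check also measures DNS response time (the PRVF-5636 message above); this sketch only covers the entry comparison.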
  • 6. II. Adding the new node to GI

Note that addNode.sh, the key script for adding a node to 11.2.0.2 GI, may be affected by a bug. According to the official documentation, to add a node in Interactive Mode you only need to run addNode.sh and the OUI will start; in practice this is not the case.

The documentation says:
Go to CRS_home/oui/bin and run the addNode.sh script on one of the existing nodes.
Oracle Universal Installer runs in add node mode and the Welcome page displays.
Click Next and the Specify Cluster Nodes for Node Addition page displays.

What we actually did: addNode.sh must be run as the GI owner (usually the grid user), on a node where GI is already running:

[grid@vrh1 ~]$ cd $ORA_CRS_HOME/oui/bin
[grid@vrh1 bin]$ ./addNode.sh
ERROR: Value for CLUSTER_NEW_NODES not specified.
USAGE:
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl {-pre|-post}
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre [-silent] CLUSTER_NEW_NODES={}
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre [-silent] CLUSTER_NEW_NODES={} CLUSTER_NEW_VIRTUAL_HOSTNAMES={}
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre [-silent] -responseFile
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -post [-silent]

Our intention was to add the node through the graphical, interactive OUI (runInstaller -addnode), yet addNode.sh demands parameters, and the check_nodeadd.pl script it calls runs in silent mode. A search through MOS and Google shows that virtually every document recommends adding nodes in silent mode, so we had no choice but to fall back to a silent addition. Silent mode actually requires only a few parameters, which may be one reason it is so widely recommended. The syntax is:

./addNode.sh -silent "CLUSTER_NEW_NODES={node2}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={node2-priv}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}"

In our example the concrete command is:

./addNode.sh -silent "CLUSTER_NEW_NODES={vrh3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}"
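Because the silent-mode quoting is easy to get wrong, it can help to assemble the three arguments in a tiny wrapper and eyeball them first. This is only a sketch (the `addnode_args` name is mine); it merely prints the argument list that is passed to ./addNode.sh:

```shell
#!/bin/sh
# Build the three CLUSTER_NEW_* arguments for a silent addNode.sh run.
# Usage: addnode_args <node> <node-vip> <node-priv>
addnode_args() {
    node=$1; vip=$2; priv=$3
    printf '%s %s %s\n' \
        "\"CLUSTER_NEW_NODES={$node}\"" \
        "\"CLUSTER_NEW_VIRTUAL_HOSTNAMES={$vip}\"" \
        "\"CLUSTER_NEW_PRIVATE_NODE_NAMES={$priv}\""
}

addnode_args vrh3 vrh3-vip vrh3-priv
# prints: "CLUSTER_NEW_NODES={vrh3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}"
```

You could then run `./addNode.sh -silent $(addnode_args vrh3 vrh3-vip vrh3-priv)` from $ORA_CRS_HOME/oui/bin, or simply paste the printed arguments onto the command line.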
  • 7. Because the command above runs in silent mode it produces no window output (in fact it writes to the /tmp/silentInstall.log log file). Dropping the -silent parameter:

./addNode.sh "CLUSTER_NEW_NODES={vrh3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}"
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "vrh1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Node connectivity check passed
Checking CRS integrity...
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
The location "/g01/11.2.0/grid" is not shared but is present/creatable on all nodes
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
Node connectivity check passed
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "vrh3:/tmp"
Free disk space check passed for "vrh1:/tmp"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Run level check passed
Hard limits check failed for "maximum open file descriptors"
Check failed on nodes: vrh3
Soft limits check passed for "maximum open file descriptors"
  • 8. Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81 (x86_64)"
Package existence check passed for "binutils-2.17.50.0.6 (x86_64)"
Package existence check passed for "gcc-4.1.2 (x86_64)"
Package existence check passed for "libaio-0.3.106 (x86_64)"
Package existence check passed for "glibc-2.5-24 (x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125 (x86_64)"
Package existence check passed for "glibc-common-2.5 (x86_64)"
Package existence check passed for "glibc-devel-2.5 (x86_64)"
Package existence check passed for "glibc-headers-2.5 (x86_64)"
Package existence check passed for "gcc-c++-4.1.2 (x86_64)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)"
Package existence check passed for "libgcc-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)"
Package existence check passed for "sysstat-7.0.2 (x86_64)"
Package existence check passed for "ksh-20060214 (x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: vrh3
  • 9. File "/etc/resolv.conf" is not consistent across nodes
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Pre-check for node addition was unsuccessful on all the nodes.

Before actually adding the node, addNode.sh also calls the cluvfy utility to verify that the new node qualifies; if it does not, the script refuses to go any further. Since we have already verified the new node, we can safely skip addNode.sh's own validation. Take a look at the contents of the addNode.sh script:

[grid@vrh1 bin]$ cat addNode.sh
#!/bin/sh
OHOME=/g01/11.2.0/grid
INVPTRLOC=$OHOME/oraInst.loc
ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]
then
        $ADDNODE
else
        CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre $*"
        $CHECK_NODEADD
        if [ $? -eq 0 ]
        then
                $ADDNODE
        fi
fi

As you can see, an IGNORE_PREADDNODE_CHECKS environment variable controls whether the pre-add checks run. Set it manually, then run addNode.sh again:

export IGNORE_PREADDNODE_CHECKS=Y
./addNode.sh "CLUSTER_NEW_NODES={vrh3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}" > add_node.log 2>&1

In another window you can monitor the progress of the node addition through the log:

tail -f add_node.log
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 5951 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.
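The control flow of addNode.sh shown above reduces to a simple gate on IGNORE_PREADDNODE_CHECKS. A stripped-down restatement of that branching (with echo placeholders instead of the real runInstaller and check_nodeadd.pl invocations, so it can run anywhere) makes the decision visible:

```shell
#!/bin/sh
# Same branching as addNode.sh: skip the cluvfy pre-check when
# IGNORE_PREADDNODE_CHECKS=Y, otherwise run the pre-check first and
# only proceed to runInstaller -addNode if it passes.
decide_addnode() {
    if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" ]; then
        echo "runInstaller -addNode (pre-checks skipped)"
    else
        echo "check_nodeadd.pl -pre, then runInstaller -addNode on success"
    fi
}
```

The real script has one extra escape hatch this sketch omits: it also skips the pre-check when check_nodeadd.pl does not exist in the GI home.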
  • 10. Performing tests to see whether nodes vrh2,vrh3 are available
............................................................... 100% Done.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /g01/11.2.0/grid
   New Nodes
Space Requirements
   New Nodes
      vrh3
         /: Required 6.66GB : Available 32.40GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.2.0
      Sun JDK 1.5.0.24.08
      Installer SDK Component 11.2.0.2.0
      Oracle One-Off Patch Installer 11.2.0.0.2
      Oracle Universal Installer 11.2.0.2.0
      Oracle USM Deconfiguration 11.2.0.2.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.3
      Oracle DBCA Deconfiguration 11.2.0.2.0
      Oracle RAC Deconfiguration 11.2.0.2.0
      Oracle Quality of Service Management (Server) 11.2.0.2.0
      Installation Plugin Files 11.2.0.2.0
      Universal Storage Manager Files 11.2.0.2.0
      Oracle Text Required Support Files 11.2.0.2.0
      Automatic Storage Management Assistant 11.2.0.2.0
      Oracle Database 11g Multimedia Files 11.2.0.2.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.2.0
      Oracle Globalization Support 11.2.0.2.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.2.0
      Oracle Core Required Support Files 11.2.0.2.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.2.0
      Oracle Quality of Service Management (Client) 11.2.0.2.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.2.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.2.0
      Oracle JDBC/OCI Instant Client 11.2.0.2.0
      Oracle Multimedia Client Option 11.2.0.2.0
      LDAP Required Support Files 11.2.0.2.0
      Character Set Migration Utility 11.2.0.2.0
      Perl Interpreter 5.10.0.0.1
      PL/SQL Embedded Gateway 11.2.0.2.0
      OLAP SQL Scripts 11.2.0.2.0
      Database SQL Scripts 11.2.0.2.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
  • 11. SSL Required Support Files for InstantClient 11.2.0.2.0
      SQL*Plus Files for Instant Client 11.2.0.2.0
      Oracle Net Required Support Files 11.2.0.2.0
      Oracle Database User Interface 2.2.13.0.0
      RDBMS Required Support Files for Instant Client 11.2.0.2.0
      RDBMS Required Support Files Runtime 11.2.0.2.0
      XML Parser for Java 11.2.0.2.0
      Oracle Security Developer Tools 11.2.0.2.0
      Oracle Wallet Manager 11.2.0.2.0
      Enterprise Manager plugin Common Files 11.2.0.2.0
      Platform Required Support Files 11.2.0.2.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      RDBMS Required Support Files 11.2.0.2.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle Help For Java 4.2.9.0.0
      Enterprise Manager Common Files 10.2.0.4.3
      Deinstallation Tool 11.2.0.2.0
      Oracle Java Client 11.2.0.2.0
      Cluster Verification Utility Files 11.2.0.2.0
      Oracle Notification Service (eONS) 11.2.0.2.0
      Oracle LDAP administration 11.2.0.2.0
      Cluster Verification Utility Common Files 11.2.0.2.0
      Oracle Clusterware RDBMS Files 11.2.0.2.0
      Oracle Locale Builder 11.2.0.2.0
      Oracle Globalization Support 11.2.0.2.0
      Buildtools Common Files 11.2.0.2.0
      Oracle RAC Required Support Files-HAS 11.2.0.2.0
      SQL*Plus Required Support Files 11.2.0.2.0
      XDK Required Support Files 11.2.0.2.0
      Agent Required Support Files 10.2.0.4.3
      Parser Generator Required Support Files 11.2.0.2.0
      Precompiler Required Support Files 11.2.0.2.0
      Installation Common Files 11.2.0.2.0
      Required Support Files 11.2.0.2.0
      Oracle JDBC/THIN Interfaces 11.2.0.2.0
      Oracle Multimedia Locator 11.2.0.2.0
      Oracle Multimedia 11.2.0.2.0
      HAS Common Files 11.2.0.2.0
      Assistant Common Files 11.2.0.2.0
      PL/SQL 11.2.0.2.0
      HAS Files for DB 11.2.0.2.0
      Oracle Recovery Manager 11.2.0.2.0
      Oracle Database Utilities 11.2.0.2.0
      Oracle Notification Service 11.2.0.2.0
      SQL*Plus 11.2.0.2.0
      Oracle Netca Client 11.2.0.2.0
      Oracle Net 11.2.0.2.0
      Oracle JVM 11.2.0.2.0
      Oracle Internet Directory Client 11.2.0.2.0
  • 12. Oracle Net Listener 11.2.0.2.0
      Cluster Ready Services Files 11.2.0.2.0
      Oracle Database 11g 11.2.0.2.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Monday, August 15, 2011 10:15:35 PM CST)
.                                                                 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Monday, August 15, 2011 10:15:38 PM CST)
...............................................................................................                                 96% Done.
Home copied to new nodes
Saving inventory on nodes (Monday, August 15, 2011 10:21:02 PM CST)
.                                                               100% Done.
Save inventory complete
WARNING:
A new inventory has been created on one or more nodes in this session.
However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at
/g01/oraInventory/orainstRoot.sh
with root privileges on nodes vrh3.
If you do not register the inventory, you may not be able to update or
patch the products you installed.
The following configuration scripts need to be executed as the "root" user in
each cluster node.
/g01/oraInventory/orainstRoot.sh #On nodes vrh3
/g01/11.2.0/grid/root.sh #On nodes vrh3
To execute the configuration scripts:
  1. Open a terminal window
  2. Log in as "root"
  3. Run the scripts in each cluster node
The Cluster Node Addition of /g01/11.2.0/grid was successful.
Please check /tmp/silentInstall.log for more details.

The GI software installation above succeeded. Next we still need to run two critical scripts on the newly added node; do not forget this step! Both orainstRoot.sh and root.sh must be run as root:

su - root
[root@vrh3]# cat /etc/oraInst.loc
inventory_loc=/g01/oraInventory    -- this is the location of oraInventory
inst_group=asmadmin
[root@vrh3 ~]# cd /g01/oraInventory
[root@vrh3 oraInventory]# ./orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /g01/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /g01/oraInventory to asmadmin.
The execution of the script is complete.

Then run the root.sh script under CRS_HOME; it may emit warnings, but they are harmless:

[root@vrh3 ~]# cd $ORA_CRS_HOME
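orainstRoot.sh relies on /etc/oraInst.loc pointing at the right inventory, so it is worth pulling the fields out before running it. A quick way to do that (my own one-liner wrapped in a function, not an Oracle tool):

```shell
#!/bin/sh
# Extract a value from an oraInst.loc-style key=value file.
# Usage: orainst_field <file> <key>
orainst_field() {
    # Print only the part after "key=" on the matching line.
    sed -n "s/^$2=//p" "$1"
}
```

On the new node, `orainst_field /etc/oraInst.loc inventory_loc` should print the inventory path (here /g01/oraInventory), and `orainst_field /etc/oraInst.loc inst_group` the owning group.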
  • 13. [root@vrh3 g01]# /g01/11.2.0/grid/root.sh
Running Oracle 11g root script...
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /g01/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /g01/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user root, privgrp root..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node vrh1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user root, privgrp root..
Operation successful.
/g01/11.2.0/grid/bin/srvctl start listener -n vrh3 ... failed
Failed to perform new node configuration at /g01/11.2.0/grid/crs/install/crsconfig_lib.pm line 8255.
/g01/11.2.0/grid/perl/bin/perl -I/g01/11.2.0/grid/perl/lib -I/g01/11.2.0/grid/crs/install /g01/11.2.0/grid/crs/install/rootcrs.pl execution failed

Two minor errors appear above:

1. The LISTENER failed to start on the new node. This can be ignored: the RDBMS_HOME has not been installed yet, but CRS still tries to start the associated listener.

[root@vrh3 g01]# /g01/11.2.0/grid/bin/srvctl start listener -n vrh3
PRCR-1013 : Failed to start resource ora.CRS_LISTENER.lsnr
PRCR-1064 : Failed to start resource ora.CRS_LISTENER.lsnr on node vrh3
CRS-5010: Update of configuration file
  • 14. "/s01/orabase/product/11.2.0/dbhome_1/network/admin/listener.ora" failed: details at "(:CLSN00014:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-5013: Agent "/g01/11.2.0/grid/bin/oraagent.bin" failed to start process "/s01/orabase/product/11.2.0/dbhome_1/bin/lsnrctl" for action "check": details at "(:CLSN00008:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-2674: Start of ora.CRS_LISTENER.lsnr on vrh3 failed
CRS-5013: Agent "/g01/11.2.0/grid/bin/oraagent.bin" failed to start process "/s01/orabase/product/11.2.0/dbhome_1/bin/lsnrctl" for action "clean": details at "(:CLSN00008:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-5013: Agent "/g01/11.2.0/grid/bin/oraagent.bin" failed to start process "/s01/orabase/product/11.2.0/dbhome_1/bin/lsnrctl" for action "check": details at "(:CLSN00008:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-2678: ora.CRS_LISTENER.lsnr on vrh3 has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
PRCC-1015 : LISTENER was already running on vrh3
PRCR-1004 : Resource ora.LISTENER.lsnr is already running

2. If the rootcrs.pl script fails, rerunning it once is usually enough:

[root@vrh3 bin]# /g01/11.2.0/grid/perl/bin/perl -I/g01/11.2.0/grid/perl/lib -I/g01/11.2.0/grid/crs/install /g01/11.2.0/grid/crs/install/rootcrs.pl
Using configuration parameter file: /g01/11.2.0/grid/crs/install/crsconfig_params
PRKO-2190 : VIP exists for node vrh3, VIP name vrh3-vip
PRKO-2420 : VIP is already started on node(s): vrh3
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

3. It is advisable to restart CRS on the new node and use cluvfy to verify that the node addition completed cleanly:

[root@vrh3 ~]# crsctl stop crs
[root@vrh3 ~]# crsctl start crs
[root@vrh3 ~]# su - grid
[grid@vrh3 ~]$ cluvfy stage -post nodeadd -n vrh1,vrh2,vrh3
Performing post-checks for node addition
Checking node reachability...
Node reachability check passed from node "vrh1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
  • 15. Node connectivity check passed
Checking cluster integrity...
Cluster integrity check passed
Checking CRS integrity...
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
The location "/g01/11.2.0/grid" is not shared but is present/creatable on all nodes
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
Node connectivity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of GSD node application (optional)
GSD node application is offline on nodes "vrh3,vrh2,vrh1"
Checking existence of ONS node application (optional)
ONS node application check passed
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "vrh.cluster.oracle.com"...
ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name "vrh.cluster.oracle.com"
ERROR:
PRVF-4657 : Name resolution setup check for "vrh.cluster.oracle.com" (IP address: 192.168.1.190) failed
ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name "vrh.cluster.oracle.com"
  • 16. Verification of SCAN VIP and Listener setup failed
User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Post-check for node addition was successful.

© 2011, www.oracledatabase12g.com. All rights reserved. This article may be reposted, but only with a link identifying the original source; otherwise legal action may be pursued.