Oracle Database 11g Release 2 RAC On Linux Using NFS

This article describes the installation of Oracle Database 11g Release 2 (11.2 64-bit) RAC on Linux (Oracle Enterprise Linux 5.4 64-bit) using NFS to provide the shared storage.

- Introduction
- Download Software
- Operating System Installation
- Oracle Installation Prerequisites
- Create Shared Disks
- Install the Grid Infrastructure
- Install the Database
- Check the Status of the RAC
- Direct NFS Client

Introduction

NFS is an abbreviation of Network File System, a platform-independent technology created by Sun Microsystems that allows shared access to files stored on computers via an interface called the Virtual File System (VFS) that runs on top of TCP/IP. Computers that share files are considered NFS servers, while those that access shared files are considered NFS clients. An individual computer can be an NFS server, an NFS client or both.

We can use NFS to provide shared storage for a RAC installation. In a production environment we would expect the NFS server to be a NAS, but for testing it can just as easily be another server, or even one of the RAC nodes itself.

To cut costs, this article uses one of the RAC nodes as the source of the shared storage. Obviously, this means that if that node goes down the whole database is lost, so it's not sensible to do this if you are testing high availability. If you have access to a NAS or a third server you can easily use that for the shared storage, making the whole solution much more resilient. Whichever route you take, the fundamentals of the installation are the same.

The Single Client Access Name (SCAN) should really be defined in the DNS or GNS and round-robin between 3 addresses, which are on the same subnet as the public and virtual IPs. In this article I've defined it as a single IP address in the "/etc/hosts" file, which is wrong and will cause the cluster verification to fail, but it allows me to complete the install without the presence of a DNS.
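For reference, defining the SCAN properly in the DNS would look something like the following BIND zone file fragment. This is only a sketch: the two extra addresses (192.168.2.202 and 192.168.2.203) are assumed values on the same subnet, not part of this article's configuration.

; Illustrative only: three A records for the SCAN name.
; The DNS serves these round-robin, spreading client
; connections across the SCAN listeners.
rac-scan.localdomain.    IN    A    192.168.2.201
rac-scan.localdomain.    IN    A    192.168.2.202
rac-scan.localdomain.    IN    A    192.168.2.203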
Download Software

Download the following software.

- Oracle Enterprise Linux 5.4
- Oracle 11g Release 2 (11.2) Clusterware and Database software

Operating System Installation

This article uses Oracle Enterprise Linux 5.4. A general pictorial guide to the operating system installation can be found here. More specifically, it should be a server installation with a minimum of 2G swap (preferably 3-4G), with the firewall and secure Linux disabled. Oracle recommend a default server installation, but if you perform a custom installation include the following package groups:

- GNOME Desktop Environment
- Editors
- Graphical Internet
- Text-based Internet
- Development Libraries
- Development Tools
- Server Configuration Tools
- Administration Tools
- Base
- System Tools
- X Window System

To be consistent with the rest of the article, the following information should be set during the installation:

RAC1:

hostname: rac1.localdomain
IP Address eth0: 192.168.2.101 (public address)
Default Gateway eth0: 192.168.2.1 (public address)
IP Address eth1: 192.168.0.101 (private address)
Default Gateway eth1: none

RAC2:

hostname: rac2.localdomain
IP Address eth0: 192.168.2.102 (public address)
Default Gateway eth0: 192.168.2.1 (public address)
IP Address eth1: 192.168.0.102 (private address)
Default Gateway eth1: none

You are free to change the IP addresses to suit your network, but remember to stay consistent with those adjustments throughout the rest of the article.
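It's worth confirming that both interfaces came up with the expected addresses on each node, for example:

# Confirm the public and private interfaces are up with the right addresses.
ifconfig eth0 | grep "inet addr"
ifconfig eth1 | grep "inet addr"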
Once the basic installation is complete, install the following packages whilst logged in as the root user. This includes the 64-bit and 32-bit versions of some packages.

# From Enterprise Linux 5 DVD
cd /media/cdrom/Server
rpm -Uvh binutils-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh make-3.*
rpm -Uvh sysstat-7.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject

Oracle Installation Prerequisites

Perform the following steps whilst logged into the RAC1 virtual machine as the root user.

Make sure the shared memory filesystem is big enough for Automatic Memory Management to work.

# umount tmpfs
# mount -t tmpfs shmfs -o size=1500m /dev/shm

Make the setting permanent by amending the "tmpfs" setting of the "/etc/fstab" file to look like this.

tmpfs /dev/shm tmpfs size=1500m 0 0
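A quick way to confirm the resize took effect:

# Check the size of the shared memory filesystem.
df -h /dev/shm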
If you are not using DNS, the "/etc/hosts" file must contain the following information.

127.0.0.1       localhost.localdomain   localhost
# Public
192.168.2.101   rac1.localdomain        rac1
192.168.2.102   rac2.localdomain        rac2
# Private
192.168.0.101   rac1-priv.localdomain   rac1-priv
192.168.0.102   rac2-priv.localdomain   rac2-priv
# Virtual
192.168.2.111   rac1-vip.localdomain    rac1-vip
192.168.2.112   rac2-vip.localdomain    rac2-vip
# SCAN
192.168.2.201   rac-scan.localdomain    rac-scan
# NAS
192.168.2.101   nas1.localdomain        nas1

Note. The SCAN address should not really be defined in the hosts file. Instead it should be defined in the DNS to round-robin between 3 addresses on the same subnet as the public IPs. For this installation, we will compromise and use the hosts file. If you are using DNS, then only the first line should be present in the "/etc/hosts" file. The other entries are defined in the DNS, as described here. Also, the NAS1 entry is actually pointing to the RAC1 node. If you are using a real NAS or a third server to provide your shared storage, put the correct IP address into the file.

Add or amend the following lines in the "/etc/sysctl.conf" file.

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576

Run the following command to change the current kernel parameters.

/sbin/sysctl -p

Add the following lines to the "/etc/security/limits.conf" file.

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

Add the following line to the "/etc/pam.d/login" file, if it does not already exist.

session required pam_limits.so

Disable secure Linux by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.

SELINUX=disabled

Alternatively, this alteration can be done using the GUI tool (System > Administration > Security Level and Firewall). Click on the SELinux tab and disable the feature.

Either configure NTP, or make sure it is not configured so the Oracle Cluster Time Synchronization Service (ctssd) can synchronize the times of the RAC nodes. In this case we will deconfigure NTP.

# service ntpd stop
Shutting down ntpd:                                        [  OK  ]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.org
# rm /var/run/ntpd.pid

If you are using NTP, you must add the "-x" option into the following line in the "/etc/sysconfig/ntpd" file.

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

Then restart NTP.

# service ntpd restart
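If you took the deconfiguration route, you can confirm NTP is fully disabled before moving on, since ctssd only runs in active mode when no NTP configuration is found.

# Confirm the daemon is stopped and will not start on boot.
service ntpd status
chkconfig --list ntpd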
Start the Name Service Cache Daemon (nscd).

chkconfig --level 35 nscd on
service nscd start

Create the new groups and users.

groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g oinstall -G dba oracle
passwd oracle

Login as the oracle user and add the following lines at the end of the ".bash_profile" file.

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=rac1.localdomain; export ORACLE_HOSTNAME
ORACLE_UNQNAME=rac; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=rac1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi

Remember to amend the ORACLE_SID and ORACLE_HOSTNAME on each server.

Create Shared Disks

First we need to set up some NFS shares. In this case we will do this on the RAC1 node, but you can do it on a NAS or a third server if you have one available. On the RAC1 node create the following directories.

mkdir /shared_config
mkdir /shared_grid
mkdir /shared_home
mkdir /shared_data

Add the following lines to the "/etc/exports" file.

/shared_config *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_grid   *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_home   *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_data   *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
Run the following commands to export the NFS shares and make sure the NFS service starts on boot.

chkconfig nfs on
service nfs restart

On both RAC1 and RAC2 create the directories in which the Oracle software will be installed.

mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
mkdir -p /u01/oradata
mkdir -p /u01/shared_config
chown -R oracle:oinstall /u01/app /u01/app/oracle /u01/oradata /u01/shared_config
chmod -R 775 /u01/app /u01/app/oracle /u01/oradata /u01/shared_config

Add the following lines to the "/etc/fstab" file.

nas1:/shared_config /u01/shared_config                  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas1:/shared_grid   /u01/app/11.2.0/grid                nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas1:/shared_home   /u01/app/oracle/product/11.2.0/db_1 nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas1:/shared_data   /u01/oradata                        nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0

Mount the NFS shares on both servers.

mount /u01/shared_config
mount /u01/app/11.2.0/grid
mount /u01/app/oracle/product/11.2.0/db_1
mount /u01/oradata

Make sure the permissions on the shared directories are correct.

chown -R oracle:oinstall /u01/shared_config
chown -R oracle:oinstall /u01/app/11.2.0/grid
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/oradata
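Before continuing, it's worth sanity-checking the exports and mounts, for example:

# On the NFS server (RAC1), list the exported directories.
showmount -e nas1

# On each node, confirm all four NFS mounts are present.
mount | grep nfs
df -h /u01/shared_config /u01/app/11.2.0/grid /u01/app/oracle/product/11.2.0/db_1 /u01/oradata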
    • Select the "Advanced Installation" option, then click the "Next" button.Select the the required language support, then click the "Next" button.
Enter the cluster information and uncheck the "Configure GNS" option, then click the "Next" button.

On the "Specify Node Information" screen, click the "Add" button.
Enter the details of the second node in the cluster, then click the "OK" button.

Click the "SSH Connectivity..." button and enter the password for the "oracle" user. Click the "Setup" button to configure SSH connectivity, and the "Test" button to test it once it is complete. Click the "Next" button.
Check the public and private networks are specified correctly, then click the "Next" button.

Select the "Shared File System" option, then click the "Next" button.
Select the required level of redundancy and enter the OCR File Location(s), then click the "Next" button.

Select the required level of redundancy and enter the Voting Disk File Location(s), then click the "Next" button.
Accept the default failure isolation support by clicking the "Next" button.

Select the preferred OS groups for each option, then click the "Next" button. Click the "Yes" button on the subsequent message dialog.
    • Enter "/u01/app/oracle" as the Oracle Base and "/u01/app/11.2.0/grid" as the software location,then click the "Next" button.Accept the default inventory directory by clicking the "Next" button.
Wait while the prerequisite checks complete. If you have any issues, either fix them or check the "Ignore All" checkbox and click the "Next" button. If there are no issues, you will move directly to the summary screen. If you are happy with the summary information, click the "Finish" button.
Wait while the setup takes place.

When prompted, run the configuration scripts on each node.
    • The output from the "orainstRoot.sh" file should look something like that listed below.# cd /u01/app/oraInventory# ./orainstRoot.shChanging permissions of /u01/app/oraInventory.Adding read,write permissions for group.Removing read,write,execute permissions for world.Changing groupname of /u01/app/oraInventory to oinstall.The execution of the script is complete.#The output of the root.sh will vary a little depending on the node it is run on. Example output canbe seen here (Node1, Node2).Once the scripts have completed, return to the "Execute Configuration Scripts" screen on RAC1and click the "OK" button.Wait for the configuration assistants to complete.
We expect the verification phase to fail with an error relating to the SCAN, assuming you are not using DNS.

INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking name resolution setup for "rac-scan.localdomain"...
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain"
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.2.201) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain"
INFO: Verification of SCAN VIP and Listener setup failed

Provided this is the only error, it is safe to ignore it and continue by clicking the "Next" button.

Click the "Close" button to exit the installer.

The grid infrastructure installation is now complete.
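At this point you can optionally confirm that the clusterware is up on both nodes. A quick check, assuming the grid home location used above:

# Check the health of the cluster stack on all nodes.
/u01/app/11.2.0/grid/bin/crsctl check cluster -all

# List the cluster nodes.
/u01/app/11.2.0/grid/bin/olsnodes -n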
Install the Database

Start all the RAC nodes, login to RAC1 as the oracle user and start the Oracle installer.

./runInstaller

Uncheck the security updates checkbox and click the "Next" button.
    • Accept the "Create and configure a database" option by clicking the "Next" button.Accept the "Server Class" option by clicking the "Next" button.
Make sure both nodes are selected, then click the "Next" button.

Accept the "Typical install" option by clicking the "Next" button.
    • Enter "/u01/app/oracle/product/11.2.0/db_1" for the software location. The storage type shouldbe set to "File System" with the file location set to "/u01/oradata". Enter the appropriatepasswords and database name, in this case "RAC.localdomain".Wait for the prerequisite check to complete. If there are any problems either fix them, or check
    • the "Ignore All" checkbox and click the "Next" button.If you are happy with the summary information, click the "Finish" button.Wait while the installation takes place.
Once the software installation is complete, the Database Configuration Assistant (DBCA) will start automatically.

Once the Database Configuration Assistant (DBCA) has finished, click the "OK" button.
When prompted, run the configuration scripts on each node. When the scripts have been run on each node, click the "OK" button.

Click the "Close" button to exit the installer.
The RAC database creation is now complete.

Check the Status of the RAC

There are several ways to check the status of the RAC. The srvctl utility shows the current configuration and status of the RAC database.

$ srvctl config database -d rac
Database unique name: rac
Database name: rac
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: /u01/oradata/rac/spfilerac.ora
Domain: localdomain
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: rac
Database instances: rac1,rac2
Disk Groups:
Services:
Database is administrator managed
$

$ srvctl status database -d rac
Instance rac1 is running on node rac1
Instance rac2 is running on node rac2
$
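You should also be able to connect through the SCAN from a client. A minimal sketch using EZConnect, assuming the default service name matches the database name chosen earlier and substituting the SYSTEM password you set during the installation:

# Connect via the SCAN name on the default listener port.
sqlplus system/<password>@rac-scan.localdomain:1521/RAC.localdomain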
The V$ACTIVE_INSTANCES view can also display the current status of the instances.

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sat Sep 26 19:04:19 2009

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> SELECT inst_name FROM v$active_instances;

INST_NAME
--------------------------------------------------------------------------------
rac1.localdomain:rac1
rac2.localdomain:rac2

SQL>

If you have configured Enterprise Manager, it can be used to view the configuration and current status of the database using a URL like "https://rac1.localdomain:1158/em".
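For a cluster-wide view of all clusterware-managed resources (VIPs, listeners, database instances and so on), crsctl can be used too, again assuming the grid home location used above:

# Tabular status of every clusterware resource on every node.
/u01/app/11.2.0/grid/bin/crsctl stat res -t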
Direct NFS Client

For improved NFS performance, Oracle recommend using the Direct NFS Client shipped with Oracle 11g. The Direct NFS Client looks for NFS details in the following locations:

1. $ORACLE_HOME/dbs/oranfstab
2. /etc/oranfstab
3. /etc/mtab

Since we already have our NFS mount point details in the "/etc/fstab" file, and therefore the "/etc/mtab" file also, there is no need to configure any extra connection details.

For the client to work we need to switch the libodm11.so library for the libnfsodm11.so library, as shown below.
srvctl stop database -d rac

cd $ORACLE_HOME/lib
mv libodm11.so libodm11.so_stub
ln -s libnfsodm11.so libodm11.so

srvctl start database -d rac

With the configuration complete, you can see the Direct NFS Client usage via the following views:

- v$dnfs_servers
- v$dnfs_files
- v$dnfs_channels
- v$dnfs_stats

For example:

SQL> SELECT svrname, dirname FROM v$dnfs_servers;

SVRNAME        DIRNAME
-------------  -----------------
nas1           /shared_data

SQL>

The Direct NFS Client supports direct I/O and asynchronous I/O by default.
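Although it is not needed for this installation, an "oranfstab" entry describing the storage above would look something like the following sketch, using Oracle's documented server/path/export/mount format. The server name, path and mount point simply mirror this article's configuration.

# Sketch of $ORACLE_HOME/dbs/oranfstab for the setup above.
server: nas1
path: 192.168.2.101
export: /shared_data mount: /u01/oradata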