RAC - Test



    1. 1. RAC Best Practices on Linux Kirk McGowan Technical Director – RAC Pack Server Technologies Oracle Corporation Session id: 40136 Roland Knapp Principal Member Technical Staff – RAC Pack Server Technologies Oracle Corporation
    2. 2. Agenda <ul><li>Planning Best Practices </li></ul><ul><ul><li>Architecture </li></ul></ul><ul><ul><li>Expectation setting </li></ul></ul><ul><ul><li>Objectives and success criteria </li></ul></ul><ul><ul><li>Project plan </li></ul></ul><ul><li>Implementation Best Practices </li></ul><ul><ul><li>Infrastructure considerations </li></ul></ul><ul><ul><li>Installation/configuration </li></ul></ul><ul><ul><li>Database creation </li></ul></ul><ul><ul><li>Application considerations </li></ul></ul><ul><li>Operational Best Practices </li></ul><ul><ul><li>Backup & Recovery </li></ul></ul><ul><ul><li>Performance Monitoring and Tuning </li></ul></ul><ul><ul><li>Production Migration </li></ul></ul>
    3. 3. Planning <ul><li>Understand the Architecture </li></ul><ul><ul><li>Cluster terminology </li></ul></ul><ul><ul><li>Functional basics </li></ul></ul><ul><ul><ul><li>HA by eliminating node & Oracle as SPOFs </li></ul></ul></ul><ul><ul><ul><li>Scalability by making additional processing capacity available incrementally </li></ul></ul></ul><ul><ul><li>Hardware components </li></ul></ul><ul><ul><ul><li>Private interconnect/network switch </li></ul></ul></ul><ul><ul><ul><li>Shared storage/concurrent access/storage switch </li></ul></ul></ul><ul><ul><li>Software components </li></ul></ul><ul><ul><ul><li>OS, Cluster Manager, DBMS/RAC, Application </li></ul></ul></ul><ul><ul><ul><li>Differences between cluster managers </li></ul></ul></ul>
    4. 4. RAC Hardware Architecture Clustered Database Servers Mirrored Disk Subsystem High Speed Switch or Interconnect Hub or Switch Fabric Network Centralized Management Console Storage Area Network Low Latency Interconnect ie. VIA or Proprietary Users No Single Point Of Failure Shared Cache
    5. 5. RAC Software Architecture Shared Disk Database Shared Data Model Shared Memory/Global Area shared SQL log buffer . . . . . . Shared Memory/Global Area shared SQL log buffer Shared Memory/Global Area shared SQL log buffer Shared Memory/Global Area shared SQL log buffer GES&GCS GES&GCS GES&GCS GES&GCS
    6. 6. RAC on Linux HW & SW Components public network Node1a shared storage redo log instance 1 … redo log instance 2 … control files database files Node2a cluster interconnect cache to cache N3 N4 Nn concurrent access from every node = “scale out” more nodes = higher availability Unbreakable Linux Unbreakable Linux ORACM ORACM Oracle 9 i RAC instance 1 Oracle 9 i RAC instance 2 DB cache DB cache
    7. 7. Linux Cluster Hardware <ul><li>Cluster interconnects </li></ul><ul><ul><li>FastEthernet, Gigabit Ethernet </li></ul></ul><ul><li>Public networks </li></ul><ul><ul><li>Ethernet, FastEthernet, Gigabit Ethernet </li></ul></ul><ul><li>Memory, swap & CPU Recommendations </li></ul><ul><ul><li>Each server should have a minimum of 512Mb of memory, at least 1Gb swap space, and two CPUs. </li></ul></ul><ul><li>Fiber Channel, SCSI, or NAS storage connectivity </li></ul>
    8. 8. Unbreakable Linux Distributions <ul><li>Red Hat Enterprise Linux AS and ES </li></ul><ul><li>United Linux 1.0 </li></ul><ul><ul><li>SuSE Linux Enterprise Server 8 (SuSE Linux AG) </li></ul></ul><ul><ul><li>Conectiva Linux Enterprise Edition (Conectiva S.A.) </li></ul></ul><ul><ul><li>SCO Linux Server 4.0 (The SCO Group) </li></ul></ul><ul><ul><li>Turbolinux Enterprise Server 8 (Turbolinux) </li></ul></ul><ul><li>Oracle will support Oracle products running with other distributions but will not support the operating system. </li></ul>
    9. 9. RAC Certification for Unbreakable Linux <ul><li>Certification </li></ul><ul><ul><li>Enterprise class OS distribution (e.g. RH AS, United Linux 1.0) </li></ul></ul><ul><ul><li>Clusterware (Oracle Cluster Manager only) </li></ul></ul><ul><ul><li>Network Attached Storage (e.g. Network Appliance filers) </li></ul></ul><ul><ul><li>Most SCSI and SAN storage are compatible </li></ul></ul><ul><ul><li>32 bit and 64 bit Itanium 2 Intel based servers are certified. </li></ul></ul><ul><li>For more details on software certification: http://technet.oracle.com/support/metalink/content.html </li></ul><ul><li>Discuss hardware configuration with your HW vendor </li></ul>
    10. 10. Linux IA64 requirements <ul><li>Operating System Requirements </li></ul><ul><ul><li>Red Hat Linux Advanced Server 2.1 operating system with kernel 2.4.18-e.14.ia64.rpm </li></ul></ul><ul><ul><li>glibc 2.2.4-29 </li></ul></ul><ul><ul><li>Gnu gcc 2.96.0 release </li></ul></ul><ul><ul><li>Linux Header Patch 2.4.18 (available from Intel) </li></ul></ul><ul><ul><li>asynch libraries libaio-0.3.92-1 </li></ul></ul><ul><ul><li>(Oracle9i Release Notes Release 2 ( for Linux Intel on Itanium (64-bit) Part No. B10567-02 ) </li></ul></ul>
    11. 11. Set Expectations Appropriately <ul><li>If your application will scale transparently on SMP, then it is realistic to expect it to scale well on RAC, without having to make any changes to the application code. </li></ul><ul><li>RAC eliminates the database instance, and the node itself, as a single point of failure, and ensures database integrity in the case of such failures </li></ul>
    12. 12. Planning: Define Objectives <ul><li>Objectives need to be quantified/measurable </li></ul><ul><ul><li>HA objectives </li></ul></ul><ul><ul><ul><li>Planned vs unplanned </li></ul></ul></ul><ul><ul><ul><li>Technology failures vs site failures vs human errors </li></ul></ul></ul><ul><ul><li>Scalability Objectives </li></ul></ul><ul><ul><ul><li>Speedup vs scaleup </li></ul></ul></ul><ul><ul><ul><li>Response time, throughput, other measurements </li></ul></ul></ul><ul><ul><li>Server/Consolidation Objectives </li></ul></ul><ul><ul><ul><li>Often tied to TCO </li></ul></ul></ul><ul><ul><ul><li>Often subjective </li></ul></ul></ul>
    13. 13. Build your Project Plan <ul><li>Partner with your vendors </li></ul><ul><ul><li>Multiple stakeholders, shared success </li></ul></ul><ul><li>Build detailed test plans </li></ul><ul><ul><li>Confirm application scalability on SMP before going to RAC → optimize first for single instance </li></ul></ul><ul><li>Address knowledge gaps and training </li></ul><ul><ul><li>Clusters, RAC, HA, Scalability, systems management </li></ul></ul><ul><ul><li>Leverage external resources as required </li></ul></ul><ul><li>Establish strict System and Application Change control </li></ul><ul><ul><li>Apply changes to one system element at a time </li></ul></ul><ul><ul><li>Apply changes first to the test environment </li></ul></ul><ul><ul><li>Monitor impact of application changes on underlying system components </li></ul></ul><ul><li>Define Support mechanisms and escalation procedures </li></ul>
    14. 14. Agenda <ul><li>Planning Best Practices </li></ul><ul><ul><li>Architecture </li></ul></ul><ul><ul><li>Expectation setting </li></ul></ul><ul><ul><li>Objectives and success criteria </li></ul></ul><ul><ul><li>Project plan </li></ul></ul><ul><li>Implementation Best Practices </li></ul><ul><ul><li>Infrastructure considerations </li></ul></ul><ul><ul><li>Installation/configuration </li></ul></ul><ul><ul><li>Database creation </li></ul></ul><ul><ul><li>Application considerations </li></ul></ul><ul><li>Operational Best Practices </li></ul><ul><ul><li>Backup & Recovery </li></ul></ul><ul><ul><li>Performance Monitoring and Tuning </li></ul></ul><ul><ul><li>Production Migration </li></ul></ul>
    15. 15. Infrastructure Considerations <ul><li>Architecture/Design </li></ul><ul><ul><li>Eliminate SPOFs (Single Points of Failure) </li></ul></ul><ul><ul><li>Workload Distribution (load balancing) strategy </li></ul></ul><ul><ul><li>Systems management framework for monitoring and managing to SLAs </li></ul></ul><ul><li>Hardware/Software </li></ul><ul><ul><li>Processing nodes – sufficient CPU to accommodate failure </li></ul></ul><ul><ul><li>Scalable I/O Subsystem </li></ul></ul><ul><ul><ul><li>Use S.A.M.E. </li></ul></ul></ul><ul><ul><li>Private Interconnect </li></ul></ul><ul><ul><ul><li>Gige, UDP, switched </li></ul></ul></ul><ul><ul><li>Patch levels and certification </li></ul></ul>
    16. 16. Implementation Flowchart Configure HW Configure private interconnect Install Unbreakable Linux Install cluster manager Install Oracle Install cluster manager Install Oracle Create database Configure storage and install OCFS
    17. 17. Installation Flowchart for Red Hat Linux AS 2.1 Boot Choose Language Select Keyboard & Mouse Choose – Advanced Server Option Use DRUID for Partition Setup Select Boot Loader Configure Network Configure Timezone Account Configuration Select Graphic Mode Boot Floppy Creation Installation Complete / Reboot
    18. 18. Install tips for Red Hat Linux AS 2.1 <ul><li>As documented in: </li></ul><ul><ul><li>“Tips and Techniques: Install and Configure Oracle9i on Red Hat Linux Advanced Server” by Deepak Patel, Oracle http://otn.oracle.com/tech/linux/pdf/installtips_final.pdf </li></ul></ul><ul><li>Boot options </li></ul><ul><ul><li>Always use the Advanced Server install. Install required packages as needed. CDs 1 to 3 have all rpm packages, CDs 3 and 4 have source packages, and CD 5 includes docs. </li></ul></ul><ul><li>Memory </li></ul><ul><ul><li>The smp or enterprise kernel is installed based on the machine’s physical memory ( <= 4 GB: smp kernel, > 4 GB: enterprise kernel ) </li></ul></ul><ul><li>Post Installation </li></ul><ul><ul><li>Add users, configure the network and perform other administrative tasks after installation. </li></ul></ul>
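The memory-based kernel choice above is a simple threshold rule; a minimal shell sketch of it, where `mem_gb` is a hypothetical stand-in for the detected physical memory rather than a value from this deck:

```shell
mem_gb=4    # hypothetical physical memory in GB (stand-in for a probe of /proc/meminfo)

# <= 4 GB gets the smp kernel, > 4 GB gets the enterprise kernel
if [ "$mem_gb" -le 4 ]; then
  kernel_flavor=smp
else
  kernel_flavor=enterprise
fi
echo "$kernel_flavor"
```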
    19. 19. Install tips for United Linux 1.0 <ul><li>You must install the latest UnitedLinux kernel update! Oracle was certified against an update kernel; the original UL-1.0 kernel is NOT certified! </li></ul><ul><li>After installing United Linux 1.0, install Service Pack 2a from: </li></ul><ul><ul><li>ftp://suse.us.oracle.com/pub/suse/i386/unitedlinux-1.0-iso/ </li></ul></ul><ul><li>You will also need the basic development tools installed, like make, gcc_old(2.95.3), and the binutils package. </li></ul><ul><li>Full installation instructions: ftp://ftp.suse.com/pub/suse/i386/supplementary/commercial/Oracle/docs/920_sles8_install.pdf </li></ul>
    20. 20. Install tips for United Linux 1.0 <ul><li>Install the orarun.rpm package from either the SP2 CD </li></ul><ul><ul><li><mountpoint>/UnitedLinux/i586/orarun-1.8-18.i586.rpm </li></ul></ul><ul><ul><li>or from </li></ul></ul><ul><ul><li>ftp://ftp.suse.com/pub/suse/i386/supplementary/commercial/Oracle/sles-8/orarun.rpm </li></ul></ul><ul><li>orarun.rpm </li></ul><ul><ul><ul><li>updates kernel parameters (i.e. shmmax, shmmin) </li></ul></ul></ul><ul><ul><ul><li>sets UDP buffer sizes (256K) </li></ul></ul></ul><ul><ul><ul><li>installs and configures the hangcheck-timer </li></ul></ul></ul>
    21. 21. Prepare Linux Environment <ul><li>Follow these steps on EACH node of the cluster </li></ul><ul><ul><li>Set Kernel parameters in /etc/sysctl.conf </li></ul></ul><ul><ul><li>Add hostnames to /etc/hosts file </li></ul></ul><ul><ul><li>Establish file system or location for ORACLE_HOME (writable for oracle userid) </li></ul></ul><ul><ul><li>Setup host equivalence for oracle userid (.rhosts) </li></ul></ul>
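The kernel-parameter step above can be sketched as a sysctl.conf fragment. The values below are illustrative assumptions based on common Oracle9i/Linux guidance, not numbers from this deck; tune them to your RAM and workload:

```shell
# Write an illustrative kernel-parameter fragment (values are assumptions,
# not from this deck: shmmax = 2 GB shared memory segment, standard semaphores).
cat > sysctl-oracle.conf <<'EOF'
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
EOF

# On a real node this fragment would be merged into /etc/sysctl.conf
# on EACH cluster node and applied with: sysctl -p
lines=$(wc -l < sysctl-oracle.conf)
echo "$lines"
```

Keeping the fragment in one file per node makes it easy to diff across the cluster, which matters since every node must be prepared identically.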
    22. 22. Installation Flowchart for OCFS Download the latest OCFS rpm’s from www.ocfs.org Install the rpm’s on all nodes Run ocfstool as root (configures /etc/ocfs.conf) on all nodes Run load_ocfs (insmod will load ocfs.o) on all nodes Create partition on the primary node Run ocfstool to format and mount your new filesystem Mount the new filesystem on all nodes Edit rc.local or equivalent to add load_ocfs and ‘mount -t ocfs <device> <mountpoint>’
    23. 23. OCFS and Unbreakable Linux <ul><li>Red Hat </li></ul><ul><ul><li>currently ships 4 flavors of the AS 2.1 kernel, viz., UP, SMP, Enterprise and Summit (IBM x440) </li></ul></ul><ul><ul><li>Oracle provides a separate OCFS module for each of the kernel flavors </li></ul></ul><ul><ul><li>Minor revisions of the kernel do not need a fresh build of ocfs </li></ul></ul><ul><ul><li>e.g., ocfs built for e.12 will work for e.16, e.18, etc. </li></ul></ul><ul><li>United Linux </li></ul><ul><ul><li>United Linux ships 3 flavors of its kernel: the 2.4.19-64GB-SMP, the 2.4.19-4GB and the 2.4.19-4GB-SMP kernels </li></ul></ul><ul><ul><li>OCFS 1.0.9 is supported on UL 1.0 Service Pack 2a or higher </li></ul></ul><ul><ul><li>OCFS build is not currently upward compatible with the kernel (pre SP3) → must ensure an OCFS build exists for each new kernel version prior to upgrading the kernel </li></ul></ul>
    24. 24. OCFS and RAC <ul><li>Maintains cache coherency across nodes for the filesystem metadata only </li></ul><ul><ul><li>Does not synchronize the data cache buffers across nodes, lets RAC handle that </li></ul></ul><ul><li>OCFS journals filesystem metadata changes only </li></ul><ul><ul><li>Filedata changes are journalled by RAC (log files) </li></ul></ul><ul><li>Overcomes some limitations of raw devices on Linux </li></ul><ul><ul><li>No limit on number of files </li></ul></ul><ul><ul><li>Allows for very large files (max 2TB) </li></ul></ul><ul><ul><li>Max volume size 32G (4K block) to 8T (1M block) </li></ul></ul><ul><li>Oracle DB performance is comparable to raw </li></ul><ul><li>kernel e.25 is strongly recommended for use with OCFS 1.0.9 (remove old kernel tuning parameters) </li></ul>
    25. 25. Install Tips for OCFS <ul><li>Ensure OCFS rpm corresponds to kernel version </li></ul><ul><ul><li>uname -r (e.g. 2.4.19-4GB) </li></ul></ul><ul><li>Remember to also download rpm’s for OCFS “Support Tools” and “Additional Tools” </li></ul><ul><li>Download the dd/tar/cp rpm that supports o_direct </li></ul><ul><li>Use rpm -Uv to install all 4 rpm’s on all nodes </li></ul><ul><li>Use OCFS for Oracle DB files only, not Oracle binaries (OCFS 1.0.x was not designed as a general purpose filesystem). </li></ul>
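The rpm-to-kernel match above amounts to checking that the package name embeds the running kernel version; a small sketch of that check, where both the kernel string and the package file name are hypothetical stand-ins for `uname -r` output and a downloaded rpm:

```shell
kernel="2.4.19-4GB"                           # stand-in for $(uname -r)
rpm_name="ocfs-2.4.19-4GB-1.0.9-12.i586.rpm"  # hypothetical package file name

# The OCFS rpm is built per kernel flavor, so its name must contain
# the exact kernel version string reported by uname -r.
case "$rpm_name" in
  *"$kernel"*) result=match ;;
  *)           result=mismatch ;;
esac
echo "$result"
```

Running this on each node before `rpm -Uv` catches the common mistake of installing an OCFS module built for a different kernel flavor.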
    26. 26. Installation Flowchart for oracm and Oracle Install the oracm from the CD-ROM Configure ocmargs.ora and cmcfg.ora Load the softdog module and start the cluster manager with ./ocmstart.sh on both nodes Install software with the RAC option Kill the oracm and watchdog processes, modify ocmargs.ora and cmcfg.ora (remove watchdog) Load the hangcheck-timer module with insmod (verify with lsmod) Install the oracm from the patchset Start the cluster manager with ./ocmstart.sh Install the patchset Configure private interconnect and quorum device Fix empty directory bug
    27. 27. Hangcheck NM, and CM Flow (After V9.2.0.2) Oracle Instance Cluster Manager (including Node Monitor) Hangcheck-timer User-mode Kernel-mode Oracm maintains both the node status view and the instance status view. The hangcheck-timer monitors the kernel for hangs, and resets the node if needed.
    28. 28. Post Installation <ul><li>To enable asynchronous I/O, Oracle must be re-linked to use skgaioi.o </li></ul><ul><li>Adjust UDP send / receive buffer size to 256K </li></ul><ul><li>Larger Buffer Cache </li></ul><ul><ul><li>Create an in-memory file system on /dev/shm (mount -t shm shmfs -o size=8g /dev/shm) </li></ul></ul><ul><ul><li>To enable the extended buffer cache feature, set the init.ora parameter USE_INDIRECT_DATA_BUFFERS = true </li></ul></ul><ul><li>Increasing Address Space </li></ul><ul><ul><li>By default, Oracle has 1.7 GB of address space for its SGA. </li></ul></ul><ul><ul><li>See Metalink Note: 200266.1 for details and a sample program. </li></ul></ul>
    29. 29. Create RAC database using DBCA <ul><li>Create Database </li></ul><ul><li>Use DBCA to simplify DB creation </li></ul><ul><ul><li>Start gsd ( global services daemon ) on all nodes, if it is not already running. </li></ul></ul><ul><li>Set MAXINSTANCES, MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES (auto with DBCA) </li></ul><ul><li>Create tablespaces as locally Managed (auto with DBCA) </li></ul><ul><li>Create all tablespaces with ASSM (auto with DBCA) </li></ul><ul><li>Configure automatic UNDO management (auto with DBCA) </li></ul><ul><li>Use SPFILE instead of multiple init.ora’s (auto with DBCA) </li></ul>
    30. 30. Validate RAC Configuration <ul><li>Instances running on all nodes </li></ul><ul><ul><li>SQL> select * from gv$instance </li></ul></ul><ul><li>RAC communicating over the private Interconnect </li></ul><ul><ul><li>SQL> oradebug setmypid </li></ul></ul><ul><ul><li>SQL> oradebug ipc </li></ul></ul><ul><ul><li>SQL> oradebug tracefile_name </li></ul></ul><ul><ul><li>/home/oracle/admin/RAC92_1/udump/rac92_1_ora_1343841.trc </li></ul></ul><ul><ul><li>Check trace file in the user_dump_dest: </li></ul></ul><ul><ul><ul><li>SSKGXPT 0x2ab25bc flags info for network 0 </li></ul></ul></ul><ul><ul><ul><li>socket no 10 IP UDP 49197 </li></ul></ul></ul><ul><ul><ul><li>sflags SSKGXPT_UP </li></ul></ul></ul><ul><ul><ul><li>info for network 1 </li></ul></ul></ul><ul><ul><ul><li>socket no 0 IP UDP 0 </li></ul></ul></ul><ul><ul><ul><li>sflags SSKGXPT_DOWN </li></ul></ul></ul><ul><li>RAC is using desired IPC protocol: Check Alert.log </li></ul><ul><ul><li>... </li></ul></ul><ul><ul><li>cluster interconnect IPC version:Oracle UDP/IP </li></ul></ul><ul><ul><li>IPC Vendor 1 proto 2 Version 1.0 </li></ul></ul><ul><ul><li>PMON started with pid=2 </li></ul></ul><ul><ul><li>... </li></ul></ul><ul><li>Use cluster_interconnects only if necessary </li></ul>
    31. 31. Configure srvconfig / srvctl <ul><li>SRVCTL uses information from srvconfig </li></ul><ul><ul><li>Reads $ORACLE_HOME/srvm/config/srvConfig.loc information </li></ul></ul><ul><ul><ul><li>File can be a RAW Device or OCFS file </li></ul></ul></ul><ul><li>srvconfig -init </li></ul><ul><li>gsd must be running </li></ul><ul><li>Add ORACLE_HOME </li></ul><ul><ul><li>$ srvctl add database -d db_name -o oracle_home [-m domain_name] [-s spfile] </li></ul></ul><ul><li>Add instances (for each instance enter the command) </li></ul><ul><ul><li>$ srvctl add instance -d db_name -i sid -n node </li></ul></ul>
    32. 32. Application Deployment <ul><li>Same guidelines as single instance </li></ul><ul><ul><li>SQL Tuning </li></ul></ul><ul><ul><li>Sequence Caching </li></ul></ul><ul><ul><li>Partition large objects </li></ul></ul><ul><ul><li>Use different block sizes </li></ul></ul><ul><ul><li>Tune instance recovery </li></ul></ul><ul><ul><li>Avoid DDL </li></ul></ul><ul><ul><li>Use LMT’s and ASSM as noted earlier </li></ul></ul>
    33. 33. Agenda <ul><li>Planning Best Practices </li></ul><ul><ul><li>Architecture </li></ul></ul><ul><ul><li>Expectation setting </li></ul></ul><ul><ul><li>Objectives and success criteria </li></ul></ul><ul><ul><li>Project plan </li></ul></ul><ul><li>Implementation Best Practices </li></ul><ul><ul><li>Infrastructure considerations </li></ul></ul><ul><ul><li>Installation/configuration </li></ul></ul><ul><ul><li>Database creation </li></ul></ul><ul><ul><li>Application considerations </li></ul></ul><ul><li>Operational Best Practices </li></ul><ul><ul><li>Backup & Recovery </li></ul></ul><ul><ul><li>Performance Monitoring and Tuning </li></ul></ul><ul><ul><li>Production Migration </li></ul></ul>
    34. 34. Operations <ul><li>Same DBA procedures as single instance, with some minor, mostly mechanical differences. </li></ul><ul><li>Managing the Oracle environment </li></ul><ul><ul><li>Starting/stopping cluster services (ocmstart.sh) </li></ul></ul><ul><ul><li>Starting/stopping gsd </li></ul></ul><ul><ul><li>Managing multiple redo log threads </li></ul></ul><ul><li>Startup and shutdown of the database </li></ul><ul><ul><li>Use srvctl </li></ul></ul><ul><li>Backup and recovery </li></ul><ul><li>Performance Monitoring and Tuning </li></ul><ul><li>Production migration </li></ul>
    35. 35. Operations: srvconfig / srvctl <ul><li>Use SRVCTL to administer your RAC database environment. </li></ul><ul><ul><li>OEM and the Oracle Intelligent Agent use the configuration information that SRVCTL generates to discover and monitor nodes in your cluster. </li></ul></ul><ul><li>Global Services Daemon (GSD) receives requests from SRVCTL to execute administrative job tasks, such as startup or shutdown. </li></ul><ul><ul><li>GSD must be started on all the nodes in your RAC environment so that the manageability features and tools operate properly. (GSDCTL) </li></ul></ul>
    36. 36. Operations: Backup & Recovery <ul><li>RMAN is the most efficient option for Backup & Recovery </li></ul><ul><ul><li>Managing the snapshot control file location. </li></ul></ul><ul><ul><li>Managing the control file autobackup feature. </li></ul></ul><ul><ul><li>Managing archived logs in RAC – choose proper archiving scheme. </li></ul></ul><ul><ul><li>Node Affinity Awareness </li></ul></ul><ul><li>RMAN and Oracle Net in RAC apply </li></ul><ul><ul><li>you cannot specify a net service name that uses Oracle Net features to distribute RMAN connections to more than one instance. </li></ul></ul><ul><li>Oracle Enterprise Manager </li></ul><ul><ul><li>GUI interface to Recovery Manager </li></ul></ul>
    37. 37. Performance Monitoring and Tuning <ul><li>Tune first for single instance 9i </li></ul><ul><li>Use Statspack: </li></ul><ul><ul><li>Separate 1 GB tablespace for Statspack </li></ul></ul><ul><ul><li>snapshots at 10-20 min intervals during stress testing, hourly during normal operations </li></ul></ul><ul><ul><li>Run on all instances, staggered </li></ul></ul><ul><li>Supplement with scripts/tracing </li></ul><ul><ul><li>Monitor V$SESSION_WAIT to see which blocks are involved in wait events </li></ul></ul><ul><ul><li>Trace events like 10046/8 can provide additional wait event details </li></ul></ul><ul><ul><li>Monitor Alert logs and trace files, as on single instance </li></ul></ul><ul><li>Oracle Performance Manager </li></ul><ul><li>RAC-specific views </li></ul><ul><li>Supplement with System-level monitoring </li></ul><ul><ul><li>CPU utilization never 100% </li></ul></ul><ul><ul><li>I/O service times never > acceptable thresholds </li></ul></ul><ul><ul><li>CPU run queues at optimal levels </li></ul></ul>
    38. 38. Performance Monitoring and Tuning <ul><li>An obvious application deficiency on a single node can’t be solved by adding nodes. </li></ul><ul><ul><li>Single points of contention. </li></ul></ul><ul><ul><li>Not scalable on SMP </li></ul></ul><ul><ul><li>I/O bound on a single instance DB </li></ul></ul><ul><li>Tune on a single instance DB first to ensure the application is scalable </li></ul><ul><ul><li>Identify/tune contention using v$segment_statistics to identify the objects involved </li></ul></ul><ul><ul><li>Concentrate on the top 5 Statspack timed events if the majority of time is spent waiting </li></ul></ul><ul><ul><li>Concentrate on bad SQL if CPU bound </li></ul></ul><ul><li>Maintain a balanced load on underlying systems (DB, OS, storage subsystem, etc. ) </li></ul><ul><ul><li>Excessive load on individual components can invoke aberrant behaviour. </li></ul></ul>
    39. 39. Performance Monitoring and Tuning <ul><li>Deciding if RAC is the performance bottleneck </li></ul><ul><ul><li>Amount of Cross Instance Traffic </li></ul></ul><ul><ul><ul><li>Type of requests </li></ul></ul></ul><ul><ul><ul><li>Type of blocks </li></ul></ul></ul><ul><ul><li>Latency </li></ul></ul><ul><ul><ul><li>Block receive time </li></ul></ul></ul><ul><ul><ul><li>buffer size factor </li></ul></ul></ul><ul><ul><ul><li>bandwidth factor </li></ul></ul></ul>
    40. 40. Production Migration <ul><li>Adhere to strong Systems Life Cycle Disciplines </li></ul><ul><ul><li>Comprehensive test plans (functional and stress) </li></ul></ul><ul><ul><li>Rehearsed production migration plan </li></ul></ul><ul><ul><li>Change Control </li></ul></ul><ul><ul><ul><li>Separate environments for Dev, Test, QA/UAT, Production </li></ul></ul></ul><ul><ul><ul><li>System AND application change control </li></ul></ul></ul><ul><ul><ul><li>Log changes to spfile </li></ul></ul></ul><ul><ul><li>Backup and recovery procedures </li></ul></ul><ul><ul><li>Security controls </li></ul></ul><ul><ul><li>Support Procedures </li></ul></ul>
    41. 41. Next Steps…. <ul><li>Recommended sessions </li></ul><ul><ul><li>List 1 or 2 sessions that complement this session </li></ul></ul><ul><li>Recommended demos and/or hands-on labs </li></ul><ul><ul><li>List one or two demos or labs that will let them see this product in action. </li></ul></ul><ul><li>See Your Business in Our Software </li></ul><ul><ul><li>Visit the DEMOgrounds for a customized architectural review, see a customized demo with Solutions Factory, or receive a personalized proposal. </li></ul></ul><ul><li>Relevant web sites to visit for more information </li></ul><ul><ul><li>List urls here. </li></ul></ul>
    42. 42. Reminder – please complete the OracleWorld online session survey Thank you.
    43. 44. Resources <ul><li>RedHat Linux </li></ul><ul><ul><li>http://www.redhat.com/oracle/ </li></ul></ul><ul><li>Linux Center - Technical White Papers & Documentation </li></ul><ul><ul><li>http://otn.oracle.com/tech/linux/tech_wp.html </li></ul></ul><ul><li>“Tips and Techniques: Install and Configure Oracle9i on Red Hat Linux Advanced Server” by Deepak Patel, Oracle Corporation </li></ul><ul><ul><li>http://otn.oracle.com/tech/linux/pdf/installtips_final.pdf </li></ul></ul><ul><li>“Tips and Techniques: Install and Configure Oracle9i on SLES8 / United Linux 1.0” </li></ul><ul><ul><li>http://www.suse.com/en/business/certifications/certified_software/oracle/db/9iR2_sles8.html </li></ul></ul>
    44. 45. United Linux 1.0 Resources <ul><li>United Linux </li></ul><ul><ul><li>http://www.unitedlinux.com </li></ul></ul><ul><li>SuSE </li></ul><ul><ul><li>http://www.suse.com/us/business/products/server/sles/index.html </li></ul></ul><ul><li>Conectiva </li></ul><ul><ul><li>http://www.connectiva.com </li></ul></ul><ul><li>SCO Group (Formerly Caldera Systems) - http://www.ebizenterprises.com/page1.asp?p=463 </li></ul><ul><li>TurboLinux </li></ul><ul><ul><li>http://www.turbolinux.com/ </li></ul></ul>
    45. 46. Recommended one-off patches <ul><li>Bug 2820871 - ORA-29740 NODE EVICTION DESIGN ALGORITHM AND ABRUPT TIME CHANGE. ARU 4161735 completed for LINUX Intel </li></ul><ul><li>Bug 2420930 - GET ORA-600 [KSXPMPRP1] DURING STARTUP IN RAC MODE WITH LARGER BUFFERS. This fix was mysteriously included in some patchsets but not others; Bug 2875050 was opened for this issue. ARU 4202164 completed for LINUX Intel </li></ul><ul><li>Bug 2922471 – Fractured block found during crash/instance recovery. Not an Oracle bug. Do not use ‘intr’ for the mount option. </li></ul>
    46. 47. Recommended one-off patches <ul><li>Bug 2844009 - MISSING LIBCXA.SO.3 LIBRARY ISSUE IN PSR 9203. ARU 4046387 completed for LINUX Intel </li></ul><ul><li>Bug 2779294 – node_list is not populated into oraInventory/ContentsXML/inventory.xml, so an opatch install will only apply to the local node. The workaround is editing inventory.xml as documented in bug 2742686. </li></ul><ul><li>Bug 2646914, 2675090, 2706220 and 2695783 - ORA-600 [KCCSBCK_FIRST], [2] on Linux and W2K platforms after installing. Very important patch. ARU 4110670 completed for LINUX Intel </li></ul>
    47. 48. Hangcheck-timer and Oracle Cluster Manager <ul><li>Download Patch 2594820 from Metalink </li></ul><ul><ul><li>#rpm -ivh <hangcheck-timer RPM name> </li></ul></ul><ul><li>Detach watchdogd from the Cluster Manager (Bug 2495915) </li></ul><ul><li>With the removal of watchdogd, in ORACLE_HOME/oracm/admin/cmcfg.ora remove: </li></ul><ul><ul><li>WatchdogTimerMargin </li></ul></ul><ul><ul><li>WatchdogSafetyMargin </li></ul></ul><ul><li>Add KernelModuleName=hangcheck-timer </li></ul><ul><li>CMDiskFile changes from optional to mandatory </li></ul><ul><ul><li>CM quorum partition governs cluster participation. </li></ul></ul>
    48. 49. Hangcheck-timer and Oracle Cluster Manager <ul><li>remove or comment out from the /etc/rc.local file: </li></ul><ul><li>/sbin/insmod softdog nowayout=0 soft_noboot=1 soft_margin=60 </li></ul><ul><li>ADD to rc.local, execute as root to load </li></ul><ul><li>/sbin/insmod hangcheck-timer.o hangcheck_tick=30 hangcheck_margin=180 </li></ul>
    49. 50. Hangcheck-timer and Oracle Cluster Manager <ul><li>Inclusion of the hangcheck-timer kernel module </li></ul><ul><li>Parameter Service Value </li></ul><ul><li>----------------- ----------------- --------------- </li></ul><ul><li>hangcheck_tick hangcheck-timer 30 seconds </li></ul><ul><li>hangcheck_margin hangcheck-timer 180 seconds </li></ul><ul><li>KernelModuleName oracm hangcheck-timer </li></ul><ul><li>MissCount oracm > hangcheck_tick + hangcheck_margin (> 210 seconds) </li></ul>
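The MissCount rule in the table above is plain arithmetic: MissCount must exceed hangcheck_tick + hangcheck_margin. A minimal sanity check using the values from this deck (30 + 180 = 210, against the cmcfg.ora example's MissCount of 215):

```shell
hangcheck_tick=30      # seconds, from the table above
hangcheck_margin=180   # seconds
misscount=215          # oracm MissCount from the cmcfg.ora example

# oracm must wait longer than the hangcheck reset window (210s here),
# otherwise a node could be evicted before hangcheck-timer has a chance to act.
if [ "$misscount" -gt $((hangcheck_tick + hangcheck_margin)) ]; then
  misscount_ok=yes
else
  misscount_ok=no
fi
echo "$misscount_ok"
```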
    50. 51. Hangcheck-timer and Oracle Cluster Manager <ul><li>cmcfg.ora example </li></ul><ul><ul><li>HeartBeat=15000 </li></ul></ul><ul><ul><li>ClusterName=Oracle Cluster Manager, version 9i </li></ul></ul><ul><ul><li>KernelModuleName=hangcheck-timer </li></ul></ul><ul><ul><li>PollInterval=1000 </li></ul></ul><ul><ul><li>MissCount=215 </li></ul></ul><ul><ul><li>PrivateNodeNames=int-node1 int-node2 </li></ul></ul><ul><ul><li>PublicNodeNames=node1 node2 </li></ul></ul><ul><ul><li>ServicePort=9998 </li></ul></ul><ul><ul><li>CmDiskFile=/ocfsdisk1/quorum/quorumfile </li></ul></ul><ul><ul><li>HostName=int-node1 </li></ul></ul>
    51. 52. Hangcheck-timer and Oracle Cluster Manager <ul><li>Parameters for ocmargs.ora </li></ul><ul><ul><li>oracm </li></ul></ul><ul><ul><li>norestart 1800 </li></ul></ul>
    52. 53. Linux Monitoring and Configuration Tools <ul><li>Overall tools sar, vmstat </li></ul><ul><li>CPU /proc/cpuinfo, mpstat, top </li></ul><ul><li>Memory /proc/meminfo, /proc/slabinfo, free </li></ul><ul><li>Disk I/O iostat </li></ul><ul><li>Network /proc/net/dev, netstat, mii-tool </li></ul><ul><li>Kernel Version and Rel. cat /proc/version </li></ul><ul><li>Types of I/O Cards lspci -vv </li></ul><ul><li>Kernel Modules Loaded lsmod, cat /proc/modules </li></ul><ul><li>List all PCI devices (HW) lspci -v </li></ul><ul><li>Startup changes /etc/sysctl.conf, /etc/rc.local </li></ul><ul><li>Kernel messages /var/log/messages, /var/log/dmesg </li></ul><ul><li>OS error codes /usr/src/linux/include/asm/errno.h </li></ul><ul><li>OS calls /usr/sbin/strace -p </li></ul>
    53. 54. Post Installation <ul><li>Increasing Address Space </li></ul><ul><li>By default, Oracle has 1.7 GB of address space for its SGA. </li></ul><ul><li>Shut down all instances of Oracle </li></ul><ul><li>cd $ORACLE_HOME/lib </li></ul><ul><li>cp -a libserver9.a libserver9.a.org </li></ul><ul><ul><li>to make a backup copy </li></ul></ul><ul><li>cd $ORACLE_HOME/rdbms/lib </li></ul><ul><li>genksms -s 0x15000000 >ksms.s </li></ul><ul><ul><li>lower SGA base to 0x15000000 </li></ul></ul><ul><li>make -f ins_rdbms.mk ksms.o </li></ul><ul><ul><li>compile in new SGA base address </li></ul></ul><ul><li>make -f ins_rdbms.mk ioracle (relink) </li></ul>
    54. 55. Post Installation <ul><li>Increasing Address Space Cont. </li></ul><ul><li>sysctl -w kernel.shmmax=3000000000 </li></ul><ul><li>Lower process base </li></ul><ul><ul><li>Find out the pid of the process (shell) from where oracle will be started using ps (Oracle - echo $$) </li></ul></ul><ul><ul><li>Change /proc/$pid/mapped_base to 0x10000000 and restart Oracle </li></ul></ul><ul><li>Metalink Note: 200266.1 </li></ul>
    55. 56. Post Installation Variable SGA Reserved for kernel DB Buffers (SGA) Default Code, etc. 0xFFFFFFFF 0xC0000000 0x50000000 0x40000000 0x00000000 Variable SGA Reserved for kernel DB Buffers (SGA) After Relink Code, etc. 0xFFFFFFFF 0xC0000000 0x15000000 0x10000000 0x00000000 mapped_base (/proc/<pid>/mapped_base) sga_base (relink Oracle) Lowering of mapped base
    56. 57. Post Installation <ul><li>Larger Buffer Cache: increase the buffer cache with the larger SGA </li></ul><ul><li>Create an in-memory file system on /dev/shm </li></ul><ul><ul><ul><li>mount -t shm shmfs -o size=8g /dev/shm </li></ul></ul></ul><ul><li>To enable the extended buffer cache feature, set the init.ora parameter </li></ul><ul><ul><ul><li>USE_INDIRECT_DATA_BUFFERS = true </li></ul></ul></ul><ul><li>Don’t use dynamic cache parameters </li></ul><ul><ul><ul><li>DB_CACHE_SIZE </li></ul></ul></ul><ul><ul><ul><li>DB_#K_CACHE_SIZE </li></ul></ul></ul><ul><li>Limitations apply to the extended buffer cache feature on Linux: </li></ul><ul><li>You cannot change the size of the buffer cache while the instance is running. </li></ul><ul><li>You cannot create or use tablespaces with non-standard block sizes. </li></ul>
    57. 58. Post Installation <ul><li>Adjust send / receive buffer size to 256K </li></ul><ul><li>Tuning the default and maximum window sizes: </li></ul><ul><li>/proc/sys/net/core/rmem_default - default receive window </li></ul><ul><li>/proc/sys/net/core/rmem_max - maximum receive window </li></ul><ul><li>/proc/sys/net/core/wmem_default - default send window </li></ul><ul><li>/proc/sys/net/core/wmem_max - maximum send window </li></ul><ul><li>  </li></ul><ul><li>sysctl -w net.core.rmem_max=262144 </li></ul><ul><li>sysctl -w net.core.wmem_max=262144 </li></ul><ul><li>sysctl -w net.core.rmem_default=262144 </li></ul><ul><li>sysctl -w net.core.wmem_default=262144 </li></ul>
    58. 59. Post Installation <ul><li>To enable asynchronous I/O must re-link Oracle to use skgaioi.o </li></ul><ul><ul><li>cd to $ORACLE_HOME/rdbms/lib </li></ul></ul><ul><ul><li>make -f ins_rdbms.mk async_on </li></ul></ul><ul><ul><li>make -f ins_rdbms.mk ioracle </li></ul></ul><ul><ul><li>set 'disk_asynch_io=true' (default value is true) </li></ul></ul><ul><ul><li>set 'filesystemio_options=asynch‘ (RAW Only) </li></ul></ul>