My First 100 days with an Exadata (WP)


COLLABORATE 14 – IOUG Forum, Engineered Systems
"My First 100 days with an Exadata" White Paper

My First 100 days with an Exadata
Gustavo René Antúnez, The Pythian Group

ABSTRACT

TARGET AUDIENCE
This document will benefit anyone who is starting to use and administer an Oracle Exadata Database Machine. It covers the basic concepts and management tips needed to get started as a Database Machine Administrator.

EXECUTIVE SUMMARY

WHAT IS THE ORACLE EXADATA DATABASE MACHINE?
Exadata is an optimized database machine designed by Oracle, created as the best platform for running the Oracle Database. It combines Oracle Exadata Storage Server Software, Oracle Database software, and the latest industry-standard hardware components, including completely redundant hardware and a high-bandwidth, low-latency InfiniBand network that connects all the components inside an Exadata Database Machine. One of the greatest qualities of the Exadata Database Machine is a unique technology that offloads data-intensive SQL operations to the Oracle Exadata Storage Servers. One thing to be aware of is that acquiring an Exadata Machine does not include any Oracle software licenses.

The following types of Oracle Exadata Database Machine are available:
• Exadata Database Machine Full Rack
• Exadata Database Machine Half Rack
• Exadata Database Machine Quarter Rack
• Exadata Database Machine Eighth Rack (X3-2, X4-2)

Another key manageability component is the Integrated Lights Out Manager (ILOM), which provides out-of-band management of each node's hardware. The Exadata Database Machine can run all types of workloads, including Online Transaction Processing (OLTP), Data Warehousing (DW) and, most important, a consolidation of mixed workloads.

The biggest headline at the 2009 Oracle OpenWorld was when Larry Ellison announced that Oracle was entering the hardware business with a pre-built database machine, engineered by Oracle. Since then, businesses around the world have started to use these engineered systems. This beginner-level paper will take you through my first 100 days of starting to administer an Exadata machine, and all the roadblocks and successes I had along this new path. Readers will be able to:
• Learn about the Exadata architecture
• Gain an insight into how to administer an Exadata machine
• Learn what makes the patching of an Exadata so different from any other RAC/single-instance database environment
EXADATA ARCHITECTURE

The basic components of the Exadata Database Machine are the following:

[Figure: Exadata rack components – Database Nodes, Storage Nodes, InfiniBand Leaf Switches, ILOM/KVM/Cisco Network Switch]

Compute Nodes or Database Nodes
These are the nodes where the Oracle Grid Infrastructure (Oracle Clusterware and Oracle Automatic Storage Management), Oracle Real Application Clusters (Oracle RAC) and the Oracle Databases are run and hosted. The compute nodes use the iDB protocol, which is built on the RDS (Reliable Datagram Sockets) protocol and minimizes the number of data copies required to service I/O operations; this is a unique Oracle data transfer protocol that serves as the communications protocol among Oracle ASM, database instances, and storage cells. The operating system on these nodes can be either Linux or Solaris 11. The tools used to manage the compute nodes are nothing new to the DBA — Linux OS commands, SQL*Plus, ASMCMD and CRSCTL — so I won't go into detail on how to manage them.
Storage Cell Server
The storage cells run the Exadata Storage Server Software and are the part of the Exadata architecture that contains the physical storage presented to the ASM instances to support the databases on the Exadata database servers. Each storage cell has 12 rotating magnetic disks, which can be one of two options, high performance (HP) or high capacity (HC), and 4 PCIe flash cards that are divided into 4 flash disks each, for a total of 16 flash disks in each cell server. The flash disks can be presented to the database nodes as storage (the database can use them like any other grid disk) or used as a secondary cache for the database (smart cache). A full rack configuration has 14 cell nodes.

The Exadata physical disk architecture has the following layers of abstraction:
• Physical disk. – A device within the storage cell that constitutes a single disk drive spindle.
• LUN (Logical Unit Number). – In a non-Exadata world a LUN might correspond to several disks, whereas in the Exadata architecture 1 LUN corresponds to 1 physical disk. On the first 2 disks of each cell, the LUN has around 30 GB of space carved out for the operating system; these 2 LUNs hold a mirrored OS image, so that if the first disk fails, the cell server can still operate.
• Cell disk. – A cell disk is created on a LUN. Once it is created, the Exadata cell can manage it. Once a cell disk is created, the 30 GB that were carved out for the OS are called the system area and are reserved for the Linux operating system, the Exadata software, ADR and configuration metadata. The cell disk is the virtual representation of the physical disk, minus the system area LUN (disk 1 and disk 2).
• Grid disks. – A cell disk can be virtualized into one or more grid disks. These are created in the cell disk and are the candidate disks from which your ASM disk groups will be built. The first grid disk that is created is positioned on the outer sectors of the disk, so it will perform faster than grid disks created after it. This allows multiple Oracle ASM clusters and multiple databases to share the same physical disk. Grid disks are not exposed to the operating system, so only database instances, ASM and related utilities that use iDB can see them.

[Figure: Disk abstraction layers – Physical Disk → LUN (OS storage area on disks 1 and 2) → Cell Disk (cell system area on disks 1 and 2) → Grid Disks, from faster (outer sectors) to slower (inner sectors)]

To manage the cell server, besides Linux OS commands, there are two utilities to manage disks on these nodes:
• cellcli. – The Cell Command Line Interface, used to manage a storage cell and to configure storage (both physical and flash disks) on it. The cellcli interface is only available while logged on to the storage cells themselves. Where SQL*Plus uses the SELECT command for a database, cellcli uses the LIST command.
• dcli. – A command-line interface that facilitates centralized management across all your database nodes or cell nodes. It allows you to execute the same command across all nodes in a consistent manner; it is also very helpful for distributing a file to the same location on all nodes.

With cellcli you can view the information defined in your environment:

1) General information about your cell

CellCLI> list cell detail
    name: exa1c114_net0
    bbuTempThreshold: 45
    bbuChargeThreshold: 800
    bmcType: IPMI
    cellVersion: OSS_11.2.0.3.0_LINUX.X64_110929
    cpuCount: 16
    diagHistoryDays: 7
    fanCount: 12/12
    fanStatus: normal
    id: 1003XFG027
    interconnectCount: 3
    interconnect1: bondib0
    iormBoost: 0.0
    ipaddress1: 192.168.12.114/24
    kernelVersion: 2.6.18-238.12.2.0.2.el5
    locatorLEDStatus: off
    makeModel: SUN MICROSYSTEMS SUN FIRE X4275 SERVER SAS
    metricHistoryDays: 7
    notificationMethod:
    notificationPolicy: critical,warning,clear
    offloadEfficiency: 14,158.8
    powerCount: 2/2
    powerStatus: normal
    releaseVersion: 11.2.2.4.0
    releaseTrackingBug: 12708838
    smtpFrom: exadata1
    smtpFromAddr: exatest@pythian.com
    smtpPort: 25
    smtpServer: mail.pythian.com
    smtpToAddr: test@pythian.com
    smtpUseSSL: FALSE
    snmpSubscriber: host=exatest.pythian.com,port=3872,community=public
    status: online
    temperatureReading: 30.0
    temperatureStatus: normal
    upTime: 97 days, 21:53
    cellsrvStatus: running
    msStatus: running
    rsStatus: running

2) General information about your LUN (reduced output)

CellCLI> list lun detail
    name: 0_0
    cellDisk: CD_00_exa1c114_net0
    deviceName: /dev/sda
    diskType: HardDisk
    id: 0_0
    isSystemLun: TRUE
    lunAutoCreate: FALSE
    lunSize: 557.861328125G
    lunUID: 0_0
    physicalDrives: 20:0
    raidLevel: 0
    lunWriteCacheMode: "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"
    status: normal

    name: 0_1
    cellDisk: CD_01_exa1c114_net0
    deviceName: /dev/sdb
    diskType: HardDisk
    id: 0_1
    isSystemLun: TRUE
    lunAutoCreate: FALSE
    lunSize: 557.861328125G
    lunUID: 0_1
    physicalDrives: 20:1
    raidLevel: 0
    lunWriteCacheMode: "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"
    status: normal

    name: 0_2
    cellDisk: CD_02_exa1c114_net0
    deviceName: /dev/sdc
    diskType: HardDisk
    id: 0_2
    isSystemLun: FALSE
    lunAutoCreate: FALSE
    lunSize: 557.861328125G
    lunUID: 0_2
    physicalDrives: 20:2
    raidLevel: 0
    lunWriteCacheMode: "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"
    status: normal

3) General information about your physical disk (reduced output)

CellCLI> list physicaldisk detail
    name: 20:0
    deviceId: 8
    diskType: HardDisk
    enclosureDeviceId: 20
    errMediaCount: 0
    errOtherCount: 0
    foreignState: false
    luns: 0_0
    makeModel: "SEAGATE ST360057SSUN600G"
    physicalFirmware: 0805
    physicalInsertTime: 2010-02-05T10:31:54+00:00
    physicalInterface: sas
    physicalSerial: E03WV1
    physicalSize: 558.9109999993816G
    slotNumber: 0
    status: normal

    name: 20:1
    deviceId: 9
    diskType: HardDisk
    enclosureDeviceId: 20
    errMediaCount: 0
    errOtherCount: 0
    foreignState: false
    luns: 0_1
    makeModel: "SEAGATE ST360057SSUN600G"
    physicalFirmware: 0805
    physicalInsertTime: 2010-02-05T10:31:56+00:00
    physicalInterface: sas
    physicalSerial: E03XQM
    physicalSize: 558.9109999993816G
    slotNumber: 1
    status: normal

4) Information about your cell disk

CellCLI> list celldisk detail
    name: CD_00_exa1c114_net0
    comment:
    creationTime: 2011-06-14T02:25:45+00:00
    deviceName: /dev/sda
    devicePartition: /dev/sda3
    diskType: HardDisk
    errorCount: 0
    freeSpace: 0
    id: 00000130-8bf8-0381-0000-000000000000
    interleaving: none
    lun: 0_0
    raidLevel: 0
    size: 528.734375G
    status: normal

    name: CD_01_exa1c114_net0
    comment:
    creationTime: 2011-06-14T02:25:49+00:00
    deviceName: /dev/sdb
    devicePartition: /dev/sdb3
    diskType: HardDisk
    errorCount: 0
    freeSpace: 0
    id: 00000130-8bf8-1426-0000-000000000000
    interleaving: none
    lun: 0_1
    raidLevel: 0
    size: 528.734375G
    status: normal

    name: CD_02_exa1c114_net0
    comment:
    creationTime: 2011-06-14T02:25:49+00:00
    deviceName: /dev/sdc
    devicePartition: /dev/sdc
    diskType: HardDisk
    errorCount: 0
    freeSpace: 0
    id: 00000130-8bf8-169a-0000-000000000000
    interleaving: none
    lun: 0_2
    raidLevel: 0
    size: 557.859375G
    status: normal

5) Information about your grid disks

CellCLI> list griddisk detail
    name: DATA_CD_00_exa1c114_net0
    asmDiskGroupName: DATA
    asmDiskName: DATA_CD_00_exa1c114_NET0
    availableTo:
    cellDisk: CD_00_exa1c114_net0
    comment:
    creationTime: 2011-06-14T02:26:18+00:00
    diskType: HardDisk
    errorCount: 0
    id: 00000130-8bf8-867a-0000-000000000000
    offset: 32M
    size: 430G
    status: active

    name: DATA_CD_01_exa1c114_net0
    asmDiskGroupName: DATA
    asmDiskName: DATA_CD_01_exa1c114_NET0
    availableTo:
    cellDisk: CD_01_exa1c114_net0
    comment:
    creationTime: 2011-06-14T02:26:18+00:00
    diskType: HardDisk
    errorCount: 0
    id: 00000130-8bf8-86b9-0000-000000000000
    offset: 32M
    size: 430G
    status: active

    name: DATA_CD_02_exa1c114_net0
    asmDiskGroupName: DATA
    asmDiskName: DATA_CD_02_exa1c114_NET0
    availableTo:
    cellDisk: CD_02_exa1c114_net0
    comment:
    creationTime: 2011-06-14T02:26:18+00:00
    diskType: HardDisk
    errorCount: 0
    id: 00000130-8bf8-86d8-0000-000000000000
    offset: 32M
    size: 430G
    status: active
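The detail listings above are plain "attribute: value" text. When checking many disks or many cells from a script, a small parser helps flag anything whose status is not normal. The following is an illustrative sketch only (not an Oracle tool), and the sample text is an abbreviated stand-in for real cellcli output:

```python
# Scan "cellcli -e list physicaldisk detail" style output and report
# every disk whose "status" attribute is anything other than "normal".

def failing_disks(cellcli_output: str) -> list[tuple[str, str]]:
    """Return (disk name, status) pairs for every non-normal disk."""
    current, problems = None, []
    for line in cellcli_output.splitlines():
        line = line.strip()
        if line.startswith("name:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("status:"):
            status = line.split(":", 1)[1].strip()
            if status != "normal" and current:
                problems.append((current, status))
    return problems

# Abbreviated sample in the shape of the listings above:
sample = """
name: 20:4
status: normal
name: 20:5
status: warning - poor performance
name: 20:6
status: normal
"""
print(failing_disks(sample))  # [('20:5', 'warning - poor performance')]
```

Run across all cells via dcli, this kind of check turns a page of output into a one-line health report.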
One of the most common tasks on an Exadata, and one that is normally not performed in a non-Exadata environment, is a disk replacement. The following two MOS documents should be reviewed before doing a disk replacement: 1390836.1 (Predictive Failure) and 1386147.1 (Hard Failure). Below is an action plan for a predictive-failure disk replacement, where the disk was identified as going to fail using MOS document 1452325.1.

First we will see that there is an error in a disk within the cell:

cellcli -e list physicaldisk detail | grep "status"
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: warning - poor performance
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal
    status: normal

The following will list the details of disk 20:5, which has poor performance (reduced output):

cellcli -e list physicaldisk detail
    name: 20:5
    deviceId: 14
    diskType: HardDisk
    enclosureDeviceId: 20
    errMediaCount: 351
    errOtherCount: 0
    foreignState: false
    luns: 0_5
    makeModel: "SEAGATE ST360057SSUN600G"
    physicalFirmware: 0B25
    physicalInsertTime: 2012-01-19T02:13:34-05:00
    physicalInterface: sas
    physicalSerial: E1BAN6
    physicalSize: 558.9109999993816G
    slotNumber: 5
    status: warning - poor performance

This is the action plan to change the 20:5 disk:

1) Drop disks from ASM and turn on the disk LED
   a. Log into test01db01
   b. Log into the grid user
      sudo su - grid
   c. Check login id
      id
      Expected output -> uid=1000(grid) gid=1001(oinstall) groups=1001(oinstall),1002(dba),1003(racoper),1004(asmdba),1006(asmadmin)
   d. Check server name
      uname -n
      Expected output -> test01db01.pythian.com
   e. Set environment to +ASM1
      . oraenv
      +ASM1
   f. Check ORACLE environment variables
      env | grep ORA
      Expected output ->
      ORACLE_SID=+ASM1
      ORACLE_BASE=/u01/app/grid
      ORACLE_HOME=/u01/app/11.2.0.3/grid
   g. Log into the ASM instance
      sqlplus / as sysasm
   h. Drop the disks from the disk groups
      alter diskgroup RECO_TEST01 drop disk 'RECO_TEST01_CD_05_TEST01CEL04';
      alter diskgroup DATA_TEST01 drop disk 'DATA_TEST01_CD_05_TEST01CEL04';
      alter diskgroup DBFS_DG drop disk 'DBFS_DG_CD_05_TEST01CEL04';
   i. Log out of the ASM instance
      exit
   j. Log out of the grid user
      exit
   k. Log into the root user
      sudo su -
   l. Log into test01cel04
      ssh test01cel04
   m. Check login id
      id
      Expected output -> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel) context=root:system_r:unconfined_t:s0-s0:c0.c1023
   n. Check server name
      uname -n
      Expected output -> test01cel04.pythian.com
   o. Log into cellcli
      cellcli
   p. Check the ASM status of the grid disks
      list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
      Expected output ->
      DATA_TEST01_CD_05_test01cel04 OFFLINE Yes
      DBFS_DG_CD_05_test01cel04 OFFLINE Yes   <-- All 3 should have OFFLINE status
      RECO_TEST01_CD_05_test01cel04 OFFLINE Yes
      Note. – asmmodestatus indicates whether an ASM disk group is currently using this grid disk; a value of ONLINE indicates the grid disk is being used. asmdeactivationoutcome: recall that grid disks can be deactivated, which effectively takes them offline. Since ASM mirroring ensures that the data is located on another disk, taking this disk offline does not lose data. However, if the mirror is offline or not present, then taking this grid disk offline will result in loss of data. This attribute shows whether the grid disk can be deactivated without loss of data; a value of "Yes" indicates you can deactivate this grid disk without data loss.
   q. Turn on the disk LED
      alter physicaldisk 20:5 serviceled on
   r. Log out of cellcli
      exit
   s. Log out of test01cel04
      exit
   t. Log out of root
      exit
   u. Verify you are logged into test01db01
      hostname
      e.g. test01db01.pythian.com
   v. Log into the grid user
      sudo su - grid
   w. Check login id
      id
      Expected output -> uid=1000(grid) gid=1001(oinstall) groups=1001(oinstall),1002(dba),1003(racoper),1004(asmdba),1006(asmadmin)
   x. Check server name
      uname -n
      Expected output -> test01db01.pythian.com
   y. Set environment to +ASM1
      . oraenv
      +ASM1
   z. Check ORACLE environment variables
      env | grep ORA
      Expected output ->
      ORACLE_SID=+ASM1
      ORACLE_BASE=/u01/app/grid
      ORACLE_HOME=/u01/app/11.2.0.3/grid
   aa. Log into the ASM instance
      sqlplus / as sysasm
   bb. Check that the disks are no longer present
      select group_number,path,failgroup,header_status,mount_status,mode_status,name from V$ASM_DISK where path like '%CD_05_test01cel04';
   cc. Monitor the rebalance operation; keep in mind that this can take a while
      select * from gv$asm_operation;
   dd. Repeat the previous step until no rows are returned
   ee. If you get the following error in the ASM alert log, it is part of the bug documented in MOS Note 1599448.1:
      WARNING: Exadata Auto Management: OS PID: 20732 Operation ID: 26227:
      ONLINE disk DBFS_DG_CD_05_TEST01CEL04 in diskgroup DBFS_DG Fail
      SQL :
      Cause :
      Action : Check alert log to see why this operation failed. Also check process trace file for matching Operation ID.
   ff. If you query V$ASM_DISK, you will see the mode_status ONLINE, but the header_status FORMER
      select group_number,path,failgroup,header_status,mount_status,mode_status,name from V$ASM_DISK where path like '%CD_05_test01cel04';

      GROUP_NUMBER PATH                                         FAILGROUP   HEADER_STATU MOUNT_S MODE_ST NAME
      ------------ -------------------------------------------- ----------- ------------ ------- ------- ----
                 0 o/192.168.10.8/DBFS_DG_CD_05_test01cel04     TEST01CEL04 FORMER       CLOSED  ONLINE
                 0 o/192.168.10.8/RECO_TEST01_CD_05_test01cel04 TEST01CEL04 FORMER       CLOSED  ONLINE
                 0 o/192.168.10.8/DATA_TEST01_CD_05_test01cel04 TEST01CEL04 FORMER       CLOSED  ONLINE

2) Engineer to replace the disk
   a. Inform the engineer that the service light is on for the faulty drive
   b. Wait for the engineer to inform you that the disk has been replaced

3) Add the disks back into ASM
   a. Log into test01db01
   b. Log into the root user
      sudo su -
   c. Log into test01cel04
      ssh test01cel04
   d. Log into cellcli
      cellcli
   e. Turn off the disk LED
      alter physicaldisk 20:5 serviceled off
   f. Log out of cellcli
      exit
   g. Log out of test01cel04
      exit
   h. Log out of root
      exit
   i. Verify you are logged into test01db01
      hostname
      e.g. test01db01.pythian.com
   j. Log into the grid user
      sudo su - grid
   k. Check login id
      id
      Expected output -> uid=1000(grid) gid=1001(oinstall) groups=1001(oinstall),1002(dba),1003(racoper),1004(asmdba),1006(asmadmin)
   l. Check server name
      uname -n
      Expected output -> test01db01.pythian.com
   m. Set environment to +ASM1
      . oraenv
      +ASM1
   n. Check ORACLE environment variables
      env | grep ORA
      Expected output ->
      ORACLE_SID=+ASM1
      ORACLE_BASE=/u01/app/grid
      ORACLE_HOME=/u01/app/11.2.0.3/grid
   o. Log into the ASM instance
      sqlplus / as sysasm
   p. Check that the disks are present
      select group_number,path,failgroup,header_status,mount_status,mode_status,name from V$ASM_DISK where path like '%CD_05_test01cel04';
   q. Add the disks back into the disk groups
      alter diskgroup RECO_TEST01 add failgroup TEST01CEL04 disk 'o/192.168.10.8/RECO_TEST01_CD_05_test01cel04' name RECO_TEST01_CD_05_TEST01CEL04 rebalance power 10;
      alter diskgroup DATA_TEST01 add failgroup TEST01CEL04 disk 'o/192.168.10.8/DATA_TEST01_CD_05_test01cel04' name DATA_TEST01_CD_05_TEST01CEL04 rebalance power 10;
      alter diskgroup DBFS_DG add failgroup TEST01CEL04 disk 'o/192.168.10.8/DBFS_DG_CD_05_test01cel04' name DBFS_DG_CD_05_TEST01CEL04 rebalance power 10;
   r. Monitor the rebalance operation
      select * from gv$asm_operation;
   s. Repeat until no rows are returned

The Oracle Database also provides several dynamic performance views that report information about the storage cells from the database to which they are connected, instead of requiring you to log on to the storage cells:

• V$CELL – IP addresses assigned to the cells. This also includes the hash value for the cell, which appears in the Exadata storage cell session waits in v$session_wait in the P1 column.
• V$CELL_CONFIG – Configuration information about all the levels in a storage cell.
The information is at the following levels:
   CELL – general cell information
   PHYSICALDISKS – the physical disks on the cell
   LUNS – the LUNs presented to the cell
   CELLDISKS – the cell disks created on the cell
   GRIDDISKS – the grid disks created on the cell
   IORM – I/O Resource Manager information for the cell

• V$CELL_REQUEST_TOTALS – Snapshots of the requests made to each cell over the last 15 minutes. The information is categorized as:
   CacheGet Jobs
   CachePut Jobs
   Predicate Disk Read Jobs
   Process Ioctl Jobs
   Smart IO
   Total Request Size

• V$CELL_STATE – Current statistics for each storage cell. The types of statistics that are reported are:
   CAPABILITY
   CELL
   LOCK
   NPHYSDISKS
   PHASESTAT
   PREDIO
   RCVPORT
   SENDPORT
   THREAD

• V$CELL_THREAD_HISTORY – Detail of events over the previous 15 minutes for the storage cell. The event types are categorized as:
   AntMaster
   CacheGet
   CachePut
   DiskOwnerFile
   GC
   IORM self tuning thread
   NetworkRead
   Remote Listener
   System stats collection thread
   UnidentifiedJob

Integrated Lights Out Manager (ILOM)

The cell and compute nodes come with a dedicated management channel for device maintenance, called the Integrated Lights Out Manager, that works independently from these nodes. It allows you to manage a node whether it is powered on or off. It runs on its own embedded operating system, is initialized when the Exadata server is turned on, and has its own dedicated Ethernet port. It lets you learn about hardware errors on your node as well as control the node's power. ILOM supports various interfaces for accessing it:
• Web interface
• CLI (Command Line Interface)
• Remote console
• Intelligent Platform Management Interface (IPMI)

When using the ILOM to view your hardware, we need to understand that there are 2 types of hardware parts: Customer-Replaceable Units (CRUs), which any qualified service provider can replace, and Field-Replaceable Units (FRUs), which only qualified service personnel can replace. From time to time, when doing an FRU maintenance with Oracle, you might be asked to help the engineer by clearing an error or providing information. This is done through the Fault Management console in the ILOM; an example is shown below:

-> show faulty
Target              | Property              | Value
--------------------+-----------------------+---------------------------------
/SP/faultmgmt/0     | fru                   | /SYS/SP
/SP/faultmgmt/0/    | class                 | fault.security.enclosure-open
 faults/0           |                       |
/SP/faultmgmt/0/    | sunw-msg-id           | SPX86-8001-VY
 faults/0           |                       |
/SP/faultmgmt/0/    | component             | /SYS/SP
 faults/0           |                       |
/SP/faultmgmt/0/    | uuid                  | a6409101-e792-efb0-e4a2-b4674191
 faults/0           |                       |  aee9
/SP/faultmgmt/0/    | timestamp             | 2014-02-04/21:13:53
 faults/0           |                       |
/SP/faultmgmt/0/    | detector              | /SYS/INTSW
 faults/0           |                       |
/SP/faultmgmt/0/    | product_serial_number | < SERIAL NUMBER >
 faults/0           |                       |
/SP/faultmgmt/0/    | chassis_serial_number | < SERIAL NUMBER >
 faults/0           |                       |

-> cd /SP/faultmgmt
/SP/faultmgmt

-> start shell
Are you sure you want to start /SP/faultmgmt/shell (y/n)? y

faultmgmtsp> fmadm faulty
------------------- ------------------------------------ -------------- --------
Time                UUID                                 msgid          Severity
------------------- ------------------------------------ -------------- --------
2014-02-04/21:13:53 a6409101-e792-efb0-e4a2-b4674191aee9 SPX86-8001-VY  Major

Fault class : fault.security.enclosure-open
FRU         : /SYS/SP (Part Number: unknown) (Serial Number: unknown)
Description : A chassis intrusion failure has occurred.
Response    : The chassis-wide service required LED will be illuminated.
Impact      : Server is immediately powered off and the service processor will operate in a degraded mode.
Action      : The administrator should review the ILOM event log for additional information pertaining to this diagnosis. Please refer to the Details section of the Knowledge Article for additional information.

faultmgmtsp> fmadm repair a6409101-e792-efb0-e4a2-b4674191aee9
faultmgmtsp> fmadm faulty
No faults found

As mentioned before, you can also log in to a node via the ILOM. This is very helpful in cases where you can't reach the node, and will help you troubleshoot any issues with it.

/home/antunez> ssh root@10.10.10.10
Password:

Oracle(R) Integrated Lights Out Manager
Version 3.0.16.10.d r74499
Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.

-> start /SP/console
Are you sure you want to start /SP/console (y/n)? y
Serial console started. To stop, type ESC (
RDS/IB: connected to 10.68.10.1 version 3.1
RDS/IB: connected to 10.68.10.1 version 3.1

test01db01.pythian.com login: root
Password:
Last login: Thu Jan 30 20:01:47 from 10.10.10.11

[root@test01db01 ~]# ping test01db02
ib0: Unicast, no dst: type 0000, QPN 060000 0020:0800:1404:0001:8000:0048:fe80:0000
PING test01db02.pythian.com (10.10.10.11) 56(84) bytes of data.
64 bytes from test01db02.pythian.com (10.10.10.11): icmp_seq=1 ttl=64 time=0.077 ms
ib0: Unicast, no dst: type 0000, QPN 060000 0020:0800:1404:0001:8000:0048:fe80:0000
64 bytes from test01db02.pythian.com (10.10.10.11): icmp_seq=2 ttl=64 time=0.080 ms
64 bytes from test01db02.pythian.com (10.10.10.11): icmp_seq=3 ttl=64 time=0.073 ms
ib0: Unicast, no dst: type 0000, QPN 060000 0020:0800:1404:0001:8000:0048:fe80:0000
64 bytes from test01db02.pythian.com (10.10.10.11): icmp_seq=4 ttl=64 time=0.071 ms
64 bytes from test01db02.pythian.com (10.10.10.11): icmp_seq=5 ttl=64 time=0.062 ms
ib0: Unicast, no dst: type 0000, QPN 060000 0020:0800:1404:0001:8000:0048:fe80:0000
64 bytes from test01db02.pythian.com (10.10.10.11): icmp_seq=6 ttl=64 time=0.072 ms

--- test01db02.pythian.com ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5000ms
rtt min/avg/max/mdev = 0.062/0.072/0.080/0.010 ms

[root@test01db01 ~]# ib0: Unicast, no dst: type 0000, QPN 060000 0020:0800:1404:0001:8000:0048:fe80:0000
RDS/IB: connected to 10.10.10.6 version 3.1
[root@test01db01 ~]# ps -eaf | grep pmon
grid     12167     1  0 21:42 ?        00:00:00 asm_pmon_+ASM1
root     13408 11206  0 21:42 ttyS0    00:00:00 grep pmon
[root@test01db01 ~]# sudo su - grid
[grid@test01db01 antunez]$ sh ./crs_status.sh | grep -v "ONLINE"
NAME                      TARGET     STATE      SERVER       STATE_DETAILS
------------------------- ---------- ---------- ------------ ------------------
dbfs_mount                OFFLINE    OFFLINE    test01db01
dbfs_mount                OFFLINE    OFFLINE    test01db02
ora.gsd                   OFFLINE    OFFLINE    test01db01
ora.gsd                   OFFLINE    OFFLINE    test01db02
ora.gsd                   OFFLINE    OFFLINE    test02db01
ora.gsd                   OFFLINE    OFFLINE    test02db02
ora.test.db               OFFLINE    OFFLINE    test01db01   Instance Shutdown
ora.test.db               OFFLINE    OFFLINE

You can also use the IPMI interface to access the ILOM:

[root@test01db01 ~]# ipmitool sunoem cli "show /SP/network"
Connected. Use ^D to exit.
-> show /SP/network
/SP/network
   Targets:
       interconnect
       ipv6
       test
   Properties:
       commitpending = (Cannot show property)
       dhcp_server_ip = none
       ipaddress = 10.10.144.18
       ipdiscovery = static
       ipgateway = 10.10.144.1
       ipnetmask = 255.255.255.0
       macaddress = 00:21:22:F3:2F:7J
       managementport = /SYS/SP/NET0
       outofbandmacaddress = 00:21:22:F3:2F:7J
       pendingipaddress = 10.10.144.18
       pendingipdiscovery = static
       pendingipgateway = 10.10.144.1
       pendingipnetmask = 255.255.255.0
       pendingmanagementport = /SYS/SP/NET0
       sidebandmacaddress = 00:21:22:F3:2F:7J
       state = enabled
   Commands:
       cd
       set
       show

Network

The InfiniBand network technology is used to provide a consolidated interconnect backbone that serves all of the hardware components. A full rack configuration has 3 InfiniBand switches for redundancy and throughput. The compute nodes and the storage devices within are provisioned with redundant InfiniBand connections. iDB is implemented in the database kernel and transparently maps database operations to Exadata-enhanced operations. iDB is used to ship SQL operations down to the cell nodes for execution and to return query result sets to the database kernel. Instead of returning database blocks, the cell nodes return only the rows and columns that satisfy the SQL query. It uses "zero copy", which means data is transferred across the network without intermediate buffer copies in the various network layers.

EXADATA FEATURES

Hybrid Columnar Compression (HCC) or Exadata Hybrid Columnar Compression (EHCC)

HCC is optimized to use both database and storage capabilities on Exadata to deliver tremendous space savings and high performance at the same time. To understand what Hybrid Columnar Compression is, first we need to look at what other types of compression are available for your data blocks:
• Basic compression. – The compression unit is a single Oracle block, and data is compressed only on direct path loads. Modifications force the data to be stored in an uncompressed format, as do inserts that do not use the direct path load mechanism. This is not recommended for OLTP databases. It uses the following syntax: CREATE TABLE ... COMPRESS.
• OLTP compression. – Part of the licensable feature Advanced Compression. It allows data to be compressed for all operations, not only direct path loads. It attempts to allow for future updates by leaving 10% free space in each block via the PCTFREE setting. Once a block becomes "full" it will be compressed. It uses the following syntax: CREATE TABLE ... COMPRESS FOR OLTP.
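The OLTP behavior described above — reserve roughly 10% of each block via PCTFREE, then compress the block's contents in a batch once it fills — can be illustrated with a toy model. This is not Oracle internals: the block size, row size, and 2x compression factor below are arbitrary assumptions for the sketch.

```python
# Toy model of the OLTP compression trigger: rows accumulate in a block,
# and when the block reaches its fill threshold (block size minus the
# PCTFREE reserve), everything in the block is compressed at once.

BLOCK_SIZE = 8192           # assumed block size in bytes
PCTFREE = 0.10              # 10% reserved for future updates
COMPRESSION_FACTOR = 2.0    # hypothetical space saving on compression

class OltpBlock:
    def __init__(self) -> None:
        self.used = 0           # bytes currently consumed in the block
        self.compressions = 0   # how many batch compressions have fired

    def insert(self, row_bytes: int) -> None:
        self.used += row_bytes
        # Block is "full": compress its current contents in a batch.
        if self.used >= BLOCK_SIZE * (1 - PCTFREE):
            self.used = int(self.used / COMPRESSION_FACTOR)
            self.compressions += 1

blk = OltpBlock()
for _ in range(100):
    blk.insert(200)   # 100 rows of 200 bytes = 20,000 raw bytes
print(blk.compressions, blk.used)   # 4 5087
```

The point of the model: compression happens in occasional batches as the block fills, so individual inserts do not each pay the compression cost, which is exactly what makes this mode usable for OLTP workloads.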
HCC is only available for tables stored on Exadata storage. As with BASIC compression, data will only be compressed in HCC format when it is loaded using direct path loads. This is very important to understand, as any inserts or updates will cause the affected records to be stored in OLTP compressed format. HCC has four types of compression:
• Query Low. – Uses the LZO compression algorithm. It provides the lowest compression ratios but requires the least CPU for compression and decompression operations; it is optimized for maximizing speed rather than compression. CREATE TABLE ... COMPRESS FOR QUERY LOW;
• Query High. – Uses the ZLIB (gzip) compression algorithm. Mostly recommended for data warehouses with a focus on space saving. CREATE TABLE ... COMPRESS FOR QUERY HIGH;
• Archive Low. – Also uses the ZLIB (gzip) compression algorithm, but at a higher compression level than Query High. It is mostly recommended for archival data with load time as a critical factor. CREATE TABLE ... COMPRESS FOR ARCHIVE LOW;
• Archive High. – Uses Bzip2 compression. It is the highest level of compression available but is the most CPU-intensive. This compression method is mostly recommended for archival data with maximum space saving. CREATE TABLE ... COMPRESS FOR ARCHIVE HIGH;

As an example of the compression ratios, we created 2 tables. The FIRST table is very narrow, consisting of only 8 columns. The table has close to 20 million rows, and many of the columns have a very HIGH number of distinct values (NDV).
SQL> CREATE TABLE EX_TAB_ORIGINAL TABLESPACE USERS AS
  2  SELECT LEVEL empl_id,
  3  MOD (ROWNUM, 50000) dept_id,
  4  TRUNC (DBMS_RANDOM.VALUE (1000, 500000), 2) salary,
  5  DECODE (ROUND (DBMS_RANDOM.VALUE (1, 2)), 1, 'M', 2, 'F') gender,
  6  TO_DATE (ROUND (DBMS_RANDOM.VALUE (1, 28))|| '-'||
  7  ROUND (DBMS_RANDOM.VALUE (1, 12))|| '-'||
  8  ROUND (DBMS_RANDOM.VALUE (1900, 2010)),
  9  'DD-MM-YYYY') dob,
 10  DBMS_RANDOM.STRING ('x', DBMS_RANDOM.VALUE (20, 50)) address1,
 11  DBMS_RANDOM.STRING ('u', DBMS_RANDOM.VALUE (20, 50)) address2,
 12  DBMS_RANDOM.STRING ('a', DBMS_RANDOM.VALUE (20, 50)) address3
 13  FROM DUAL CONNECT BY LEVEL < 10000000;

SQL> INSERT INTO EX_TAB_ORIGINAL SELECT /*+ PARALLEL (A 8) */ * FROM EX_TAB_ORIGINAL A;

9999999 rows created.

SQL> SELECT COUNT(*) FROM EX_TAB_ORIGINAL;

  COUNT(*)
----------
  19999998

The SECOND table is again very narrow, consisting of only eight columns; it has close to 19 million rows, and many of the columns have a very LOW number of distinct values (NDV), meaning that the same values are repeated many times.
SQL> CREATE TABLE EX_TAB_ORIGINAL_2 TABLESPACE USERS AS
  2  SELECT LEVEL empl_id,
  3  MOD (ROWNUM, 50000) dept_id,
  4  TRUNC (DBMS_RANDOM.VALUE (1000, 500000), 2) salary,
  5  DECODE (ROUND (DBMS_RANDOM.VALUE (1, 2)), 1, 'M', 2, 'F') gender,
  6  TO_DATE (ROUND (DBMS_RANDOM.VALUE (1, 28))|| '-'||
  7  ROUND (DBMS_RANDOM.VALUE (1, 12))|| '-'||
  8  ROUND (DBMS_RANDOM.VALUE (1900, 2010)),
  9  'DD-MM-YYYY') dob,
 10  DBMS_RANDOM.STRING ('x', DBMS_RANDOM.VALUE (20, 50)) address1,
 11  DBMS_RANDOM.STRING ('u', DBMS_RANDOM.VALUE (20, 50)) address2,
 12  DBMS_RANDOM.STRING ('a', DBMS_RANDOM.VALUE (20, 50)) address3
 13  FROM DUAL CONNECT BY LEVEL < 10;

SQL> insert into EX_TAB_ORIGINAL_2 select * from EX_TAB_ORIGINAL_2;

9 rows created.

SQL> insert into EX_TAB_ORIGINAL_2 select * from EX_TAB_ORIGINAL_2;

18 rows created.
...
SQL> insert into EX_TAB_ORIGINAL_2 select * from EX_TAB_ORIGINAL_2;

9437184 rows created.

SQL> SELECT COUNT(*) FROM EX_TAB_ORIGINAL_2;

  COUNT(*)
----------
  18874368

Based on these 2 tables, we created 12 tables, one for each compression method, with the results below:

Compression Method   High NDV                Low NDV
                     MB         Reduction    MB         Reduction
No Compression       3,109.00       -        3,008.00       -
Basic                2,769.00    10.90%        246.88    91.80%
OLTP                 3,080.60     0.90%        280.94    90.70%
Query Low            2,466.90    20.70%        113.25    96.20%
Query High           1,637.40    47.30%          5.31    99.80%
Archive Low          1,632.80    47.50%          5.31    99.80%
Archive High         1,546.10    50.30%          5.31    99.80%

Segment size and reduction ratio for each compression method
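Segment sizes such as those above can be gathered with a simple dictionary query. Below is a sketch; the owner and the name pattern are placeholders for your own objects, and the reduction percentage is computed against a hard-coded uncompressed baseline (3,109 MB, the High-NDV figure from our test):

```sql
SELECT segment_name,
       ROUND(bytes / 1024 / 1024, 2) AS size_mb,
       -- reduction vs. the uncompressed baseline (3,109 MB; adjust for your data)
       ROUND((1 - bytes / (3109 * 1024 * 1024)) * 100, 2) AS reduction_pct
  FROM dba_segments
 WHERE owner = 'TST'                  -- placeholder schema
   AND segment_name LIKE 'EX_TAB%'   -- placeholder name pattern
 ORDER BY bytes DESC;
```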
Compression Method   Segment Creation - Duration
                     High NDV    Low NDV
Basic                 00:13.7     00:06.6
OLTP                  00:13.0     00:05.7
Query Low             00:10.4     00:07.0
Query High            00:26.4     00:13.7
Archive Low           00:36.8     00:13.9
Archive High          02:05.1     00:42.8

Time of creation for each of the compressed tables

Compression Method   Select Time - Duration
                     High NDV    Low NDV
No Compression        00:01.6     00:04.0
Basic                 00:01.7     00:02.3
OLTP                  00:01.8     00:01.4
Query Low             00:01.3     00:01.1
Query High            00:01.6     00:01.0
Archive Low           00:01.4     00:00.7
Archive High          00:01.8     00:00.9

Select time duration for each of the compressed tables

Oracle also provides an EHCC advisor called GET_EHCC_CR, which can be found in MOS note 1269846.1. This advisor can help you estimate the compression ratio that a table will achieve depending on the compression method used. The fifth parameter of the advisor defines the compression type:
1. Query Low
2. Query High
3. Archive Low
4. Archive High

e.g.

SQL> EXEC GET_EHCC_CR('DATA','TST','EX_TAB_ORIGINAL',NULL,3);
Compression Advisor self-check validation successful. select count(*) on both Uncompressed and EHCC Compressed format = 1000001 rows
COMPRESSED_TYPE     = "Compress For Archive Low"
COMPRESSED_BLOCKS   = 10417
UNCOMPRESSED_BLOCKS = 19611
COMPRESSED_ROWS     = 96
UNCOMPRESSED_ROWS   = 51
COMPRESSION_RATIO   = 1.8

PL/SQL procedure successfully completed.

Elapsed: 00:01:38.62
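From 11.2 onward, Oracle also ships a supported advisor in the DBMS_COMPRESSION package that estimates ratios without the MOS script. The following is a sketch for an Archive Low estimate; the schema, table name, and scratch tablespace are placeholders:

```sql
SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(100);
BEGIN
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'USERS',            -- scratch tablespace (placeholder)
    ownname        => 'TST',              -- placeholder schema
    tabname        => 'EX_TAB_ORIGINAL',  -- placeholder table
    partname       => NULL,
    comptype       => DBMS_COMPRESSION.COMP_FOR_ARCHIVE_LOW,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  DBMS_OUTPUT.PUT_LINE(l_comptype_str || ' estimated ratio: ' || l_cmp_ratio);
END;
/
```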
Smart Scan and Offloading

Smart Scan is software engineered for Exadata that can provide significant I/O savings and dramatically improved performance. Exadata Smart Scan processes queries at the storage layer, returning only relevant rows and columns to the database server. Offloading refers to the concept of moving processing from the database servers to the storage layer; its primary benefit is the reduction in the volume of data that must be returned to the database server. There are three basic requirements that must be met for Smart Scans to occur:
• There must be a full scan of an object. These correspond to the TABLE ACCESS FULL and INDEX FAST FULL SCAN operations of an execution plan.
• The scan must use Oracle's Direct Path Read mechanism, the buffering model required for a Smart Scan: the session reads buffers from disk directly into the PGA (as opposed to the buffer cache in the SGA). This occurs in the following situations:
  o Sorts are too large to fit in memory and some of the sort data is written out directly to disk; this data is later read back in using direct reads.
  o Parallel execution servers are used for scanning data.
  o The server process is processing buffers faster than the I/O system can return them. This can indicate an overloaded I/O system.
• The object must be stored on Exadata storage. The ASM disk group attribute CELL.SMART_SCAN_CAPABLE specifies whether a disk group is capable of processing Smart Scans.

SQL> create diskgroup DATA_TEST normal redundancy
     disk 'o/*/DATA*'
     attribute 'compatible.rdbms' = '11.2.0.3.0',
               'compatible.asm' = '11.2.0.3.0',
               'cell.smart_scan_capable' = 'TRUE',
               'au_size' = '4M';

Diskgroup created.
A query will not use Smart Scan if the columns being requested by the query include a database large object (LOB), or if the table is a clustered table or an index-organized table. The C function that performs Smart Scans (KCFIS_READ) is called by the direct path read C function (KCBLDRGET), which in turn is called by one of the full scan functions. The parameters below control Smart Scan behavior:
• CELL_OFFLOAD_DECRYPTION (default TRUE) – Controls whether decryption is offloaded. Note that when this parameter is set to FALSE, Smart Scans are completely disabled on encrypted data.
• CELL_OFFLOAD_PLAN_DISPLAY (default AUTO) – Controls whether Exadata operation names are used in execution plan output from XPLAN. AUTO means to display them only if the database is using Exadata storage.
• CELL_OFFLOAD_PROCESSING (default TRUE) – Turns Offloading on or off.
• _SERIAL_DIRECT_READ (default AUTO) – Controls the serial direct path read mechanism. The valid values are AUTO, TRUE, FALSE, ALWAYS and NEVER.

Offloading Parameters
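One way to verify that a given statement was actually offloaded is to compare the bytes eligible for offload with the bytes that actually crossed the interconnect, using the offload-related columns of V$SQL. A sketch follows; the SQL_ID is a placeholder you substitute at runtime:

```sql
SELECT sql_id,
       io_cell_offload_eligible_bytes AS eligible_bytes,
       io_interconnect_bytes          AS interconnect_bytes,
       ROUND(100 * (io_cell_offload_eligible_bytes - io_interconnect_bytes)
                 / NULLIF(io_cell_offload_eligible_bytes, 0), 2) AS offload_pct
  FROM v$sql
 WHERE sql_id = '&sql_id';   -- substitute the SQL_ID of interest
```

A value of zero in ELIGIBLE_BYTES usually means the statement did not qualify for a Smart Scan at all.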
With Smart Scan come several optimizations:
• Column Projection or Column Filtering
The cell server will only return the columns requested. For example, if a table has 50 columns and only 5 are selected, with 2 more involved in the join operation, the cell server will return only those 7 columns.
• Predicate Filtering
The cell server will only return the rows requested instead of all the rows in a table. Since iDB includes the predicate information in its requests, this is accomplished by performing the standard filtering operations at the storage cells before returning the data.
• Storage Indexes
An in-memory structure on the cell server whose main goal is to reduce the number of I/Os from disk. The Storage Index keeps track of the minimum and maximum values of columns for tables stored on that cell. Storage Indexes are automatically created and maintained transparently by the Exadata Storage Server Software; because of this, there is very little that can be done to affect when or how they are used. There are three "commonly" known hidden database parameters that deal with Storage Indexes, and, as always, use hidden parameters only for testing or when directed by Oracle:
  o _kcfis_storageidx_disabled – When set to TRUE it turns Storage Indexes off; the default FALSE leaves them on.
  o _kcfis_storageidx_diag_mode – Enables tracing of Storage Indexes; the default 0 disables it. Setting the parameter to 2 enables Storage Index diagnostics/tracing. The trace files are generated on the cell node in $ADR_BASE/diag/asm/cell/<node>/trace.
  o _cell_storidx_mode – Controls where Storage Indexes will be applied. There are three valid values for this parameter (EVA, KDST, ALL); EVA, the default, and KDST are Oracle kernel function names.
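Since Storage Indexes cannot be created or dropped manually, about the only visible evidence of their work is the statistic that records the disk I/O they avoided. A sketch of checking it for the current session (the statistic name is as exposed in 11.2):

```sql
SELECT s.name, m.value
  FROM v$statname s
  JOIN v$mystat   m ON m.statistic# = s.statistic#
 WHERE s.name = 'cell physical IO bytes saved by storage index';
```

A nonzero value after running a query indicates that one or more cells skipped disk regions thanks to their Storage Indexes.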
EXADATA PATCHING

Patching is a regular task for a DBA, and it has evolved as you adopt more technologies: going from a single-instance environment to a RAC environment, you not only have to patch the Database binaries, you also have to patch the Grid Infrastructure. In an Oracle Exadata environment it is taken to the next level, as you now have several components: the firmware of the nodes, the Operating System, the InfiniBand switches, as well as the RDBMS and Grid Infrastructure binaries. MOS document 888828.1 always has the latest patches and release dates. There are different types of patches for the Exadata Database Machine, as listed below:
• Storage Server software
Contains updates to firmware, operating system, and/or Exadata Storage Server software. These patches are independent of the Database server, but they might require that the Database Server or Database Server firmware be at a specific version. They are installed in one of two manners:
  o Rolling. – Applied one storage server at a time while the databases remain operational.
  o Non-rolling. – Applied to all storage servers simultaneously while the databases are offline.
• Database server
  o Oracle Grid Infrastructure and Database software
Most updates are delivered in bundle patches created specifically for Exadata for the Oracle Database (DB_BP) and Oracle Clusterware (GI_BP), which are provided through the Quarterly Database Patch for Exadata (QDPE).
  o Operating system software and firmware
An Exadata-specific patch that has to be applied to the Database Server operating system and firmware. Beginning with 11.2.3.1.0, updates are delivered through the Unbreakable Linux Network (ULN), which requires populating and maintaining a yum repository for Exadata (MOS note 1473002.1). You can also use the dbnodeupdate.sh utility, which automates all the steps and checks to upgrade Oracle Exadata database servers to a new Exadata release and replaces the manual steps; there are versions for which this utility can't be used, so verify MOS note 1553103.1 to see where it can or can't be applied.
• InfiniBand switch software
The patch contains updates to the software and/or firmware for the InfiniBand switches. These have no dependency on the Storage Server unless specified in the readme.
• Additional components
Ethernet switch, KVM, and PDU patches; these are normally maintained at your discretion.

For the Storage Server, Database server, and InfiniBand components, Oracle provides a Quarterly Full Stack Download Patch (QFSDP) on a regular schedule aligned with the CPU and PSU releases. This doesn't mean that you apply it as one patch; it simply bundles the current software patches for the whole stack. QFSDP releases contain the latest software for the following components:
• Infrastructure
  o Exadata Storage Server
  o InfiniBand Switch
  o Power Distribution Unit (PDU)
• Database
  o Oracle Database and Grid Infrastructure
  o OPatch
  o OPlan
• Systems Management
  o EM Agent
  o EM OMS
  o EM Plug-ins

Below is an example of the ease of use of dbnodeupdate.sh. Keep in mind that, due to the reboot, it has to be run a second time with the -c option to complete the update; this runs the needed post-upgrade steps.
[root@test01db01 ~]# ./dbnodeupdate.sh -u -l /u01/stage/p16432033/p16432033_112321_Linux-x86-64.zip
(*) 2013-12-07 03:24:14: Unzipping helpers (/u01/stage/BP21/17452393/Infrastructure/ExadataDBNodeUpdate/1.81/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers
(*) 2013-12-07 03:24:14: Initializing logfile /var/log/cellos/dbnodeupdate.log

Warning: Active NFS and/or SMBFS mounts found on this DB server node. At a later stage dbnodeupdate.sh will try unmounting them silently. However, during the collection of system configuration stale network mounts may cause long waits. It is therefore recommended to unmount any active network mount now before continuing.

Continue ? [Y/n] Y

(*) 2013-12-07 03:24:25: Collecting system configuration details...
(*) 2013-12-07 03:24:40: Checking free space in /u01
(*) 2013-12-07 03:24:40: Unzipping /u01/stage/p16432033/p16432033_112321_Linux-x86-64.zip to /u01/app/oracle/stage.4522, this may take a while
(*) 2013-12-07 03:24:52: Backing up /etc/yum.repos.d/Exadata-computenode.repo in /etc/yum.repos.d.bak/071213032414
(*) 2013-12-07 03:24:52: Backing up /etc/yum.repos.d/Exadata-computenode.repo.sample in /etc/yum.repos.d.bak/071213032414
(*) 2013-12-07 03:24:52: Backing up /etc/yum.repos.d/not_supported.repo in /etc/yum.repos.d.bak/071213032414

Active Image version   : 11.2.3.1.0.120304
Active Kernel version  : 2.6.18-274.18.1.0.1.el5
Active LVM Name        : /dev/mapper/VGExaDb-LVDbSys1
Inactive Image version : n/a
Inactive LVM Name      : /dev/mapper/VGExaDb-LVDbSys2
Current user id        : root
Action                 : upgrade
Upgrading to           : 11.2.3.2.1.130302
Baseurl                : file:///var/www/html/yum/unknown/EXADATA/dbserver/11.2/latest.4522/x86_64/ (iso)
Iso file               : /u01/stage/stage.4522/112_latest_repo_130302.iso
Create a backup        : Yes
Shutdown stack         : No (Currently stack is down)
Hotspare to be claimed : Yes
  : Raid reconstruction will be performed online, may take up to 4 hours and impact performance
  : If you want the hotspare reclaimed at a later time, then execute the following command and restart dbnodeupdate.sh
  : "touch /opt/oracle/EXADATA_KEEP_HOT_SPARE_ON_YUM_UPDATE"
Logfile                : /var/log/cellos/dbnodeupdate.log (runid: 071213032414)
Diagfile               : /var/log/cellos/dbnodeupdate.071213032414.diag
Server model           : SUN FIRE X4170 M2 SERVER
Remote mounts exist    : Yes (dbnodeupdate.sh will try unmounting)
dbnodeupdate.sh rel.   : 1.81 (always check MOS 1553103.1 for the latest release)

Automatic checks incl.
  : Issue 1.8 - Hotspare not reclaimed
  : Issue 1.10 - Cell and Database image versions 11.2.2.2.2 or lower require workaround before patching
  : Database servers with an ofa rpm earlier than 1.5.1-4.0.28 can encounter a file system corruption
  : Issue 1.14 - Upgrade to 11.2.3.x failed due to Sas Exp. FW not upgrd. first to 5.7.0 on X4800 and X4800 M2
  : Issue 1.15 - Filesystem checks not disabled on database servers
  : Issue 1.16 - Verify the vm.min_free_kbytes kernel parameter on database servers to make sure 512MB is reserved
  : Issue 1.17 - Kdump not working on database servers running uek kernel after upgrading to 11.2.3.2.1
  : Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
Manual checks todo
  : Issue 1.11 - Database Server upgrades to 11.2.2.3.0 or higher may hit network routing issues after the upgrade
Note : After upgrading and rebooting run './dbnodeupdate.sh -c' to finish post steps

Continue ? [Y/n] Y

(*) 2013-12-07 03:25:30: Verifying GI and DB's are shutdown
(*) 2013-12-07 03:25:32: Collecting console history for diag purposes
(*) 2013-12-07 03:25:45: Successfully unmounted /mnt/u03
(*) 2013-12-07 03:25:45: Successfully unmounted /mnt/bckup
(*) 2013-12-07 03:25:45: Successfully unmounted /dbbackup
(*) 2013-12-07 03:25:45: Successfully unmounted /dbbackups/channel1
(*) 2013-12-07 03:25:45: Successfully unmounted /dbbackups/channel2
(*) 2013-12-07 03:25:45: Successfully unmounted /dbbackups/channel3
(*) 2013-12-07 03:25:46: Successfully unmounted /dbbackups/channel4
(*) 2013-12-07 03:25:46: Successfully unmounted /dbbackups/channel5
(*) 2013-12-07 03:25:46: Successfully unmounted /dbbackups/channel6
(*) 2013-12-07 03:25:46: Successfully unmounted /dbbackups/channel7
(*) 2013-12-07 03:25:46: Successfully unmounted /dbbackups/channel8
(*) 2013-12-07 03:25:46: Successfully unmounted /mnt/u04
(*) 2013-12-07 03:25:46: Successfully unmounted /mnt/usb
(*) 2013-12-07 03:25:46: Unmount of /boot successful
(*) 2013-12-07 03:25:46: Check for /dev/sda1 successful
(*) 2013-12-07 03:25:46: Mount of /boot successful
(*) 2013-12-07 03:25:46: Performing filesystem backup to /dev/mapper/VGExaDb-LVDbSys2 (estimated 4-12 minutes)
(*) 2013-12-07 03:32:40: Backup successful
(*) 2013-12-07 03:32:40: Verifying and updating yum.conf (backup in /etc/yum.conf.071213_032414)
(*) 2013-12-07 03:32:41: Disabling other repositories, generating Exadata repos
(*) 2013-12-07 03:32:41: Backing up /etc/yum.repos.d/Exadata-computenode.repo in /etc/yum.repos.d.bak/071213032414
(*) 2013-12-07 03:32:41: Backing up /etc/yum.repos.d/Exadata-computenode.repo.sample in /etc/yum.repos.d.bak/071213032414
(*) 2013-12-07 03:32:41: Backing up /etc/yum.repos.d/not_supported.repo in /etc/yum.repos.d.bak/071213032414
(*) 2013-12-07 03:32:41: Generating /etc/yum.repos.d/Exadata-computenode.repo
(*) 2013-12-07 03:32:41: Verifying baseurl
(*) 2013-12-07 03:32:42: Disabling stack from starting
(*) 2013-12-07 03:32:43: OSWatcher stopped successful
(*) 2013-12-07 03:33:00: EM Agent (in /u01/app/oracle/product/12.1/agent/core/12.1.0.3.0) stopped successfully
(*) 2013-12-07 03:33:00: Emptying the yum cache
(*) 2013-12-07 03:33:00: Removing rpm libcxgb3-static.x86_64 (if installed)
(*) 2013-12-07 03:33:00: Removing rpm rpm-build.x86_64 (if installed)
(*) 2013-12-07 03:33:00: Performing yum update.
Node is expected to reboot when finished
(*) 2013-12-07 03:36:34: All above steps finished.
(*) 2013-12-07 03:36:34: System will reboot automatically for changes to take effect
(*) 2013-12-07 03:36:34: After reboot run "./dbnodeupdate.sh -c" to complete the upgrade

CONCLUSION

The Oracle Exadata Machine represents a paradigm shift for Database Administrators: as Arup Nanda put it, you leave behind being a DBA (Database Administrator) and move toward becoming a DMA (Database Machine Administrator). Oracle Exadata now comes in several versions and configurations, and you are now ready to delve deeper into each topic so that you can become an expert DMA.
REFERENCES

Exadata Database Server Patching using the DB Node Update Utility (Doc ID 1553103.1)
Exadata Patching Overview and Patch Testing Guidelines (Doc ID 1262380.1)
Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)
How to shutdown the Exadata database nodes and storage cells in a rolling fashion so certain hardware tasks can be performed (Doc ID 1539451.1)
Exadata Database Machine: How to identify cell failgroups and Partner disks for a grid disk (Doc ID 1431697.1)
Oracle's Secret Sauce: Why Exadata Is Rocking the Tech Industry
http://www.forbes.com/sites/oracle/2012/11/02/oracles-secret-sauce-why-exadata-is-rocking-the-tech-industry/
A grand tour of Oracle Exadata, Part 1
http://www.pythian.com/blog/exadata-part-1/
Exadata Smart Flash Cache Features and the Oracle Exadata Database Machine
http://www.oracle.com/technetwork/database/exadata/exadata-smart-flash-cache-366203.pdf
Exadata Part VII: Meaning of the various Disk Layers
http://uhesse.com/2011/05/18/exadata-part-vii-meaning-of-the-various-disk-layers/
Exadata Storage Layout
http://blog.enkitec.com/wp-content/uploads/2011/02/Enkitec-Exadata-Storage-Layout11.pdf
Oracle Exadata Database Machine X4-2
http://www.oracle.com/technetwork/server-storage/engineered-systems/exadata/exadata-dbmachine-x4-2-ds-2076448.pdf
Exadata Diskgroup Planning
http://blog.oracle-ninja.com/2011/12/exadata-diskgroup-planning/
A Technical Overview of the Oracle Exadata Database Machine and Exadata Storage Server
http://www.oracle.com/technetwork/database/exadata/exadata-technical-whitepaper-134575.pdf
Exadata MAA Best Practices Series
http://www.oracle.com/webfolder/technetwork/exadata/maa-bestp/patching/patch.pdf
Patching an Exadata Compute Node
http://www.fuadarshad.com/2013/05/patching-exadata-compute-node.html
Exadata Patching Overview
http://www.pythian.com/blog/exadata-patching-overview/
Oracle Integrated Lights Out Manager (ILOM) 3.0 Concepts Guide
http://docs.oracle.com/cd/E19469-01/820-6410-12/ilom_about.html#50614323_61001
