Avi Apelbaum - RAC


  1. RAC, ASM and Linux Forum, October 12, 2010. Avi Apelbaum, DBA & System Engineer, Valinor
  2. Agenda: Upgrading a 10g cluster to 11gR2 Grid Infrastructure; Moving ASM to extended RAC; Questions
  3. Upgrading a 10g cluster to 11gR2 GI. Technique 1: Creating a new cluster.
  4. Technique 1: Creating a new cluster. Step 1: If your DB is 10.2.0.1 or below, first upgrade it to 10.2.0.4. Step 2: Take (of course) a full backup of the DB (RMAN or storage snapshot). The following steps apply if you perform the upgrade on the existing servers: Step 3: Back up the spfile (if not in ASM) / init.ora. Step 4: Take notes of the current services configuration (preferred nodes, TAF policies, etc.).
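     A minimal sketch of the spfile backup in step 3, assuming the environment is set for one cluster node; the output path is an illustrative placeholder, not from the original slides:

     ```shell
     # Write a text copy of the spfile outside ASM so it survives the rebuild
     sqlplus -s / as sysdba <<'EOF'
     CREATE PFILE='/backup/initORCL1.ora' FROM SPFILE;
     EOF
     ```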
  5. Technique 1: Creating a new cluster. Step 5: Uninstall the RDBMS software (if ASM and DB homes are separate, then both of them). Step 6: Uninstall the Clusterware and clean up the machine (see Metalink note 239998.1). Step 7: Install 11gR2 Grid Infrastructure. Step 8: Install the 10.2.0.1 RDBMS software and upgrade it to 10.2.0.4 (or the version of your DB).
  6. Technique 1: Creating a new cluster. Step 9: Copy the backed-up spfile/init.ora to its new place. Step 10: Add the DB to the new cluster using "srvctl add database", then add the instances using "srvctl add instance". Step 11: Add the services using "srvctl add service".
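     Steps 10 and 11 can be sketched as follows; the DB name, Oracle home, instance/node names and service name are illustrative placeholders, not from the original slides:

     ```shell
     # Register the existing database and its instances with the new cluster
     srvctl add database -d ORCL -o /u01/app/oracle/product/10.2.0/db_1
     srvctl add instance -d ORCL -i ORCL1 -n node1
     srvctl add instance -d ORCL -i ORCL2 -n node2
     # Re-create a service using the preferred/available nodes noted in step 4
     srvctl add service -d ORCL -s oltp_srv -r ORCL1 -a ORCL2 -P BASIC
     ```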
  7. Technique 1: Creating a new cluster. If you choose to do it on a new machine, you have 3 possibilities: 1. After shutting down the DB, unmap the LUNs from the old machines and map them to the new machine (it has to be the same OS). If using Linux, run the command "oracleasm scandisks" as the root user and then "oracleasm listdisks". Otherwise you can use the command "kfod disks=all dscvgroup=TRUE".
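     A sketch of the disk re-discovery on the new Linux host, assuming ASMLib is installed; run as root:

     ```shell
     oracleasm scandisks   # scan the new paths for ASM-labelled disks
     oracleasm listdisks   # verify all expected disk labels are visible
     # Without ASMLib, query the ASM discovery layer directly:
     kfod disks=all dscvgroup=TRUE
     ```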
  8. Technique 1: Creating a new cluster. 2. Export the data and then import it into a newly created database. 3. Use Transportable Database to move it to a new machine. In this case the DB can be moved between platforms (see the Oracle documentation for limitations).
  9. Upgrading a 10g cluster to 11gR2 GI. Technique 2: Upgrading the existing cluster.
  10. Technique 2: Upgrading the existing cluster. This technique is well documented by Oracle, but I chose to build a new cluster due to the following reasons/issues: When beginning the upgrade we had only 1 votedisk. After running rootUpgrade.sh on the first node, that node changed/upgraded the only votedisk available, and the second node's upgrade (of course) failed.
  11. Technique 2: Upgrading the existing cluster. After a second retry, which succeeded, we restarted the cluster at the final step, but it failed to start: for some unknown reason the interconnect and public interface configuration had changed in such a way that the cluster could no longer start, and it could not reach a state where reconfiguration (using oifcfg) was possible.
  12. Upgrading a 10g cluster to 11gR2 GI. Using ASM for Extended RAC.
  13. Moving ASM to Extended RAC. Extended ASM is actually a disk group with normal or high redundancy in which each failure group is on a separate storage machine in a different location.
  14. Moving ASM to Extended RAC. I used the following main steps to migrate our 11gR2 ASM to extended RAC: Step 1: Map new volumes from both storage machines to all the cluster nodes. The same number and size of volumes should be used on both storages. Step 2: Create new diskgroup(s) with normal redundancy, where each failgroup is on a different storage.
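     Step 2 can be sketched as follows, run against the ASM instance; the diskgroup, failgroup and disk path names are illustrative placeholders, not from the original slides:

     ```shell
     sqlplus -s / as sysasm <<'EOF'
     -- One failgroup per storage site, so normal redundancy mirrors across sites
     CREATE DISKGROUP DATA_EXT NORMAL REDUNDANCY
       FAILGROUP site_a DISK '/dev/oracleasm/disks/SITEA_D1',
                             '/dev/oracleasm/disks/SITEA_D2'
       FAILGROUP site_b DISK '/dev/oracleasm/disks/SITEB_D1',
                             '/dev/oracleasm/disks/SITEB_D2';
     EOF
     ```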
  15. Moving ASM to Extended RAC. Step 2a: Create a normal-redundancy diskgroup with at least 3 disks for the votedisks and OCR. Step 3: Move the votedisks to the new DG ("crsctl replace votedisk +<NEW DG NAME>"). Step 4: Move the OCR disks (ocrconfig). Step 5: Move the controlfiles to the new DGs.
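     A sketch of steps 3 and 4, run as root on one node; "+CRS_EXT" and "+OLD_CRS" are illustrative diskgroup names, not from the original slides:

     ```shell
     crsctl replace votedisk +CRS_EXT   # relocate the voting files into the new DG
     ocrconfig -add +CRS_EXT            # add an OCR location in the new DG
     ocrconfig -delete +OLD_CRS         # then drop the old OCR location
     crsctl query css votedisk          # verify the new voting file layout
     ocrcheck                           # verify OCR integrity and locations
     ```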
  16. Moving ASM to Extended RAC. Step 6: With the DB in mount state, copy the datafiles to the new DG using the command "backup as copy database format '+<NEW DATA DG>'". Step 7: After successful completion of the copy, update the control file with the copied datafiles: "switch database to copy" and then "alter database open".
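     Steps 6 and 7 can be sketched as one RMAN session; "+DATA_EXT" is an illustrative target diskgroup name:

     ```shell
     rman target / <<'EOF'
     STARTUP MOUNT;
     BACKUP AS COPY DATABASE FORMAT '+DATA_EXT';  # image copies into the new DG
     SWITCH DATABASE TO COPY;                     # repoint controlfile to the copies
     ALTER DATABASE OPEN;
     EOF
     ```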
  17. Moving ASM to Extended RAC. Step 8: Create a new temp tablespace, or add a new file to the current one and then delete the old file from that temp TBS ("alter database tempfile '<path to file>' drop;"). Step 9: In the ASM instance on each node, set the parameter asm_preferred_read_failure_groups. This parameter instructs the ASM instance on each node which FAILGROUP it should read from, if possible. It DOES NOT affect writes. The syntax of this parameter is <DiskGroup>.<FailGroup>, <DiskGroup>.<FailGroup>, etc.
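     A sketch of step 9, run against the ASM instance; the diskgroup and failgroup names are illustrative. Each node is pointed at its local failgroup:

     ```shell
     sqlplus -s / as sysasm <<'EOF'
     -- Node 1 reads from site_a; on node 2 the same parameter would be set
     -- to 'DATA_EXT.SITE_B' for SID '+ASM2'
     ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA_EXT.SITE_A'
       SID='+ASM1' SCOPE=BOTH;
     EOF
     ```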
  18. Q&A. Upgrading a 10g cluster to 11gR2 GI.
  19. [email_address] Upgrading a 10g cluster to 11gR2 GI.
