„Maximize Availability with Oracle Database 12c” Tomasz Murtas, Technology Sales Consultant, Oracle Polska
Published on

Plug into the Cloud with Oracle Database 12c, 27.06.2013


Published in: Technology, Business

  • Developed and prepared by: Ashish Ray, Senior Director, HA Product Management (Ashish.Ray@oracle.com)
  • Customer pain points: database session outages (planned and unplanned) have a significant impact on the user experience.
    - Doubtful outcome: users are left not knowing what happened to their funds transfers, orders, payments, bookings ...
    - Usability: users see an error, lose screens of uncommitted data, and need to log in again and re-enter or resubmit, sometimes leading to logical corruption.
    - Disruption: DBAs sometimes need to reboot mid-tiers.
    Developer pain points: the current approach to outages places the onus on developers to write exception handling in every possible place.
    - Every code module needs exception code to know whether transactions committed.
    - Exception handling must work for all transaction sources.
    - Rebuilding non-transactional state is near impossible for an application that is modifying state at runtime.
  • Transaction Guard is a reliable protocol and API that applications use to obtain a reliable commit outcome. The API is embedded in error handling and should be called following recoverable errors. The outcome indicates whether or not the last transaction was committed and completed. Once the commit outcome is returned to the application, it persists: if Transaction Guard returns committed or uncommitted, the status stays that way, enabling the application or user to make a stable next decision.
    Why use Transaction Guard?
    The application uses Transaction Guard to return the known outcome, committed or uncommitted, and the user or application can decide the next action to take: for example, resubmit when the last transaction on the session has not committed, or continue when the last transaction has committed and the last call has completed. Transaction Guard is used by Application Continuity and automatically enabled by it, but it can also be enabled independently. It prevents the transaction being replayed by Application Continuity from being applied more than once. If the application has implemented application-level replay, that replay may need to be integrated with Transaction Guard to provide idempotence.
    Understanding Transaction Guard
    In the standard commit case, the database commits a transaction and returns a success message to the client. In the illustration shown on the slide, the client submits a commit statement and receives a message stating that communication failed. This type of failure can occur for several reasons, including a database instance failure or a network outage, and it leaves the client not knowing the state of the transaction. Oracle Database solves this by using a globally unique identifier called a logical transaction ID (LTXID). While the application is running, both the database and the client hold the LTXID.
    The database gives the client an LTXID at authentication and at each round trip from the client driver that executes one or more commit operations. The LTXID uniquely identifies the last database transaction submitted on the session that failed. For each round trip from the client in which one or more transactions are committed, the database persists the LTXID, providing transaction idempotence for interactions between the application and the database for each round trip that commits data. When a recoverable outage occurs, the exception handling is modified to get the LTXID and call a new PL/SQL interface, DBMS_APP_CONT.GET_LTXID_OUTCOME, which returns the reliable commit outcome (see the development and deployment section).
    Preserving the commit outcome:
    - The client receives a logical transaction ID (LTXID)
    - The client is guaranteed the outcome of the last submission
    - A global protocol blocks out-of-order flows
    - Safe for applications to return success or resubmit
    - Used by Application Continuity
    Typical usage
    =============
    1. Database crashes: (i) FAN aborts the dead session; (ii) the application gets an error; (iii) the connection pool removes the orphan connection from the pool.
    2. If the error is recoverable, get the last LTXID of the dead session using getLogicalTransactionId or from your callback.
    3. Obtain a new session.
    4. Call DBMS_APP_CONT.GET_LTXID_OUTCOME:
       - If committed, return the result; the application may continue.
       - Else return "uncommitted"; the application cleans up and resubmits the request. If uncommitted, the protocol prevents the original transaction from eventually committing.
    Transaction Guard solution coverage
    ===================================
    - Clients: JDBC-thin, OCI, OCCI, ODP.NET
    - Database: uses the logical transaction ID (LTXID)
    - Commit models: local transactions; auto-commit and commit-on-success; commits embedded in PL/SQL; DDL, DCL, and parallel DDL; remote and distributed transactions. XA is excluded in 12.1.
    Transaction Guard supports all listed transaction types. The primary exclusions in 12c are XA and read/write database links from Active Data Guard.
    Configuration
    =============
    To configure Transaction Guard, set the service attribute COMMIT_OUTCOME (values TRUE and FALSE; default FALSE; applies to new sessions). Optionally change the service attribute RETENTION_TIMEOUT (units: seconds; default 24 hours, i.e. 86400; maximum 30 days, i.e. 2592000). Oracle Database 12c provides the Transaction Guard interface and APIs for JDBC thin, OCI, OCCI, and ODP.NET.
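The error-handling path described above can be sketched in PL/SQL. This is a minimal sketch, assuming Transaction Guard is already enabled on the service and the LTXID of the dead session has been captured on the client (e.g. via getLogicalTransactionId in JDBC); the bind name and the NULL branches are illustrative placeholders, while DBMS_APP_CONT.GET_LTXID_OUTCOME and its parameters are the documented 12c interface:

```sql
DECLARE
  l_committed  BOOLEAN;
  l_completed  BOOLEAN;
BEGIN
  -- Called on a NEW session after a recoverable error.
  -- :ltxid is the logical transaction ID captured from the dead session.
  DBMS_APP_CONT.GET_LTXID_OUTCOME(
    client_ltxid        => :ltxid,
    committed           => l_committed,
    user_call_completed => l_completed);

  IF l_committed THEN
    NULL;  -- transaction committed: return the result and continue
  ELSE
    NULL;  -- known uncommitted: safe to clean up and resubmit
  END IF;
END;
/
```

Enabling Transaction Guard itself is a service-level change, e.g. srvctl modify service -db <db> -service <svc> -commit_outcome TRUE -retention 86400 (database and service names are placeholders).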
  • When replay is successful, Application Continuity masks many recoverable database outages from applications and users. It achieves this by restoring the database session (the full session, including session states, cursors, and variables) and the last in-flight transaction, if there is one.
    Without Application Continuity, database recovery does not mask outages caused by network failures, instance failures, hardware failures, repairs, configuration changes, patches, and so on. If the database session becomes unavailable due to a recoverable error, Application Continuity attempts to rebuild the session and any open transactions to the correct states. If the transaction succeeded and does not need to be re-executed, the successful status is returned to the application. If the replay is successful, the request continues safely without duplication; if not, the database rejects the replay and the application receives the original error. To succeed, the replay must return to the client exactly the same data that the client received previously in the request and potentially made decisions on.
    How Application Continuity works:
    1. The client application sends a database request that is received by the JDBC replay driver.
    2. The replay driver sends the calls that make up the request to the database, receiving directions for each call from the database.
    3. The replay driver receives a Fast Application Notification (FAN) event or a recoverable error.
    4. The replay driver checks that the request has replay enabled and checks timeouts. Assuming all is good, it obtains a new database session and, if a callback is registered, runs the callback to initialize the session. It then checks with the database to determine whether replay can proceed, for example whether the last transaction was committed or rolled back.
    5. If replay is required, the JDBC replay driver resubmits the calls, receiving directions for each call from the database. Each call must establish the same client-visible state. Before the last call is replayed, the replay driver ends the replay and returns to normal runtime mode.
    Solution coverage
    =================
    - Client: JDBC-thin driver; UCP, WebLogic Server, third-party Java apps
    - Database: SQL, PL/SQL, JDBC RPCs (SELECT, ALTER SESSION, DML, DDL, COMMIT/ROLLBACK/SAVEPOINT)
    - Transaction models: local, parallel, remote, distributed
    - Mutable function support
    - Hardware acceleration on current Intel and SPARC chips
    Application Continuity is supported for thin JDBC, Universal Connection Pool, and WebLogic Server. It is included with Oracle Real Application Clusters (Oracle RAC), RAC One Node, and Oracle Active Data Guard. Application Continuity recovers the database request, including any in-flight transaction and the database session states; requests may include most SQL and PL/SQL, RPCs, and local JDBC calls. Application Continuity uses Transaction Guard: Transaction Guard tags each database session with a logical transaction ID (LTXID) so that the database recognizes whether a request committed the transaction before the outage. Application Continuity can keep the original values for some Oracle functions, such as Seq.NEXTVAL, that change their values each time they are called; this improves the likelihood that replay will succeed. On current SPARC and Intel-based chips, the validation that Application Continuity uses is supported by firmware at the database server.
    BENEFIT: Transaction Guard and Application Continuity provide intelligent, application-transparent fault tolerance.
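Application Continuity is likewise enabled through service attributes. A sketch of the srvctl call, assuming a database orcl and a service appsrv (both names illustrative); the option names are the 12c srvctl service attributes:

```
srvctl modify service -db orcl -service appsrv \
  -failovertype TRANSACTION -commit_outcome TRUE \
  -replay_init_time 300 -retention 86400 \
  -failoverretry 10 -failoverdelay 5
```

-failovertype TRANSACTION turns on replay for the service, and -commit_outcome TRUE enables the underlying Transaction Guard protocol that replay depends on; the timing values shown are illustrative, not recommendations.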
  • Challenges:
    - Database utilization hampered by geographic fragmentation
    - Load balancing and fault tolerance hard to automate globally
    - Resource allocation and management dictated by geography
    Resulting in:
    - Sub-optimal resource utilization
    - Hampered or no enterprise-wide data integration
    - Unclear strategy for consolidation and distribution candidates
  • GDS is tested with WLS 12.1.2. In an active/active configuration, a global service can be available on all the GoldenGate replicas. You can also start different services on each replica, which is useful for conflict avoidance. Client connections and requests are transparently routed to the closest/best database, and runtime load-balancing metrics give the client real-time information on which database to issue the next request. All Oracle connection pools are supported (UCP, WLS, OCI, ODP.NET). If a database fails, its global services are restarted on another replica.
  • The update service runs on the primary; the reporting service runs on the primary or an Active Data Guard standby. A global service may be started in another database based on policies (e.g., singleton service, minimum of 3 instances, ...).
  • The Orders Capture application runs on the primary; the Order History application is offloaded to the Active Data Guard standby. Without GDS, bringing the History application back online after a role change requires these steps:
    - Change the properties (role definition) of the History service via srvctl
    - Manually start the History service on the primary database
    - Restart the apps
    - Connect to the History service on the primary database
    Drawbacks: unplanned application downtime; manual, time-consuming, and error-prone.
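With GDS, the same role-based behavior is declared once and handled automatically on role transitions. A hedged sketch using GDSCTL (the pool and service names are illustrative): a service bound to the PHYSICAL_STANDBY role, with failover to the primary when no standby is available, needs none of the manual srvctl steps above:

```
GDSCTL> add service -service order_history -gdspool sales \
          -preferred_all -role PHYSICAL_STANDBY -failover_primary
GDSCTL> start service -service order_history -gdspool sales
```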
  • At least 2-3 GSMs per region are recommended. One GSM per region is designated the "master"; the master GSM is responsible for publishing FAN events to the clients via the ONS server. If the master GSM dies, another GSM in the region takes over; if all GSMs in the region die, a master GSM from another region takes over; and if all GSMs in all regions fail, clients can still connect to the local listeners. The GDS catalog database can be replicated for HA/DR.
  • Requirements:
    - Ability to load balance across data centers
    - Optimal resource utilization
    - Global scalability and availability
    - Capability to centrally manage global resources
    Solution: Global Data Services (GDS)
    GDS allows:
    - Load balancing of application workloads across regions: it extends RAC-like connect-time and run-time load balancing globally and addresses inter-region resource fragmentation, so that underutilized resources in one region can be used to satisfy another region's workload, enabling optimal resource utilization.
    - Global scalability and availability: databases can be elastically added to or removed from the GDS infrastructure, and seamless service failover is supported.
    - Centralized management of global resources: easier management for globally distributed multi-database configurations.
  • Example: Reserve Bank of India report, "Working Group on Information Security, Electronic Banking, Technology Risk Management and Cyber Frauds", http://www.rbi.org.in/scripts/PublicationReportDetails.aspx?UrlPage=&ID=609. Chapter 7 of this report, "Business Continuity Planning", has some specific guidelines with respect to RPO and RTO, and this particular guideline has sparked a lot of interest within the banking IT community: "Given the need for drastically minimizing the data loss during exigencies and enable quick recovery and continuity of critical business operations, banks may need to consider near site DR architecture. Major banks with significant customer delivery channel usage and significant participation in financial markets/payment and settlement systems may need to have a plan of action for creating a near site DR architecture over the medium term (say, within three years)."
  • Set via NOAFFIRM attribute
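This note appears to refer to Data Guard Fast Sync in 12c: SYNC redo transport combined with NOAFFIRM, so the primary waits only for the standby to receive redo in memory, not for the standby disk write. A sketch of the transport setting, with the service name and DB_UNIQUE_NAME as illustrative placeholders:

```sql
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=boston_tns SYNC NOAFFIRM DB_UNIQUE_NAME=boston
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';
```

In a broker-managed configuration the equivalent is the LogXptMode database property set to FASTSYNC.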
  • 1) Temporary undo: DDL to create temporary tables must still be issued on the primary database. Temporary undo enables more reporting apps to leverage Active Data Guard and is controlled by the new init.ora parameter TEMP_UNDO_ENABLED.
    Global sequences
    ================
    Sequences created using the default CACHE and NOORDER options can be accessed from an Active Data Guard standby database. The primary allocates a unique range of sequence numbers to each standby, enabling more flexible reporting choices for Active Data Guard.
    Session sequences
    =================
    A session sequence returns a unique range of sequence numbers only within a session, which is suitable for reporting apps leveraging global temporary tables.
    In an Active Data Guard environment, sequences created by the primary database with the default CACHE and NOORDER options can be accessed from standby databases as well. When a standby database accesses such a sequence for the first time, it asks the primary database to allocate a range of sequence numbers, based on the cache size and other properties specified when the sequence was created. The primary then allocates those numbers to the requesting standby by adjusting the corresponding sequence entry in the data dictionary; when the standby has used all the numbers in the range, it requests another range. The primary ensures that each range request from a standby gets a range of sequence numbers that does not overlap with those previously allocated for the primary and the standbys, generating a unique stream of sequence numbers across the entire Data Guard configuration. Because a standby's request for a range of sequence numbers involves a round trip to the primary, be sure to specify a large enough value for the CACHE keyword when you create a sequence that will be used on an Active Data Guard standby; otherwise, performance could suffer.
    Restriction: sequences created with the ORDER or NOCACHE options cannot be accessed on an Active Data Guard standby.
    Supported types in 11.2
    =======================
    BINARY_DOUBLE, BINARY_FLOAT, BLOB, CHAR, CLOB and NCLOB, DATE, INTERVAL YEAR TO MONTH, INTERVAL DAY TO SECOND, LONG, LONG RAW, NCHAR, NUMBER, NVARCHAR2, RAW, TIMESTAMP, TIMESTAMP WITH LOCAL TIME ZONE, TIMESTAMP WITH TIME ZONE, VARCHAR2 and VARCHAR, XMLType stored as CLOB, LOBs stored as SecureFiles.
    Additional data types supported in Oracle Database 12c: XMLType stored object-relationally or as binary XML; XDB repository operations and other commonly used XDB operations; ADTs with attributes of simple types and varrays, with inheritance and type evolution; VARCHAR2 (32K); commonly used AQ operations; ANYDATA with non-opaque types; Spatial, Image, Oracle Text, DICOM; complete SecureFiles support; DBFS; Scheduler job definitions.
    Still unsupported in 12.1: BFILE; collections (nested tables); ROWID, UROWID; user-defined types; ADTs with attributes of nested tables, refs, and bfiles; top-level nested tables, varrays, refs, and bfiles; primary keys involving ADT columns; SecureFiles FRAGMENT_OPERATION (this was intended only for internal consumption but got documented; it is supported via EDS).
    DGMGRL command: VALIDATE DATABASE
    =================================
    - Validates each database's current status
    - Verifies there are no archive log gaps
    - Performs a log switch on the primary to verify the log is applied on all standbys
    - Shows any databases or RAC instances that are not discovered
    - Detects inconsistencies between database properties and values stored in the database
    - Ensures online redo log files have been cleared in advance of a role transition
    - Checks for previously disabled redo threads
    - Ensures the primary and all standbys are on the same redo branch
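The temporary-undo and session-sequence features above can be sketched in two statements. This is a minimal sketch with an illustrative sequence name; note that the CREATE SEQUENCE, being DDL, must be issued on the primary:

```sql
-- Allow DML on global temporary tables in an Active Data Guard
-- standby session by keeping their undo in the temp tablespace:
ALTER SESSION SET temp_undo_enabled = TRUE;

-- A session sequence: numbers are unique only within the session,
-- suited to reports that populate global temporary tables.
CREATE SEQUENCE rpt_seq SESSION;
```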
  • RMAN automatically creates an auxiliary instance on the target database host, where the relevant backups are restored and recovered. The recovered table(s) in the auxiliary instance are then either imported directly into the target database or exported to a Data Pump dump file. This is useful in scenarios where Flashback cannot be used:
    - Flashback Drop: the table has been purged out of the recycle bin
    - Flashback Table: the point in time needed is older than UNDO_RETENTION
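A sketch of the 12c RECOVER TABLE command this slide describes; the schema, table, and directory names are illustrative, and REMAP TABLE avoids overwriting the current table:

```
RMAN> RECOVER TABLE hr.employees
        UNTIL TIME "SYSDATE - 1/24"
        AUXILIARY DESTINATION '/tmp/recover_aux'
        REMAP TABLE hr.employees:employees_recovered;
```

RMAN restores the needed backups into the auxiliary instance under /tmp/recover_aux, extracts the table with Data Pump, and imports it into the target database under the remapped name.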
  • To create the backup set containing data that must be transported to the destination database, use the BACKUP command on the source database. To indicate that you are creating a cross-platform backup, the BACKUP command must contain either the FOR TRANSPORT or TO PLATFORM clause.
    Transporting an entire database
    ==============================
    You can transport an entire database from a source platform to a different destination platform. While creating the cross-platform backup, you can convert the database either on the source or on the destination. Back up the source database using the FOR TRANSPORT or TO PLATFORM clause of the BACKUP command; either clause creates a cross-platform backup that uses backup sets.
    Example 28-5 creates a cross-platform backup of the entire database that can be restored on any supported platform. Because the FOR TRANSPORT clause is used, conversion is performed on the destination database. The source platform is Sun Solaris, and the backup is stored in db_trans.bck in the /tmp/xplat_backups directory on the source host:
      BACKUP
        FOR TRANSPORT
        FORMAT '/tmp/xplat_backups/db_trans.bck'
        DATABASE;
    Example 28-6 creates a cross-platform backup of the entire database that can be restored on the Linux x86 64-bit platform. Because the TO PLATFORM clause is used, conversion is performed on the source database. The backup is stored in the backup set db_trans_lin.bck in the /tmp/xplat_backups directory on the source host:
      BACKUP
        TO PLATFORM = 'Linux x86 64-bit'
        FORMAT '/tmp/xplat_backups/db_trans_lin.bck'
        DATABASE;
    Restore the backup sets that were transferred from the source by using the RESTORE command with the FOREIGN DATABASE clause.
    Example 28-7 restores the cross-platform database backup created in Example 28-5. The FROM PLATFORM clause specifies the name of the platform on which the backup was created; this clause is required to convert backups on the destination. The backup set is stored in the /tmp/xplat_restores directory on the destination host. The TO NEW option specifies that the restored foreign data files must use new OMF names in the destination database; ensure that the DB_CREATE_FILE_DEST parameter is set:
      RESTORE
        FROM PLATFORM 'Solaris[tm] OE (64-bit)'
        FOREIGN DATABASE TO NEW
        FROM BACKUPSET '/tmp/xplat_restores/db_trans.bck';
    Example 28-8 restores the cross-platform database backup created in Example 28-6. The destination database is on the Linux x86 64-bit platform. The backup set is stored in /tmp/xplat_restores/db_trans_lin.bck, and the restored foreign data files are stored in the /oradata/datafiles directory using names that begin with df_:
      RESTORE
        ALL FOREIGN DATAFILES
        FORMAT '/oradata/datafiles/df_%U'
        FROM BACKUPSET '/tmp/xplat_restores/db_trans_lin.bck';
    Cross-platform incremental backups
    ==================================
    In 11.2.0.3 this was available only for Exadata targets; see MOS note 1389592.1, "Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backups". The idea is to minimize the read-only window by taking multiple incremental backups: successive incrementals are converted and applied to the restored data files, with a final incremental taken while the tablespace is in read-only mode, plus a separate Data Pump metadata export and import.
    1. Create a cross-platform level 0 inconsistent backup of the tablespace my_tbs while it is in read/write mode. The backup is stored in a backup set named my_tbs_incon.bck in the directory /tmp/xplat_backups:
      BACKUP
        FOR TRANSPORT
        ALLOW INCONSISTENT
        INCREMENTAL LEVEL 0
        TABLESPACE my_tbs
        FORMAT '/tmp/xplat_backups/my_tbs_incon.bck';
    2. Create a cross-platform level 1 incremental backup of my_tbs containing the changes made after the backup in Step 1, with the tablespace still in read/write mode. The backup is stored in my_tbs_incon1.bck in /tmp/xplat_backups:
      BACKUP
        FOR TRANSPORT
        ALLOW INCONSISTENT
        INCREMENTAL LEVEL 1
        TABLESPACE my_tbs
        FORMAT '/tmp/xplat_backups/my_tbs_incon1.bck';
    3. Place the tablespace in read-only mode:
      ALTER TABLESPACE my_tbs READ ONLY;
    4. Create the final cross-platform level 1 incremental backup of my_tbs, containing the changes made after the backup in Step 2. It must include the export dump file that contains the tablespace metadata:
      BACKUP
        FOR TRANSPORT
        INCREMENTAL LEVEL 1
        TABLESPACE my_tbs
        FORMAT '/tmp/xplat_backups/my_tbs_incr.bck'
        DATAPUMP FORMAT '/tmp/xplat_backups/my_tbs_incr_dp.bck'
        DESTINATION '/tmp';
    5. Move the backup sets and the export dump file generated in Steps 1, 2, and 4 to the desired directories on the destination host.
    6. Restore the cross-platform level 0 inconsistent backup created in Step 1. Use the FOREIGN DATAFILE clause to specify the data files that must be restored; the FROM PLATFORM clause specifies the platform on which the backup was created and is required to convert the backups on the destination:
      RESTORE
        FROM PLATFORM 'Solaris[tm] OE (64-bit)'
        FOREIGN DATAFILE
          6  FORMAT '/tmp/aux/mytbs_6.df',
          7  FORMAT '/tmp/aux/mytbs_7.df',
          20 FORMAT '/tmp/aux/mytbs_20.df',
          10 FORMAT '/tmp/aux/mytbs_10.df'
        FROM BACKUPSET '/tmp/xplat_restores/my_tbs_incon.bck';
    7. Recover the foreign data files restored in Step 6 by applying the first level 1 incremental backup, created in Step 2:
      RECOVER
        FROM PLATFORM 'Solaris[tm] OE (64-bit)'
        FOREIGN DATAFILECOPY
          '/tmp/aux/mytbs_6.df',
          '/tmp/aux/mytbs_7.df',
          '/tmp/aux/mytbs_20.df',
          '/tmp/aux/mytbs_10.df'
        FROM BACKUPSET '/tmp/xplat_restores/my_tbs_incon1.bck';
    8. Recover the same foreign data files by applying the final level 1 incremental backup, created in Step 4 with the tablespace in read-only mode:
      RECOVER
        FROM PLATFORM 'Solaris[tm] OE (64-bit)'
        FOREIGN DATAFILECOPY
          '/tmp/aux/mytbs_6.df',
          '/tmp/aux/mytbs_7.df',
          '/tmp/aux/mytbs_20.df',
          '/tmp/aux/mytbs_10.df'
        FROM BACKUPSET '/tmp/xplat_restores/my_tbs_incr.bck';
    9. Restore the backup set containing the export dump file, which holds the tablespace metadata required to plug the tablespace into the destination database:
      RESTORE
        FROM PLATFORM 'Solaris[tm] OE (64-bit)'
        DUMP FILE 'my_tbs_restore_md.dmp'
        DATAPUMP DESTINATION '/tmp/dump'
        FROM BACKUPSET '/tmp/xplat_restores/my_tbs_incr_dp.bck';
  • RMAN DUPLICATE leverages the restore process (from a backup or from the source database) to create a new clone or standby database. You can clone the entire CDB, or the root plus selected PDBs:
      RMAN> DUPLICATE TARGET DATABASE TO <CDB1>;
      RMAN> DUPLICATE TARGET DATABASE TO <CDB1> PLUGGABLE DATABASE <PDB1>, <PDB2>;
    For in-place cloning, or creating a new PDB within a CDB, use SQL:
      SQL> CREATE PLUGGABLE DATABASE ... FROM ...;
      SQL> CREATE PLUGGABLE DATABASE ...;
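A fuller sketch of the SQL clone path, with illustrative PDB names and paths; note that in 12.1 the source PDB must be open read-only while it is being cloned:

```sql
-- Clone PDB1 within the same CDB as PDB1_COPY (names and paths
-- are illustrative), relocating the data files via name conversion:
CREATE PLUGGABLE DATABASE pdb1_copy FROM pdb1
  FILE_NAME_CONVERT = ('/oradata/cdb1/pdb1/', '/oradata/cdb1/pdb1_copy/');
```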
  • Multisection backups: previously, multi-section backups were possible only for full backup sets. Now incremental backups and image copies support them as well:
      BACKUP INCREMENTAL LEVEL 1
        SECTION SIZE 100M
        DATAFILE '/oradata/datafiles/users_df.dbf';
      BACKUP AS COPY
        SECTION SIZE 500M
        DATABASE;
    Rolling forward a physical standby database and synchronizing it with the primary
    ================================================================================
    In this example, the DB_UNIQUE_NAME of the primary database is MAIN and that of the physical standby is STANDBY. You want to refresh the physical standby with the latest changes made to the primary. You can use the RECOVER command with the FROM SERVICE clause to fetch an incremental backup from the primary database and apply it to the physical standby. The service name of the primary database is main_tns and the compression algorithm used is Basic.
    When the RECOVER command is executed, the incremental backup is created on the primary database and then transferred, over the network, to the physical standby. RMAN uses the SCN from the standby data file headers and creates the incremental backup starting from that SCN on the primary. If block change tracking is enabled for the primary database, it is used while creating the incremental backup.
    To refresh a physical standby database with changes made to the primary:
    1. Connect to the physical standby database as a user with the SYSBACKUP privilege, entering the password for the sbu user when prompted:
      % rman
      RMAN> CONNECT TARGET "sbu@standby AS SYSBACKUP";
    2. Specify that the compression algorithm used is Basic:
      RMAN> SET COMPRESSION ALGORITHM 'basic';
    3. Ensure that the tnsnames.ora file on the source database contains an entry corresponding to the physical standby database, and that the password files on the source and physical standby databases are the same.
    4. Recover the physical standby database using an incremental backup of the primary database. The following command creates a compressed, multisection incremental backup on the primary database to recover the standby:
      RECOVER DATABASE
        FROM SERVICE main_tns
        SECTION SIZE 120M
        USING COMPRESSED BACKUPSET;
    Active database duplication: image copies vs. backup sets
    =========================================================
    RMAN can transfer the files required for active database duplication as image copies or as backup sets.
    When active database duplication is performed using image copies, after RMAN establishes a connection with the source database, the source database transfers the required database files to the auxiliary database. Using image copies may require additional resources on the source database. This is referred to as the push-based method of active database duplication.
    When RMAN performs active database duplication using backup sets, a connection is established with both the source and auxiliary databases; the auxiliary database then connects to the source database through Oracle Net Services and retrieves the required database files. This is also referred to as the pull-based method.
    Using backup sets for active database duplication provides certain advantages: RMAN can employ unused-block compression while creating backups, reducing the size of backups transported over the network; backup sets can be created in parallel on the source database by using multisection backups; and you can encrypt backup sets created on the source database.
    Factors that determine whether backup sets or image copies are used: RMAN uses image copies only when no auxiliary channels are allocated or when the number of auxiliary channels allocated is less than the number of target channels. RMAN uses backup sets when the connection to the target database is established using a net service name and any one of the following conditions is satisfied:
    - The DUPLICATE ... FROM ACTIVE DATABASE command contains the USING BACKUPSET, USING COMPRESSED BACKUPSET, or SECTION SIZE clause.
    - The number of auxiliary channels allocated is equal to or greater than the number of target channels allocated.
    Note: Oracle recommends that you use backup sets to perform active database duplication.
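A simplified sketch of pull-based active duplication; the net service names and the new database name are illustrative, and a real command typically also needs SPFILE/parameter clauses and at least as many auxiliary channels as target channels for the pull method to be chosen:

```
RMAN> CONNECT TARGET sys@src_tns        # source database
RMAN> CONNECT AUXILIARY sys@aux_tns     # auxiliary (new clone)
RMAN> DUPLICATE TARGET DATABASE TO dupdb
        FROM ACTIVE DATABASE
        USING COMPRESSED BACKUPSET
        SECTION SIZE 400M;
```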
  • From the user's view, ASM exposes a small number of disk groups. These disk groups consist of ASM disks, and files are striped across all the disks in a disk group. The disk groups are global in nature, and database instances running individually or in clusters have shared access to the disk groups and the files within them, as illustrated in this picture: the green database has files in Disk Group A that are striped across all its disks, and Disk Group A is shared by both the green database and the purple database. Notice the ASM instance on every server in the cluster: the ASM instances communicate amongst themselves and form an ASM cluster. These simple ideas delivered a powerful solution that eliminates many headaches DBAs and storage administrators once had with managing storage in an Oracle environment.
  • Oracle Flex ASM enables an Oracle ASM instance to run on a separate physical server from the database servers. With this deployment, larger clusters of Oracle ASM instances can support more ASM clients (database instances) while reducing the Oracle ASM footprint for the overall system.With Oracle Flex ASM as with Standard ASM, you can consolidate all the storage requirements into a single set of disk groups. However, these disk groups are managed by a small set of Oracle Flex ASM instances running in the cluster. If a host running an ASM instance fails, ASM clients using that ASM instance, failover to a surviving ASM instance on a different host. You can specify the number of Oracle ASM instances with a cardinality setting. The default is three instances.The configurations of Oracle ASM in Oracle database 12c are:- Standard ASM: With this mode (Standard Oracle ASM cluster), Oracle ASM instances continue to support existing standard architecture in which database clients are running with Oracle ASM instances on the same host computer. - Oracle Flex ASM:With this mode, database clients running on nodes in a cluster can access Oracle Flex ASM instances remotely for metadata, but perform block I/O operations directly to Oracle ASM disks. All the nodes with in the cluster must have direct access to the ASM disks.You can choose the Oracle ASM deployment model during the installation of Oracle Grid Infrastructure and you canuse Oracle ASM Configuration Assistant (ASMCA) to enable Oracle Flex ASM after the installation / upgrade was performed. This functionality is only available in an Oracle Grid Infrastructure configuration, not an Oracle Restart configuration. Oracle Flex ASM is managed by ASMCA, CRSCTL, SQL*Plus, and SRVCTL. To determine whether an Oracle Flex ASM has been enabled, use the ASMCMD showclustermode command. $ asmcmdshowclustermodeASM cluster : Flex mode enabled You can also use SRVCTL to determine whether Oracle Flex ASM is enabled. 
If enabled, srvctl config asm displays the number of Oracle ASM instances that have been specified for the Oracle Flex ASM configuration. For example:
$ srvctl config asm
ASM instance count: 3
Clients are automatically relocated to another instance if an Oracle ASM instance fails. If necessary, clients can be relocated manually using the ALTER SYSTEM RELOCATE CLIENT command. For example:
SQL> ALTER SYSTEM RELOCATE CLIENT 'client-id';
When you issue this statement, the connection to the client is terminated and the client fails over to the least loaded instance. Every database user must have a wallet with credentials to connect to Oracle ASM; CRSCTL commands can be used by the database user to manage this wallet. All Oracle ASM user names and passwords are system generated.
  • When consolidating pre-12c and Oracle 12c databases on the same system using a cluster with Oracle Flex ASM enabled, the administrator must ensure that a local ASM instance runs on each node in the cluster. This is achieved by issuing a post-installation SRVCTL command at the Oracle Clusterware level, increasing the number of Oracle Flex ASM instances to the number of servers in the cluster (srvctl modify asm -count ALL). This setup preserves the Oracle Database 12c failure protection on local ASM instance failure and enables database consolidation across versions, maintaining the pre-12c behavior for pre-12c databases.
================
Scrubbing Disk Groups
Oracle ASM disk scrubbing improves availability and reliability by searching for data that may be less likely to be read. Disk scrubbing checks for logical data corruptions and repairs them automatically in normal- and high-redundancy disk groups, using the mirror disks to perform the repair. Disk scrubbing can be combined with disk group rebalancing to reduce I/O resources, and the scrubbing process has minimal impact on regular I/O in production systems.
You can perform scrubbing on a disk group, a specified disk, or a specified file of a disk group with the ALTER DISKGROUP SQL statement. For example, the following statements show various options used when running ALTER DISKGROUP disk_group SCRUB:
SQL> ALTER DISKGROUP data SCRUB POWER LOW;
SQL> ALTER DISKGROUP data SCRUB FILE 'EXAMPLE.265.767873199' REPAIR POWER HIGH FORCE;
When using ALTER DISKGROUP with the SCRUB option, the following items apply:
- The optional REPAIR option automatically repairs disk corruptions. If REPAIR is not specified, the SCRUB option only checks and reports logical corruptions of the specified target.
- The optional POWER value can be set to AUTO, LOW, HIGH, or MAX. If POWER is not specified, it defaults to AUTO and adjusts to the optimum level for the system.
- If the optional WAIT option is specified, the command returns after the scrubbing operation has completed. If WAIT is not specified, the scrubbing operation is added to the scrubbing queue and the command returns immediately.
- If the optional FORCE option is specified, the command is processed even if the system I/O load is high or scrubbing has been disabled internally at the system level.
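The scrub options above can be combined in a single statement. A hedged sketch (the disk group name data is illustrative):

```sql
-- Check and repair logical corruptions across the whole disk group at
-- maximum power, wait for completion, and run even under high I/O load.
ALTER DISKGROUP data SCRUB REPAIR POWER MAX WAIT FORCE;
```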
  • Online Redefinition
===========
Improved sync_interim_table performance with optimized materialized view (MV) log processing
Ability to redefine tables with VPD policies via a new parameter, copy_vpd_opt, in start_redef_table
Improved resilience of finish_redef_table with better lock management
Better handling of multi-partition redefinition:
- Multiple partitions can be specified together in a single redefinition session
- Better availability for partition redefinition, with only partition-level locks
- Improved performance by logging changes for only the specified partitions
Additional Details
==========
11.2 log handling enhancements:
- Commit-SCN-based MV log
- Deferred MV log purge
- MV log setup and purge removed from the refresh process
For a single MV refresh depending solely on the MV log, log handling could consume up to 2/3 of the total refresh execution time; removing that overhead can make the refresh up to 3X faster.
===================================
Before: there was no easy way to redefine multiple partitions; a separate START_REDEF_TABLE and FINISH_REDEF_TABLE had to be launched for each partition:
- It took a long time to get all partitions redefined
- It was difficult to redefine all needed partitions in a single maintenance window
Goals/Benefits:
- Pay the non-recurring overheads of START_REDEF_TABLE and FINISH_REDEF_TABLE once (creation of the MV log, metadata modification, etc.)
- Easily move a large number of partitions to new tablespace(s) in an online manner
- Enables atomic redefinition of more than one partition
- Supports partial completion with the new continue_after_errors parameter
Step 1: Start redefinition with multiple partitions:
DBMS_REDEFINITION.START_REDEF_TABLE('GROCERY', 'SALES', int_table=>'tbl1,tbl2,tbl3', part_name=>'sales_p1,sales_p2,sales_p3', continue_after_errors=>TRUE);
Step 2: Synchronize interim tables for multiple partitions:
DBMS_REDEFINITION.SYNC_INTERIM_TABLE('GROCERY', 'SALES', int_table=>'tbl1,tbl2,tbl3', part_name=>'sales_p1,sales_p2,sales_p3', continue_after_errors=>TRUE);
Step 3:
Finish redefinition with multiple partitions:
DBMS_REDEFINITION.FINISH_REDEF_TABLE('GROCERY', 'SALES', int_table=>'tbl1,tbl2,tbl3', part_name=>'sales_p1,sales_p2,sales_p3', continue_after_errors=>TRUE);
On any failure with continue_after_errors=>TRUE, the error is recorded and the next partition is processed.
On any failure with continue_after_errors=>FALSE, the operation rolls back by exchanging back all successfully exchanged partition(s).
The part_name parameter exists today, but previously could not take a list.
===================================
Support for VPD policies with a new parameter, copy_vpd_opt, in start_redef_table:
- Option 1: Not copied (default). No VPD policies; an error is raised when VPD policies exist on the original table.
- Option 2: Copy VPD policies automatically. Applicable when column names and types are unchanged.
- Option 3: Copy VPD policies manually. Applicable when column names or types are changed, or when users want to modify the VPD policies.
===================================
Existing issues (in finish_redef_table):
- Execution could be unpredictably long and sometimes never finish, forcing an interrupt and abort
- Hard to get the DML lock before moving into the final operation
- The final refresh after acquiring locks could take very long, causing a wider "blackout" window blocking DMLs
Enhancements:
- A timeout allows graceful exit from finish_redef_table
- The DML-lock-wait timeout window is utilized to refresh the interim table
- The DML lock is acquired in wait mode for a better chance of getting the lock
===================================
Before: redefining partition P locked the whole table T and logged changes to all partitions; DMLs could not occur on other partitions, and changes were logged unnecessarily.
12c (only locks/logs the partition): DMLs are allowed on other partitions, and the refresh processes only the needed changes.
======================================================
The privileges explicitly granted to SYSDG are (see admin/catadmprvs.sql):
< SYSTEM PRIVILEGES >
alter database
alter session
alter system
select any dictionary
< OBJECT PRIVILEGES >
execute on sys.dbms_drs
select on sys.dba_capture
select on sys.dba_logstdby_events
select on sys.dba_logstdby_log
select on sys.dba_logstdby_history
select on appqossys.wlm_classifier_plan
delete on appqossys.wlm_classifier_plan
Also, SYSDG is implicitly allowed to perform the following operations:
- STARTUP
- SHUTDOWN
- CREATE RESTORE POINT
- DROP RESTORE POINT
- FLASHBACK DATABASE
- SELECT on fixed tables/views (e.g., X$ tables, GV$ and V$ views)
================================================
Additional Online Operations
---------------------
- Drop index online (create/rebuild index online in 10g and 11g)
- Alter index unusable online
- Alter index visible/invisible
- Drop constraint online (create constraint online in 11g)
- Set unused column online (add column online in 11g)
- Add column with default is fast (metadata-only operation) and online (only not-null in 11g)
- Online move partition
- Edition-based redefinition simplification
=======================================
ONLINE MOVE PARTITION: you can move a partition while DMLs are ongoing on the partition being moved. Previously, DDL operations took subtle X-locks here and there, leading to a pile-up of DMLs in systems like SAP. One or two of these cases were fixed for SAP in the 11.2 time frame, and the following DDLs were added in 12.1: CREATE/DROP INDEX, ADD/DROP CONSTRAINT, ADD/SET UNUSED COLUMN (all Beta 1).
Edition-Based Redefinition is now easier to use:
- A database with tables that depend on UDTs (such as AQ payloads) can be editions-enabled without schema reorganization
- MVs, indexes, and virtual columns (based on PL/SQL or views) are supported on editioned objects
- Greatly reduces the need to separate application objects into different schemas
-------------------------
Moving a Table to a New Segment or Tablespace
The ALTER TABLE...MOVE statement enables you to relocate data of a nonpartitioned table, or of a partition of a partitioned table, into a new segment, and optionally into a different tablespace for which you have quota. This statement also lets you modify any of the storage attributes of the table or partition, including those which cannot be modified using ALTER TABLE.
You can also use the ALTER TABLE...MOVE statement with a COMPRESS clause to store the new segment using table compression. Tables are usually moved either to enable compression or to perform data maintenance; for example, you can move a table from one tablespace to another.
Most ALTER TABLE...MOVE statements do not permit DML against the table while the statement is executing. The exceptions are the following statements:
- ALTER TABLE ... MOVE PARTITION ... ONLINE
- ALTER TABLE ... MOVE SUBPARTITION ... ONLINE
These two statements support the ONLINE keyword, which enables DML operations to run uninterrupted on the partition or subpartition being moved. For operations that do not move a partition or subpartition, you can use online redefinition to keep the table available for DML while moving it.
---------------------------------------------------------
Moving a Table Partition or Subpartition Online
Use the ALTER TABLE...MOVE PARTITION or ALTER TABLE...MOVE SUBPARTITION statement to move a table partition or subpartition, respectively. When you use the ONLINE keyword with either statement, DML operations can continue to run uninterrupted on the partition or subpartition being moved. If you do not include the ONLINE keyword, DML operations are not permitted on the data in the partition or subpartition until the move operation is complete.
When you include the UPDATE INDEXES clause, these statements maintain both local and global indexes during the move. Using the ONLINE keyword together with UPDATE INDEXES therefore eliminates the time it takes to regain partition performance after the move, since global indexes are maintained rather than manually rebuilt.
To move a table partition or subpartition online:
1. In SQL*Plus, connect as a user with the necessary privileges to alter the table and move the partition or subpartition.
2. Run the ALTER TABLE ... MOVE PARTITION or ALTER TABLE ... MOVE SUBPARTITION statement.
Example 20-9: Moving a Table Partition to a New Segment
The following statement moves the sales_q4_2003 partition of the sh.sales table to a new segment with advanced row compression and index maintenance included:
ALTER TABLE sales MOVE PARTITION sales_q4_2003 ROW STORE COMPRESS ADVANCED UPDATE INDEXES ONLINE;
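As a sketch of the additional online DDL operations listed above (all object names are hypothetical):

```sql
-- Drop an index without blocking concurrent DML (new in 12c).
DROP INDEX sales_ix ONLINE;

-- Mark an index unusable while DML continues.
ALTER INDEX sales_q_ix UNUSABLE ONLINE;

-- Drop a constraint and set a column unused without blocking DML.
ALTER TABLE sales DROP CONSTRAINT sales_amount_ck ONLINE;
ALTER TABLE sales SET UNUSED (legacy_notes) ONLINE;
```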
  • Oracle GoldenGate enables zero-downtime upgrade, migration, or consolidation by synchronizing Oracle Database 12c with the existing Oracle or non-Oracle databases in real time. During the synchronization, the production systems can continue to support transaction processing. As soon as the new system on Oracle Database 12c is in sync with the legacy systems, users can switch over immediately, experiencing minimal to zero downtime. While the target Oracle Database 12c is instantiated (via database tools or ODI) with a bulk data transfer, GoldenGate captures the change data committed on the production systems and stores it in its queue. Once the target system is ready, GoldenGate delivers the change data to the consolidated system to make sure the two are in sync. After that point the target can be tested with production load, and Oracle GoldenGate Veridata can verify that there is no data discrepancy. When the new system is ready, users can be switched over without any database downtime. With its bidirectional replication capabilities, GoldenGate can capture new transactions happening in the new consolidated environment and deliver them to the legacy systems, keeping them up to date as a failback option. The other option is to run both the legacy and the new environment concurrently, with GoldenGate performing bidirectional synchronization in real time. This allows phased migration of users and a completely seamless transition into the new system with minimized risk. In addition to removing downtime and minimizing risk, GoldenGate allows the IT team to test the new environment without time pressure.
  • Active-active database replication is a key use case for GoldenGate when it comes to achieving high availability. GoldenGate's bidirectional real-time data replication works across heterogeneous systems. Multi-master database replication with GoldenGate helps eliminate downtime, planned or unplanned, because the application can continue to work with the remaining databases if one database fails. It also increases system performance by allowing transaction load distribution between completely parallel systems. Data can be filtered to move only certain tables or rows, and there are no distance limitations. GoldenGate offers out-of-the-box conflict management to handle the possible data collisions that come with multi-master replication.
  • This slide is included because the last section references the upcoming GoldenGate 12c release.
  • Transcript

    • 1. 1 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Maximize Availability With Oracle Database 12c Tomasz Murtas Technology Sales Consultant tomasz.murtas@oracle.com
    • 2. 2 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Oracle Database 12c  Oracle Database 12c introduces significant new high availability (HA) capabilities that – Drastically cut down planned and unplanned downtime – Eliminate compromises between HA and Performance – Tremendously boost operational productivity  These take Availability to unprecedented new levels – Next-generation Maximum Availability Architecture (MAA) – Optimized for Oracle Extreme Availability
    • 3. 3 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Maximum Availability Architecture Active Data Guard – Data Protection, DR – Query Offload GoldenGate – Active-active – Heterogeneous RMAN, Oracle Secure Backup – Backup to tape / cloud Active Replica Edition-based Redefinition, Online Redefinition, Data Guard, GoldenGate – Minimal downtime maintenance, upgrades, migrations RAC – Scalability – Server HA Flashback – Human error correction Production Site Application Continuity – Application HA Global Data Services – Service Failover / Load Balancing
    • 4. 4 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Oracle Database 12c High Availability Key New Features  Application Continuity  Global Data Services  Data Guard Enhancements  RMAN Enhancements  Flex ASM  Other HA Enhancements  GoldenGate Update
    • 5. 5 Copyright © 2013, Oracle and/or its affiliates. All rights reserved.  Database outages can cause in-flight work to be lost, leaving users and applications in doubt  Often leads to  User pains  Duplicate submissions  Rebooting mid-tiers  Developer pains In-Flight Work: Dealing With Outages Current Situation Application Servers Database Servers End User
    • 6. 6 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Solving Application Development Pains Transaction Guard A reliable protocol and API that returns the outcome of the last transaction New in Oracle Database 12c Application Continuity Safely attempts to replay in-flight work following outages and planned operations
    • 7. 7 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Transaction Guard Preserve and Retrieve COMMIT Outcome  API that supports known commit outcome for every transaction  Without Transaction Guard, upon failures – transaction retry can cause logical corruption  With Transaction Guard, applications can deal gracefully with error situations, vastly improving end-user experience  Used transparently by Application Continuity Application Servers Database Servers End User
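As a minimal sketch of how an application can use Transaction Guard: after a recoverable error, the client retrieves the logical transaction ID (LTXID) of the dead session and, from a new connection, asks the database whether that transaction committed via the DBMS_APP_CONT package. The :ltxid bind and the branch bodies are illustrative:

```sql
DECLARE
  l_committed      BOOLEAN;
  l_call_completed BOOLEAN;
BEGIN
  -- :ltxid is the LTXID the client captured from the failed session
  -- (e.g., via getLogicalTransactionId() in JDBC).
  DBMS_APP_CONT.GET_LTXID_OUTCOME(
    client_ltxid        => :ltxid,
    committed           => l_committed,
    user_call_completed => l_call_completed);
  IF l_committed THEN
    NULL;  -- report success to the user; do not resubmit
  ELSE
    NULL;  -- safe to resubmit the transaction
  END IF;
END;
/
```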
    • 8. 8 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Application Continuity Masks Unplanned/Planned Outages  Replays in-flight work on recoverable errors  Masks many hardware, software, network, storage errors and outages when successful  Improves end-user experience and productivity without requiring custom app development Transaction Replayed Application Servers Database Servers End User
    • 9. 9 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Oracle Database 12c High Availability Key New Features  Application Continuity  Global Data Services  Data Guard Enhancements  RMAN Enhancements  Flex ASM  Other HA Enhancements  GoldenGate Update
    • 10. 10 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Databases in Replicated Environments Challenges  No seamless way to efficiently use all the databases  No automated load balancing and fault tolerance Primary Active Standby Active Standby GoldenGate
    • 11. 11 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Global Data Services Global Data Services • Extends RAC-style service failover, load balancing (within and across data centers), and management capabilities to a set of replicated databases • Takes into account network latency, replication lag, and service placement policies • Achieves higher availability, improved manageability, and maximum performance Load Balancing and Service Failover for Replicated Databases
    • 12. 12 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Global Data Services  Reporting client routed to ‘best’ database – Based on location, response time, data, acceptable data lag – Reports will automatically run on least loaded server  Reporting client failover – If preferred database not available, will route to another database in same region or a remote database  Global service migration – Automatically migrates services based on failover/switchover - if primary database is down, start Call Center service on the new primary Active Data Guard Example Active Data Guard Reporting Service Call Center Service
    • 13. 13 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Global Data Services  Call Center Client connections and requests transparently routed to the closest / best database – Runtime load balancing metrics give client real-time information on which database to issue next request  If a database fails, its global services restarted on another replica GoldenGate Example GoldenGate Call Center Service
    • 14. 14 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Global Data Services Use Case: Active Data Guard without GDS Primary Active Standby Data Guard Order History View Order Capture Critical E-Commerce App accessing Active Data Guard Standby What happens when Active Standby is down? Orders Service History Service Primary Active Standby Data Guard Order History View Order Capture Orders Service History Service ?
    • 15. 15 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Global Data Services Use Case: Active Data Guard with GDS: All HA When Active Standby is down …  GDS fails over History Service to primary, redirects connection through FAN/FCF Primary Active Standby Data Guard Orders Service History Service Global Data Services Order History View Order Capture History Service
    • 16. 16 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Global Data Services: Concepts  GDS Region: Group of databases and clients in close network proximity, e.g., East, West  GDS Pool: Databases that offer a common set of global services, e.g., HR, Sales  Global Service: Database Service provided by multiple databases with replicated data – Local service + {region affinity, replication lag, database cardinality}  Global Service Manager (GSM): Provides main GDS functionality: service management and load balancing – Clients connect to GSM instead of database listener – At least one GSM per region or multiple GSMs for High Availability – All databases/services register to all GSM Listeners  GDS Catalog: stores all metadata, enables centralized global monitoring & management – Global service configuration stored in GDS Catalog  GDSCTL: Command-line Interface to administer GDS
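A hedged GDSCTL sketch of the concepts above (region, pool, database, and service names are illustrative; exact options vary by release, so treat this as a shape rather than a recipe):

```
GDSCTL> add region -region east
GDSCTL> add gdspool -gdspool sales
GDSCTL> add database -connect sales_east -region east -gdspool sales
GDSCTL> add service -service sales_reporting_srvc -gdspool sales -preferred_all
GDSCTL> start service -service sales_reporting_srvc -gdspool sales
```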
    • 17. 17 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Global Data Services: Summary Globally Replicated, High Availability Architecture • GDS Framework dynamically balances user requests across multiple replicated sites – Based on location, load, and availability • Provides global availability – Supports automatic service failover • GDS integrates disparate databases into a unified data cloud GSM - Global Service Manager Local Standby Local Standby Data Center #2 EMEA Data Center #1 APAC Active Data Guard Active Data Guard Primary Local Standby Active Data Guard GDSCTL GDS Catalog Primary GDS Catalog Standby Master Oracle GoldenGate Active Data Guard SALES POOL (sales_reporting_srvc, sales_entry_srvc) HR POOL(hr_apac_srvc, hr_emea_srvc) All GDS client databases connected to all GSMs Master Remote Standby Reader Farm Active Data Guard Global Service Managers Global Service Managers
    • 18. 18 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Oracle Database 12c High Availability Key New Features  Application Continuity  Global Data Services  Data Guard Enhancements  RMAN Enhancements  Flex ASM  Other HA Enhancements  GoldenGate Update
    • 19. 19 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Zero Data Loss Challenge The longer the distance, the larger the performance impact Synchronous Communication Leads To Performance Trade-Offs Primary Standby Commit Commit Ack Network Send Network Ack
    • 20. 20 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Primary StandbyASYNC Data Guard Async – Today Some Data Loss Exposure Upon Disaster
    • 21. 21 Copyright © 2013, Oracle and/or its affiliates. All rights reserved.  Far Sync: light-weight Oracle instance: standby control file, standby redo logs, archived redo logs, no data files  Receives redo synchronously from primary, forwards redo asynchronously in real-time to standby  Upon Failover: Async standby transparently obtains last committed redo from Far Sync and applies: zero data loss failover  Second Far Sync Instance can be pre-configured to transmit in reverse direction after failover/switchover  Terminal standbys required to be Active Data Guard Standbys Active Data Guard Far Sync – New in 12.1 Zero Data Loss For Async Deployments
    • 22. 22 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Primary Standby Far Sync Instance Active Data Guard Far Sync Operational Flow ASYNC SYNC
    • 23. 23 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Primary Standby Far Sync Instance Active Data Guard Far Sync Operational Flow (contd.) No Compromise Between Availability and Performance! ASYNC SYNC Zero Data Loss
    • 24. 24 Copyright © 2013, Oracle and/or its affiliates. All rights reserved.  Best data protection, least performance impact  Low cost and complexity  Best way to implement a near DR + Far DR model  Relevant to existing Data Guard ASYNC configurations  Data Guard Failover? No Problem! Just do it – No Data Loss! Active Data Guard Far Sync Benefits
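A hedged configuration sketch of the Far Sync topology above (service names, DB_UNIQUE_NAME values, and the control file path are illustrative):

```sql
-- On the primary: create the control file the far sync instance
-- will be started with.
ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/farsync.ctl';

-- Primary ships redo synchronously to the nearby far sync instance:
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=farsync SYNC AFFIRM
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=farsync';

-- On the far sync instance, redo is forwarded asynchronously to the
-- terminal standby, e.g.:
--   log_archive_dest_2='SERVICE=standby ASYNC
--     VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=standby'
```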
    • 25. 25 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Active Data Guard Real-Time Cascading Eliminates Propagation Delay Primary Standby 1 Standby 2  In 12.1, Standby 1 forwards redo to Standby 2 in real-time, as it is received: no propagation delay for a log switch  Standby 2 (Active Data Guard Standby) is up-to-date for offloading read-only queries and reports SYNC or ASYNC ASYNC  In 11.2, Standby 1 waits till log switch before forwarding redo from archived logs to Standby 2
    • 26. 26 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Data Guard Fast Sync Reduced Primary Database Impact for Maximum Availability Primary Standby Redo Logs Standby Redo Logs Commit Commit Acknowledge  For SYNC transport: remote site acknowledges received redo before writing it to standby redo logs  Reduces latency of commit on primary  Better DR – increased SYNC distance  If network round-trip latency is less than the time for the local online redo log write, synchronous transport will not impact primary database performance NSS RFS LGWR Commit Commit Acknowledge Acknowledge returned on receipt Primary Standby Redo Logs Standby Redo Logs NSS RFS LGWR
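Fast Sync is enabled with the NOAFFIRM attribute on a SYNC destination; a hedged sketch (the service and DB_UNIQUE_NAME are illustrative):

```sql
-- SYNC NOAFFIRM: the standby acknowledges redo on receipt, before the
-- write to its standby redo logs completes.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=standby SYNC NOAFFIRM
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby';
```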
    • 27. 27 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Data Guard Other New Features in Oracle Database 12c Rolling Upgrade With Active Data Guard  Automate complexity through simple PL/SQL Package: DBMS_ROLLING (12.1.0.1 onwards), with simple Init, Build, Start, Switchover, Finish procedures  Additional Data Type Support: XML OR, Binary XML, Spatial, Image, Oracle Text, DICOM, ADTs (simple types, varrays), … Validate Role Change Readiness  Ensure Data Guard configuration ready for switchover with automated health checks – verify no log gaps, perform log switch, detects any inconsistencies, ensures online log files cleared on standby, … DML on Global Temporary Tables  Temporary undo is not logged in redo logs  Enables DML on global temporary tables on Active Data Guard: more reporting support  Set by default on Active Data Guard standby Unique Sequences  Primary allocates a unique range of sequence numbers to each Standby  Enables more flexible reporting choices for Active Data Guard
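The DBMS_ROLLING flow above can be sketched as follows (the future primary's name is illustrative, and each step is normally run only after the previous one completes):

```sql
EXEC DBMS_ROLLING.INIT_PLAN(future_primary => 'standby_db');
EXEC DBMS_ROLLING.BUILD_PLAN;
EXEC DBMS_ROLLING.START_PLAN;   -- prepares the standby for the upgrade
-- ... upgrade the standby database software, let it resynchronize ...
EXEC DBMS_ROLLING.SWITCHOVER;   -- users move to the upgraded database
EXEC DBMS_ROLLING.FINISH_PLAN;  -- brings the old primary forward as standby
```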
    • 28. 28 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Oracle Database 12c High Availability Key New Features  Application Continuity  Global Data Services  Data Guard Enhancements  RMAN Enhancements  Flex ASM  Other HA Enhancements  GoldenGate Update
    • 29. 29 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Fine-grained Table Recovery From Backup  Simple RECOVER TABLE command to recover one or more tables (most recent or older version) from an RMAN backup  Eliminates time and complexity associated with manual restore, recover & export – Enables fine-grained point-in-time recovery of individual tables instead of the contents of the entire tablespace RMAN Backups
    • 30. 30 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Cross-Platform Backup & Restore  Simplifies procedure for platform migration  Minimize read-only impact with multiple incremental backups Simplified Platform Migration Source Database (AIX) Backup to Disk/Tape (data files, optional endian conversion, metadata export) Restore Backup (optional endian conversion, metadata import) Destination Database (Solaris)
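A hedged RMAN sketch of the table recovery described above (the schema, table, point in time, and auxiliary path are illustrative):

```
RMAN> RECOVER TABLE hr.regions_hist
        UNTIL TIME "SYSDATE - 1"
        AUXILIARY DESTINATION '/u01/aux'
        REMAP TABLE hr.regions_hist:regions_hist_recovered;
```

RMAN creates a temporary auxiliary instance under the auxiliary destination, recovers the table to the requested point in time, and imports it back under the remapped name.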
    • 31. 31 Copyright © 2013, Oracle and/or its affiliates. All rights reserved.  Backup and recover specific pluggable databases with new PLUGGABLE DATABASE keywords: RMAN> BACKUP PLUGGABLE DATABASE <PDB1>, <PDB2>;  Familiar BACKUP DATABASE command backs up CDB, including all PDBs  PDB Complete Recovery – RESTORE PLUGGABLE DATABASE <PDB>; – RECOVER PLUGGABLE DATABASE <PDB>;  PDB Point-in-Time Recovery – RMAN> RUN { – SET UNTIL TIME 'SYSDATE-3'; – RESTORE PLUGGABLE DATABASE <PDB>; – RECOVER PLUGGABLE DATABASE <PDB>; – ALTER PLUGGABLE DATABASE <PDB> OPEN RESETLOGS; }  Familiar RECOVER DATABASE command recovers CDB, including all PDBs Oracle Multitenant Backup & Restore Fine-Grained Backup & Recovery to Support Consolidation
    • 32. 32 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Better Performance Other New Features in Oracle Database 12c  Enhanced Multi-section Backup capability: now supports image copies and incremental backups  More efficient synchronization of standby database using simple RMAN command: RECOVER DATABASE … FROM SERVICE  Enhanced Active Duplicate – Cloning workload moved to destination server via auxiliary channels, relieving resource bottlenecks on source – Cloning can now use RMAN compression and multi-section capability to further increase performance
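A hedged sketch of the network-based standby synchronization mentioned above (the connect string and service name are illustrative):

```
RMAN> CONNECT TARGET sys@standby_db
RMAN> RECOVER DATABASE FROM SERVICE primary_db USING COMPRESSED BACKUPSET;
```

This refreshes a lagging standby directly over the network from the primary, with no intermediate backup files.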
    • 33. 33 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Oracle Database 12c High Availability Key New Features  Application Continuity  Global Data Services  Data Guard Enhancements  RMAN Enhancements  Flex ASM  Other HA Enhancements  GoldenGate Update
    • 34. 34 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Automatic Storage Management (ASM) Overview ASM Cluster Pool of Storage Disk Group B Disk Group A Shared Disk Groups Wide File Striping One-to-One Mapping of ASM Instances to Servers ASM Instance Database Instance ASM Disk RAC Cluster Node4 Node3 Node2 Node1 Node5 ASM ASM ASM ASM ASM ASM Instance Database Instance DBA DBA DBB DBB DBC DBB Current State
    • 35. 35 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Flex ASM: Eliminate 1:1 Server Mapping New: ASM Storage Consolidation in Oracle Database 12c ASM Cluster Pool of Storage Disk Group B Disk Group A Shared Disk Groups Wide File Striping Databases share ASM instances ASM Instance Database Instance ASM Disk RAC Cluster Node5 Node4 Node3 Node2 Node1 Node5 runs as ASM Client to Node4 Node1 runs as ASM Client to Node2 Node1 runs as ASM Client to Node4 Node2 runs as ASM Client to Node3 ASM ASM ASM ASM Instance DBA DBA DBB DBB DBC DBB
    • 36. 36 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Flex ASM: Supporting Oracle Database 11g Previous Database Versions Will Host Local ASM Instance ASM Cluster Pool of Storage Disk Group B Disk Group A Shared Disk Groups Wide File Striping Databases share ASM instances ASM Instance Database Instance ASM Disk RAC Cluster Node5 Node4 Node3 Node2 Node1 ASM ASM ASM DBA DBA DBB DBB DBC DBB ASM ASM 11.2 DB 11.2 DB
    • 37. 37 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Oracle Database 12c High Availability Key New Features  Application Continuity  Global Data Services  Data Guard Enhancements  RMAN Enhancements  Flex ASM  Other HA Enhancements  GoldenGate Update
    • 38. 38 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Online Redefinition Enhancements  Improved sync_interim_table performance  Ability to redefine table with VPD policies  Improved resilience of finish_redef_table  Better handling of multi-partition redefinition Other HA Enhancements Online Datafile Move  Relocate a datafile while users are actively accessing data: ALTER DATABASE MOVE DATAFILE …  Maintains data availability during storage migration Separation of Duties  SYSDG / SYSBACKUP: Data Guard & RMAN specific administrative privileges  No access to user data: enforce security standards throughout the enterprise Additional Online Operations  Drop index online / Alter index unusable online / Alter index visible / invisible online  Drop constraint online / Set unused column online  Online move partition: ALTER TABLE … MOVE PARTITION … ONLINE
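A hedged sketch of the online datafile move above (file paths and the ASM disk group are illustrative):

```sql
-- Move a datafile while it stays online and in use, e.g. from a file
-- system into ASM:
ALTER DATABASE MOVE DATAFILE '/u01/oradata/users01.dbf' TO '+DATA';

-- Or move it between file systems, keeping the original copy:
ALTER DATABASE MOVE DATAFILE '/u02/oradata/tools01.dbf'
  TO '/u03/oradata/tools01.dbf' KEEP;
```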
    • 39. 39 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Oracle Database 12c High Availability Key New Features  Application Continuity  Global Data Services  Data Guard Enhancements  RMAN Enhancements  Flex ASM  Other HA Enhancements  GoldenGate Update
    • 40. 40 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Oracle GoldenGate 12c*: Low-Impact, Real-Time Data Integration & Transactional Replication. [Diagram: log-based changed data is captured from Oracle and non-Oracle databases, including Oracle Database 12c, and delivered to multiple targets.] Use cases: zero-downtime upgrade and migration (new DB/HW/OS/application); query and report offloading (reporting database); data synchronization within the enterprise (ODS); real-time BI, operational reporting, and MDM (data warehouse, via Data Integrator); event-driven architecture and SOA (message bus); active-active high availability (fully active distributed databases across global data centers); disaster recovery for non-Oracle databases (exact copy of primary). *GoldenGate 12c for Oracle Database 12c will be available in FY14.
    • 41. 41 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. GoldenGate Zero-Downtime Migration/Upgrade: Seamless Migrations and Upgrades to Oracle Database 12c*. Consolidate, migrate, and maintain systems without downtime; minimize risk with a failback option; validate data before switchover; use active-active replication for phased user migration. [Diagram: real-time replication for migrations from a non-Oracle ERP, an Oracle 10.2 CRM, and an Oracle 11.2 DW to Oracle Database 12c, with switchover and an optional failback data flow; data is compared and verified using Oracle GoldenGate Veridata.] *GoldenGate 12c for Oracle Database 12c will be available in FY14.
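    At its core, such a migration pairs a capture (Extract) process on the source with a delivery (Replicat) process on the target. A minimal, illustrative parameter-file sketch; the process names, credential aliases, trail path, and schema name are hypothetical, and a real deployment also needs manager, data-pump, and initial-load configuration:

```
-- ext1.prm: capture committed changes from the source schema
EXTRACT ext1
USERIDALIAS gg_src
EXTTRAIL ./dirdat/aa
TABLE crm.*;

-- rep1.prm: apply the trail to the Oracle Database 12c target
REPLICAT rep1
USERIDALIAS gg_tgt
MAP crm.*, TARGET crm.*;
```

Running replication in both directions (a second Extract/Replicat pair from target back to source) is what provides the optional failback path shown on the slide.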
    • 42. 42 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Oracle GoldenGate for Active-Active Databases: Increase ROI on Existing Servers & Synchronize Data. Utilize secondary systems for transactions; enable continuous availability during unplanned and planned outages; synchronize data across global data centers; use intelligent conflict detection and resolution. [Diagram: heterogeneous, bi-directional real-time replication among Oracle Database 12c, Oracle 10.2 (App2), Oracle 11.2 (App3), and a non-Oracle database.] *GoldenGate 12c for Oracle Database 12c will be available in FY14.
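    The conflict detection and resolution mentioned above is configured per table in the Replicat MAP statement. A hedged sketch (the schema, table, and timestamp column names are illustrative) that resolves an update/update conflict by keeping the row with the latest modification timestamp:

```
MAP app.orders, TARGET app.orders,
  COMPARECOLS (ON UPDATE ALL),
  RESOLVECONFLICT (UPDATEROWEXISTS,
    (DEFAULT, USEMAX (last_upd_ts)));
```

COMPARECOLS, RESOLVECONFLICT, UPDATEROWEXISTS, and USEMAX are standard GoldenGate CDR keywords; for this resolution to be deterministic, every site must maintain the last-update timestamp column reliably.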
    • 43. 43 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Extreme Availability: Summary. Oracle Database 12c offers a comprehensive, sophisticated set of high availability (HA) capabilities. These capabilities further reduce downtime, significantly improve productivity, and eliminate traditional compromises.
    • 44. 44 Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Safe Harbor Statement THE PRECEDING IS INTENDED TO OUTLINE OUR GENERAL PRODUCT DIRECTION. IT IS INTENDED FOR INFORMATION PURPOSES ONLY, AND MAY NOT BE INCORPORATED INTO ANY CONTRACT. IT IS NOT A COMMITMENT TO DELIVER ANY MATERIAL, CODE, OR FUNCTIONALITY, AND SHOULD NOT BE RELIED UPON IN MAKING PURCHASING DECISIONS. THE DEVELOPMENT, RELEASE, AND TIMING OF ANY FEATURES OR FUNCTIONALITY DESCRIBED FOR ORACLE’S PRODUCTS REMAINS AT THE SOLE DISCRETION OF ORACLE.