GOLDENGATE TRAINING
By Vipin Mishra
GoldenGate expert
Training Overview
Topics
Overview of GoldenGate
GoldenGate Replication & Components
GoldenGate Director
GoldenGate Veridata
GoldenGate vs Oracle Streams
GoldenGate vs Data Guard
GoldenGate Trail Files
How Oracle GoldenGate Works
GoldenGate Checkpointing
GoldenGate Download and Installation
Prerequisites of Installation
Database Privileges for the GoldenGate User
Supplemental Logging
Trandata
GoldenGate Initial Data Load
GoldenGate DML Replication
DDL Replication - Overview
GoldenGate Parameter Files
Logdump Utility
CCM and ICM
GoldenGate Upgrade
GoldenGate Troubleshooting
GoldenGate Performance Commands
BATCHSQL
GoldenGate Functions
GoldenGate Abends, Bugs, and Fixes
GoldenGate Failover Document and DR Plan
GoldenGate Reporting through Heartbeat
Custom Data Manipulation Requirements for the Database Replicat Process
Additional topic for discussion
Overview of GoldenGate
• Oracle GoldenGate is a heterogeneous replication solution.
• Oracle acquired GoldenGate Software Inc. (founded in 1995 by Eric Fish and Todd Davidson), a leading
provider of real-time data integration solutions, on September 3, 2009.
• GoldenGate supports:
– Zero Downtime Upgrade and Migration
– System Integration / Data Synchronization
– Query and Report offloading
– Real-time Data distribution
– Real-time Data Warehousing
– Live standby database
– Active-active high availability
• Controversial replacement for Oracle Streams
• Oracle GoldenGate is Oracle’s strategic solution for real time data integration.
• GoldenGate software enables mission critical systems to have continuous availability and access to real-time
data.
• It offers a fast and robust solution for replicating transactional data between operational and analytical
systems.
• Today there are more than 500 customers around the world using GoldenGate technology for over 4000
solutions.
GoldenGate Supported Topologies
• Unidirectional – reporting instance, query/report offloading
• Bi-directional / Peer-to-Peer – instant failover, active-active load balancing, high availability
• Broadcast – data distribution
• Consolidation – data warehousing
• Cascading – scalability, database tiering
GoldenGate Supported Databases
Oracle GoldenGate for Non-Oracle Databases
Supported non-Oracle databases include:
IBM DB2 on Windows, UNIX and Linux
Microsoft SQL Server 2000, 2005, 2008
Sybase on Windows, UNIX and Linux
Teradata on Windows, UNIX and Linux
MySQL on Windows, UNIX and Linux
TimesTen on Windows and Linux (delivery only)
Oracle Streams versus GoldenGate
GoldenGate Basic Architecture
GoldenGate Replication & Components
GoldenGate replication consists of three basic utilities: "extract", "replicat", and
"data pump". In addition, all three of these utilities access "trail files", which are
flat files that contain formatted GoldenGate data.
An extract group mines a database's redo logs and writes information about
the changes it finds into trail files. Extract is functionally similar to the Streams
"capture" process.
A replicat group reads information from trail files and then writes those
changes into database tables. Replicat is functionally similar to the Streams
"apply" process.
A data pump group copies information between trail files. A data pump is functionally
similar to the Streams "propagate" process.
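The three utilities are registered as process groups from GGSCI. A minimal sketch (the group names, trail prefixes, and sizes here are illustrative, not taken from this deck):

```text
-- on the source
ADD EXTRACT ext1, TRANLOG, BEGIN NOW
ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1, MEGABYTES 100
ADD EXTRACT pmp1, EXTTRAILSOURCE ./dirdat/lt
ADD RMTTRAIL ./dirdat/rt, EXTRACT pmp1, MEGABYTES 100
-- on the target
ADD REPLICAT rep1, EXTTRAIL ./dirdat/rt
```

The extract writes to the local trail, the pump ships it to the remote trail, and the replicat applies from the remote trail.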
GoldenGate Configuration Options
GoldenGate Classic Capture
In classic capture mode, the Extract process reads the source database's online redo and archive logs directly and writes change data to a local trail, from which a data pump reads.
GoldenGate Integrated Capture
In integrated capture mode, Extract receives logical change records (LCRs) from the database's LogMiner server instead of reading the redo logs itself, and writes them to the local trail for the data pump.
GoldenGate Downstream Capture
Real Time Downstream Mode
GoldenGate Downstream Capture
Downstream Archive Log Mode
GoldenGate Bi-Directional Replication
• Also known as Active-Active Replication
• Each server runs its own Capture, Data Pump, and Replicat: changes captured from Server A's online redo logs flow through a local trail and a remote trail to the Replicat on Server B, and the reverse path replicates Server B's changes back to Server A.
GoldenGate Supported Data Types
• The following data types are supported for both classic and integrated
capture
– NUMBER
– BINARY_FLOAT
– BINARY_DOUBLE
– CHAR
– VARCHAR2
– LONG
– NCHAR
– NVARCHAR2
– RAW
– LONG RAW
– DATE
– TIMESTAMP
GoldenGate Supported Data Types
• There is limited support in classic capture for the following data types:
– INTERVAL DAY TO SECOND
– INTERVAL YEAR TO MONTH
– TIMESTAMP WITH TIME ZONE
– TIMESTAMP WITH LOCAL TIME ZONE
• The following data types are not supported
– Abstract data types with scalars, LOBs, VARRAYs, nested tables, or REFs
– ANYDATA
– ANYDATASET
– ANYTYPE
– BFILE
– MLSLABEL
– ORDDICOM
– TIMEZONE_ABBR
– URITYPE
– UROWID
GoldenGate Restrictions
• Neither capture method supports
– Database replay
– EXTERNAL tables
– Materialized views with ROWID
• Classic capture does not support
– IOT mapping tables
– Key compressed IOTs
– XMLType tables stored as XML Object Relational
– Distributed Transactions
– XA and PDML distributed transactions
– Capture from OLTP-compressed tables
– Capture from compressed tablespaces
– Exadata Hybrid Columnar Compression (EHCC)
GoldenGate Oracle-Reserved Schemas
The following schema names are reserved by Oracle and should not be configured
for GoldenGate replication:
$AURORA, $JIS, $ORB, $UNAUTHENTICATED, $UTILITY, ANONYMOUS, AURORA, CTXSYS,
DBSNMP, DMSYS, DSSYS, EXFSYS, MDSYS, ODM, ODM_MTR, OLAPSYS, ORDPLUGINS,
ORDSYS, OSE$HTTP$ADMIN, OUTLN, PERFSTAT, PUBLIC, REPADMIN, SYS, SYSMAN,
SYSTEM, TRACESVR, WKPROXY, WKSYS, WMSYS, XDB
GoldenGate RAC Support
• RAC support has some limitations in classic capture mode
– Extract can only run against one instance
– If an instance fails:
• Manager must be stopped on the failed node
• Manager and Extract must be started on a surviving node
– Failover can be configured in Oracle Grid Infrastructure
– Additional archive log switching may be required in archive log mode
– Before shutting down extract process
• Insert dummy record into a source table
• Switch log files on all nodes
– Additional configuration required to access ASM instance
• Shared storage for trails can be:
– OCFS
– ACFS
– DBFS
– No mention of NFS in the documentation
GoldenGate Trail File
Each record in the trail file contains the following information:
· The operation type, such as an insert, update, or delete
· The transaction indicator (TransInd): 00 beginning, 01 middle, 02 end, or 03
whole transaction
· The before or after indicator (BeforeAfter) for update operations
· The commit timestamp
· The time that the change was written to the GoldenGate file
· The type of database operation
· The length of the record
· The Relative Byte Address (RBA) within the GoldenGate file
· The schema and table name
Configuration
How Oracle GoldenGate Works
Modular De-Coupled Architecture
LAN/WAN
Internet
TCP/IP
Bi-directional
Capture
Trail
Pump Delivery
Trail
Source
Database(s)
Target
Database(s)
Capture: committed transactions are captured (and can be filtered) as they occur by reading
the transaction logs.
Trail: stages and queues data for routing.
Pump: distributes data for routing to target(s).
Route: data is compressed, encrypted
for routing to target(s).
Delivery: applies data with transaction integrity,
transforming the data as required.
• Capture, Pump, and Delivery save positions to a checkpoint file so they can recover in
case of failure
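The checkpoint positions can be inspected from GGSCI; a sketch (group names are examples):

```text
GGSCI> INFO EXTRACT ext1, SHOWCH
GGSCI> INFO REPLICAT rep1, SHOWCH
```

SHOWCH displays the read and write checkpoints for the process, including the redo or trail sequence number and RBA each one points at.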
GoldenGate Checkpointing
Capture, Pump, and Delivery each maintain their own checkpoint:
• The Capture checkpoint records the current read position in the redo stream and the start of the oldest open (uncommitted) transaction, so uncommitted work can be re-read after a restart.
• The Pump checkpoint records its current read position in the source trail and its current write position in the target trail.
• The Delivery checkpoint records its current read position in the target trail.
Trails are written in commit order, so only committed transactions move downstream from the source trail to the target trail and on to the target database.
GoldenGate Installation
GoldenGate GGSCI
Structure of checkpoint table
• SQL> desc ggate.ggschkpt
• Name Null? Type
• ----------------------------------------- -------- ----------------------------
• GROUP_NAME NOT NULL VARCHAR2(8)
• GROUP_KEY NOT NULL NUMBER(19)
• SEQNO NUMBER(10)
• RBA NOT NULL NUMBER(19)
• AUDIT_TS VARCHAR2(29)
• CREATE_TS NOT NULL DATE
• LAST_UPDATE_TS NOT NULL DATE
• CURRENT_DIR NOT NULL VARCHAR2(255)
• LOG_CSN VARCHAR2(129)
• LOG_XID VARCHAR2(129)
• LOG_CMPLT_CSN VARCHAR2(129)
• LOG_CMPLT_XIDS VARCHAR2(2000)
• VERSION NUMBER(3)
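A checkpoint table with this structure is created from GGSCI after logging in to the database; a sketch assuming the ggate.ggschkpt name shown above:

```text
GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD CHECKPOINTTABLE ggate.ggschkpt
```

The table can also be named in the GLOBALS file (CHECKPOINTTABLE ggate.ggschkpt) so that ADD REPLICAT picks it up by default.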
GoldenGate Download and Installation
• Go to https://edelivery.oracle.com/
• Accept the terms and conditions after signing in
• Select the product pack and platform
• Product pack: Oracle Fusion Middleware
• Platform: AIX or Oracle Solaris (as per your requirement)
• Click Go
• Select the GoldenGate Management Pack for Oracle Database
• Download it to your desktop or preferred location
• Once the download is finished, ftp or winscp the tar file to the GoldenGate
home directory on your server (let's say /ogg/11.2)
• Create a .profile for this GoldenGate installation
• Untar the file in the home directory:
• tar -xvf abc.tar
• ./ggsci
• You will be taken to the GGSCI interface
• GGSCI> create subdirs
• GGSCI> start mgr
• /ogg/11.2 should have at least 500 GB of space
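The unpack-and-initialize steps above can be sketched as a shell session (the tar file name is an example; use the media file you actually downloaded, and note that START MGR needs a dirprm/mgr.prm with at least a PORT entry):

```shell
# create the GoldenGate home and unpack the downloaded media
mkdir -p /ogg/11.2 && cd /ogg/11.2
tar -xvf /tmp/abc.tar        # example name; substitute the downloaded tar file
# create the working subdirectories and start Manager
./ggsci <<'EOF'
CREATE SUBDIRS
START MGR
EOF
```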
• Ask the DBA to create a GoldenGate user in the database; let's say gguser is the GoldenGate user
• gguser should have connect/resource and select privileges in the source and the dba
privilege in the target if we are doing DML replication
• gguser should have the dba role in both source and target if we are using both DDL and DML
replication.
• SQL> create tablespace gg_data
2 datafile '/u02/oradata/gg_data01.dbf' size 200m;
• SQL> create user gguser identified by gguser
2 default tablespace gg_data
3 temporary tablespace temp;
• User created.
Prerequisites
• SQL> grant connect, resource to gguser;
• Grant succeeded.
• SQL> grant select any dictionary, select any table to gguser;
• Grant succeeded.
• SQL> grant create table to gguser;
• Grant succeeded.
• SQL> grant flashback any table to gguser;
• Grant succeeded.
• SQL> grant execute on dbms_flashback to gguser;
• Grant succeeded.
• SQL> grant execute on utl_file to gguser;
• Grant succeeded.
• Check the login:
• GGSCI> dblogin userid gguser, password gguser
Supplemental Logging
Supplemental logging is critical to healthy replication, especially for tables with update/delete changes. It can be enabled at the PK, UK, KEYCOLS, or ALL-columns level, for both new and existing objects.
For new objects under DDL replication, DDLOPTIONS ADDTRANDATA (with GETREPLICATES and REPORT) adds supplemental logging on the source automatically; for objects replicated to the target, supplemental logging must be added on the target as well.
Supplemental Logging
– Required at the database level (Minimum Level)
• ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
• Minimal Supplemental Logging
“logs the minimal amount of information needed for LogMiner to identify, group, and merge the redo
operations associated with DML changes”
– Identification key logging (PK, UK, FK)
• Table Level
• Database Level
• Method
– SQL e.g. ALTER TABLE <> ADD SUPPLEMENTAL LOG DATA …
– GoldenGate e.g. ADD TRANDATA <Table Name>
Supplemental Logging
Source timeline:
t1 Create table
t2 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS
t3 Create PK index
t4 add supplemental log data (id) (temp)
t5 drop SUPPLEMENTAL LOG DATA (ALL)
t6 add supplemental log data (id)
t7 drop supplemental log data (id) (temp)
Target timeline:
t1 Create table
t2 Create PK index
t3 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS
t4 add supplemental log data (id) (temp)
t5 drop supplemental log data (id) (temp)
1. ADDTRANDATA on source for new objects.
2. Monitor replicated objects on target and add supplemental logging manually before any DML change.
Add Trandata
• For Oracle, ADD TRANDATA by default enables table-level supplemental logging. The supplemental log
group includes one of the following sets of columns, in the listed order of priority, depending on what is
defined on the table:
1. Primary key
2. First unique key alphanumerically with no virtual columns, no UDTs, no function-based
columns, and no nullable columns
3. First unique key alphanumerically with no virtual columns, no UDTs, or no function-based
columns, but can include nullable columns
4. If none of the preceding key types exist (even though there might be other types of keys
defined on the table), Oracle GoldenGate constructs a pseudo key of all columns that
the database allows to be used in a unique key, excluding virtual columns, UDTs,
function-based columns, and any columns that are explicitly excluded from the Oracle
GoldenGate configuration.
• The command issues an ALTER TABLE command with an ADD SUPPLEMENTAL LOG DATA clause that is
appropriate for the type of unique constraint (or lack of one) that is defined for the table.
• If you are using the Oracle GoldenGate DDL replication feature, use the ADD SCHEMATRANDATA
command to log the required supplemental data. It is possible to use ADD
TRANDATA when DDL support is enabled, but only if you can guarantee one of the following:
● You can stop DML activity on any and all tables before users or applications perform DDL on them.
● You cannot stop DML activity before the DDL occurs, but you can guarantee that:
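In GGSCI, the two commands look like this (the schema and table names are examples):

```text
GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD TRANDATA sa.accounts
GGSCI> ADD SCHEMATRANDATA sa
```

ADD TRANDATA enables table-level supplemental logging for DML-only setups; ADD SCHEMATRANDATA enables it at the schema level, which is the safe choice when DDL support is enabled.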
Project overview
• The PMSPRD source server replicates via DML replication to the PMSPRD2 target server, to a reporting database on the same server, and over WAN/TCP to a DR server.
Parameter files
• Manager process
• port 15000
• dynamicportlist 15010-15020
• PURGEOLDEXTRACTS ./dirdat/pa*, USECHECKPOINTS, MINKEEPDAYS 2
• AUTOSTART extract E_actPRD
• AUTOSTART extract P_actPRD
• autorestart extract P_actPRD, retries 6, waitminutes 25
• autorestart extract E_actPRD, retries 6, waitminutes 25
Creating parameter file
• Extract in source
• EXTRACT E_actPRD
• SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")
• SETENV (ORACLE_HOME = "/u01/app/oracle/product/10.2.0.4a")
• SETENV (ORACLE_SID = "pmsp11")
• EXTTRAIL ./dirdat/pa
• DISCARDFILE ./dirrpt/E_actPRD.dsc, APPEND
• DBOPTIONS ALLOWUNUSEDCOLUMN
• USERID gguser, PASSWORD gguser
• TABLE sa.*;
• --
• -- Enter the following statements from GGSCI before starting this and other processes
• --
• -- delete extract E_actPRD
• -- add extract E_actPRD, tranlog, begin 2011-07-22 08:30:00
• -- add exttrail ./dirdat/pa, extract E_actPRD, megabytes 100
DR Pump parameter
• EXTRACT P_actDR
• SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")
• SETENV (ORACLE_HOME = "/u01/app/oracle/product/10.2.0.4a")
• SETENV (ORACLE_SID = "pmsp11")
• RMTHOST cmfalgaqdarst01.comfin.ge.com, mgrport 15000
• RMTTRAIL ./dirdat/pa
• PASSTHRU
• TABLE sa.*;
• --
• -- Enter the following statements from GGSCI before starting this process if DB is
newly created
• --
• -- delete extract P_actDR
• -- add extract P_actDR, exttrailsource ./dirdat/pa
• -- add rmttrail ./dirdat/pa, extract P_actDR, megabytes 20
• Pump in source
• EXTRACT P_actPRD
• SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")
• SETENV (ORACLE_HOME = "/u01/app/oracle/product/10.2.0.4a")
• SETENV (ORACLE_SID = "pmsp11")
• RMTHOST cmfciohpdb17.comfin.ge.com, mgrport 15000
• RMTTRAIL ./dirdat/pa
• PASSTHRU
• TABLE schemaname.*;
• --
• -- Enter the following statements from GGSCI before starting this process if DB is
newly created
• --
• -- delete extract P_actPRD
• -- add extract P_actPRD, exttrailsource ./dirdat/pa
• -- add rmttrail ./dirdat/pa, extract P_actPRD, megabytes 20
• REPLICAT R_rptPRD in reporting db ( same server)
• --
• REPLICAT R_rptPRD
• --
• -- Environment Variables
• SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")
• SETENV (ORACLE_HOME ="/u01/app/oracle/product/10.2.0.4")
• SETENV (ORACLE_SID = "pmsr11")
• --
• ASSUMETARGETDEFS
• DISCARDFILE ./dirrpt/R_rptPRD.dsc, APPEND
• USERID gguser, PASSWORD gguser
• --
• BATCHSQL
• GETTRUNCATES
• -- Call DB procedure to populate parent key field in r7270_arhist table in PMSV2, PMSV3, HISV2, HISV3 schemas
• --Example
• --MAP pmsv2.r7270_arhist, TARGET pmsv2.r7270_arhist, &
• --SQLEXEC (SPNAME gguser.find_open_item, ID pklookup, &
• --PARAMS (in_iHistKey = r7270_arhist_p, in_sSchema = 'PMSV2')), &
• --COLMAP (USEDEFAULTS, parent_key=pklookup.numOut) ;
• --COLMAP (USEDEFAULTS, parent_key=pklookup.numOut,ggstamp=@datenow() );
• --
• -- Use COLMAP to convert value '*CON' to 'CONS' in nbr_sequence column of t4001_ubb_asset table in PMSV2 and
PMSV3
• --
• --MAP pmsv2.t4001_ubb_asset, TARGET pmsv2.t4001_ubb_asset, &
• --COLMAP (USEDEFAULTS, nbr_sequence=@strsub(nbr_sequence, "*CON", "CONS") );
• --
• MAP sa.*, TARGET sa.*;
• -- Enter the following statements from GGSCI before starting this process
• --
• -- delete replicat R_rptPRD
• -- add replicat R_rptPRD, exttrail ./dirdat/pa
•
• REPLICAT R_actPRD (Main target)
• --
• SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")
• SETENV (ORACLE_HOME ="/u01/app/oracle/product/10.2.0.4a")
• SETENV (ORACLE_SID = "pmspr30")
• --
• ASSUMETARGETDEFS
• --
• DISCARDFILE ./dirrpt/R_actPRD.dsc, APPEND
• --
• USERID gguser, PASSWORD gguser
• --
• MAP sa.*, TARGET sa.*;
• --
• -- Enter the following statements from GGSCI before starting this process
• --
• -- delete replicat R_actPRD
• -- add replicat R_actPRD, exttrail ./dirdat/pa
• Replicat in DR
• REPLICAT R_actDR
• SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")
• SETENV (ORACLE_HOME ="/u01/app/oracle/product/10.2.0.4a")
• SETENV (ORACLE_SID = "pmspr40")
• --
• ASSUMETARGETDEFS
• APPLYNOOPUPDATES
• --
• DISCARDFILE ./dirrpt/R_actDR.dsc, APPEND, MEGABYTES 50
• --
• GETTRUNCATES
• --
• USERID gguser, PASSWORD gguser
• --
• MAP sa.*, TARGET sa.*;
• --
• -- Enter the following statements from GGSCI before starting this process
• --
• -- delete replicat R_actDR
• -- add replicat R_actDR, exttrail ./dirdat/pa
Logdump Utility
• Logdump (the Oracle GoldenGate Log File Dump Utility) is a useful Oracle GoldenGate utility that
enables you to search for, filter, view, and save data that is stored in a GoldenGate trail or extract
file. The following sections summarize the useful commands of Logdump.
Logdump continue
• 1. Find out which local (exttrail) and remote trail (rmttrail) files the data pump is reading and writing to.
GGSCI> info <pump>, detail
2. In Logdump, issue the following series of commands to open the remote trail and find the last record.
Logdump> open <trail file from step 1>
Logdump> ghdr on
Logdump> count
Logdump> skip (<total count>-1)
Logdump> n (for next)
3. Write down the Table name, AuditPos and AuditRBA of the last record. (Do not be confused by the Logdump field names: AuditRBA
shows the Oracle redo log sequence number, and AuditPos shows the relative byte address of the record.)
• The following example shows that the operation was captured from Oracle's redo log sequence number 1941 at RBA 1965756.
Logdump 12 >n
___________________________________________________________________
Hdr-Ind : E (x45) Partition : . (x04)
UndoFlag : . (x00) BeforeAfter: A (x41)
RecLength : 12383 (x305f) IO Time : 2005/11/17 16:02:38.295.847
IOType : 5 (x05) OrigNode : 255 (xff)
TransInd : . (x03) FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 1941 AuditPos : 1965756
Continued : N (x00) RecCount : 1 (x01)
2005/11/17 16:02:38.295.847 Insert Len 12383 RBA 7670707
Name: LLI.S_EVT_ACT
After Image: Partition 4
0000 0009 0000 0005 3532 3432 3900 0100 1500 0032 | ........52429......2
3030 352D 3131 2D31 373A 3136 3A30 313A 3531 0002 | 005-11-17:16:01:51..
000A 0000 0006 5445 5354 2031 0003 0015 0000 3230 | ......TEST 1......20
3035 2D31 312D 3137 3A31 363A 3031 3A35 3100 0400 | 05-11-17:16:01:51...
0A00 0000 0654 4553 5420 3200 0500 0500 0000 0130 | .....TEST 2........0
0006 000A 0000 0000 0000 0000 0000 0007 0005 0000 | ....................
0001 3000 0800 0B00 0000 0778 2035 3234 3239 0009 | ..0........x 52429..
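The COUNT/SKIP arithmetic in step 2 is just "total minus one"; as a tiny shell sketch (the count value here is hypothetical):

```shell
# 'count' stands in for the record total reported by Logdump's COUNT command
count=3290
skip=$((count - 1))
echo "skip $skip"    # value to pass to Logdump's SKIP command; prints: skip 3289
```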
Logdump continue
• 4. In Logdump, issue the following series of commands to open the local trail file and find the record that corresponds to the one you found
in Step 2.
Logdump>open <exttrail file>
• Logdump>ghdr on
Logdump>filter include AuditRBA <number>
Logdump>filter include filename <table_name>
Logdump>filter match all
Logdump>n
• Note: The Logdump filter command to search for a record's redo RBA is "AuditRBA", even though the trail record's header calls it
"AuditPos". This is an anomaly of the Logdump program.
•
5. Verify that the operation is from the same Oracle redo log sequence number as the one you recorded in step 3. (AuditRBA as shown in
the trail header), as there is no way to filter for this in Logdump.
Hdr-Ind : E (x45) Partition : . (x04)
UndoFlag : . (x00) BeforeAfter: A (x41)
RecLength : 12383 (x305f) IO Time : 2005/11/17 16:01:51.000.000
IOType : 5 (x05) OrigNode : 255 (xff)
TransInd : . (x03) FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 1941 AuditPos : 1965756 <- same as the one in step 2.
Continued : N (x00) RecCount : 1 (x01)
2005/11/17 16:01:51.000.000 Insert Len 12383 RBA 7297435
Name: LLI.S_EVT_ACT
After Image: Partition 4
0000 0009 0000 0005 3532 3432 3900 0100 1500 0032 | ........52429......2
3030 352D 3131 2D31 373A 3136 3A30 313A 3531 0002 | 005-11-17:16:01:51..
000A 0000 0006 5445 5354 2031 0003 0015 0000 3230 | ......TEST 1......20
3035 2D31 312D 3137 3A31 363A 3031 3A35 3100 0400 | 05-11-17:16:01:51...
0A00 0000 0654 4553 5420 3200 0500 0500 0000 0130 | .....TEST 2........0
0006 000A 0000 0000 0000 0000 0000 0007 0005 0000 | ....................
0001 3000 0800 0B00 0000 0778 2035 3234 3239 0009 | ..0........x 52429..
Filtering suppressed 3289 records
6. Compare the output of the record (data part) from the remote trail with that of the record from the local trail, and make sure it is the
same. This shows they are the same data record.
Logdump continue
• 7. In Logdump, issue the following commands to clear the filter and find the next record in the local trail after the one found in Step
4. Logdump searches based on the GoldenGate record's RBA within the trail file (the "extrba"). Write down the GoldenGate RBA. In
the following example, it would be 7309910.
Logdump>filter clear
Logdump>n
___________________________________________________________________
Hdr-Ind : E (x45) Partition : . (x04)
UndoFlag : . (x00) BeforeAfter: A (x41)
RecLength : 1005 (x03ed) IO Time : 2005/11/17 16:01:52.000.000
IOType : 5 (x05) OrigNode : 255 (xff)
TransInd : . (x03) FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 1941 AuditPos : 1979280
Continued : N (x00) RecCount : 1 (x01)
2005/11/17 16:01:52.000.000 Insert Len 1005 RBA 7309910
Name: LLI.S_EVT_ACT_FNX
After Image: Partition 4
0000 000A 0000 0006 3130 3438 3537 0001 0015 0000 | ........104857......
3230 3035 2D31 312D 3137 3A31 363A 3031 3A35 3100 | 2005-11-17:16:01:51.
0200 0A00 0000 0654 4553 5420 3100 0300 1500 0032 | .......TEST 1......2
3030 352D 3131 2D31 373A 3136 3A30 313A 3531 0004 | 005-11-17:16:01:51..
000A 0000 0006 5445 5354 2032 0005 000A 0000 0000 | ......TEST 2........
0000 0000 0000 0006 0005 0000 0001 3000 0700 0D00 | ..............0.....
0000 0950 4944 3130 3438 3537 0008 0003 0000 4E00 | ...PID104857......N.
•
8. In GGSCI, alter the data pump to start processing at the trail sequence number (extseqno) and relative byte address (extrba) noted
in Step 7. The trail sequence number is the local trail that you opened in Logdump in step 4.
GGSCI> alter extract <pump>, extseqno <trail sequence no>, extrba <trail rba>
For the previous example (assuming the trail sequence number is 000003), the syntax would be:
alter extract pmpsbl, extseqno 3, extrba 7309910
9. Start the data pump.
Logdump: Other Commands
• Search for a timestamp in the trail file:
• Logdump 6 >sfts 2013-08-04 <- searches for the timestamp '2013-08-04'
• >filter include filename FIN_AUDIT.FIN_GL_ATTR
Which will filter out all records that do not contain the table name
specified and narrow down the set of records that subsequent commands
will operate on until a “>filter clear” is issued
• >pos FIRST
Which will go to the first record in the file
• >pos <rba>
Which will go to a specific rba in the trail file
• >log to <filename>.txt
Filtering Records
You can do some pretty fancy stuff with LOGDUMP filtering. A whole suite of commands are set aside for
this. We can filter on just about anything that exists in the Trail file, such as process name, RBA, record
length, record type, even a string!
The following example shows the required syntax to filter on DELETE operations. Note that
LOGDUMP reports how many records have been excluded by the filter.
Logdump 52 >filter include iotype delete
Logdump 53 >n
2010/11/09 13:31:40.000.000 Delete Len 17 RBA 5863
Name: SRC.USERS
Before Image: Partition 4 G b
0000 000d 0000 0009 414e 4f4e 594d 4f55 53 | ........ANONYMOUS
Filtering suppressed 42 records
Conditions for extract abend
• The source Manager is stopped
• The database is down
• The server is down (patch or reboot)
• A lookup object is created and dropped within a few seconds
• LOB objects and long binary values in columns
• Redo logs switch while archive logging is not enabled in the
database
Conditions for Pump abend
• Database is down
• Source Server reboot or patch
• Target server reboot or patch
Conditions for Replicat abend
•A source table does not exist in the target database
•The target database is down for cold backup
•The target server is down
•The source table structure does not match the target table structure
•The discard file exceeds its predefined size limit
•Column positions differ between the source and target databases
•Replicat cannot find the current trail position in the trail directory
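When any of these abends occurs, the usual first steps from GGSCI are (rep1 is an example group name):

```text
GGSCI> INFO ALL
GGSCI> VIEW REPORT rep1
GGSCI> VIEW GGSEVT
```

INFO ALL shows which process abended, VIEW REPORT shows that process's report file with the error, and VIEW GGSEVT shows the GoldenGate event log (ggserr.log).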
Check GoldenGate Replication Performance
• GGSCI> stats <group_name>, totalsonly *
• GGSCI> lag <group_name>
• GGSCI> send <pump>, gettcpstats
• GGSCI> send <group_name>, showch
• GGSCI> stats <group_name>, totalsonly *.*
Ideal conditions for Replication
• Both databases should be in sync.
• ASSUMETARGETDEFS or SOURCEDEFS should be used in the Replicat parameter file.
• All conversion should be performed on the Replicat side.
Cons of Replication
• DDL (data definition language) replication is still maturing and can be an issue.
Supplemental logging causes extra redo generation in the log storage. Licensing
cost is higher compared to other replication products. Configuring GoldenGate
in HA (High Availability) mode needs extra VIPs, which can be
challenging in a RAC environment.
• Integration of GoldenGate with monitoring tools like OEM (Oracle Enterprise
Manager) is still not available, and most DBAs monitor it with custom scripts.
• File system dependent
• Application synchronization must be designed in the application
Pros of Replication
•Storage hardware independent
•One solution for all applications/data on a server
•Failover/failback may be integrated and automated
•Read access for the second copy, supporting horizontal scaling
•Low cost for software
•Easy to set up
•Easy to manage, especially in resuming and/or recovery of activities
•Speed (faster than Streams)
BATCHSQL
Why BATCHSQL
In default mode the Replicat process applies SQL to the target database one statement at a time. This
often causes a performance bottleneck where the Replicat process cannot apply the changes quickly
enough compared to the rate at which the Extract process delivers the data. Despite configuring additional
Replicat processes, performance may still be a problem.
Use of BATCHSQL
GoldenGate has addressed this issue through the BATCHSQL Replicat configuration parameter.
As the name implies, BATCHSQL organizes similar SQL statements into batches and applies them all at once. The
batches are assembled into arrays in a memory queue on the target database server, ready to be applied.
Similar SQL statements are those that perform a specific operation type (insert, update, or delete)
against the same target table with the same column list. For example, multiple inserts into table A
would be in a different batch from inserts into table B, as would updates on table A or B. Furthermore,
referential constraints are considered by GoldenGate: dependent transactions in different batches
are executed first, dictating the order in which batches are executed. Even with referential integrity
constraints, BATCHSQL can increase Replicat performance significantly.
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000
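In context, the parameter sits in the Replicat parameter file; a sketch reusing the illustrative names from earlier sections:

```text
REPLICAT rep1
USERID gguser, PASSWORD gguser
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/rep1.dsc, APPEND
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000
MAP sa.*, TARGET sa.*;
```

BATCHESPERQUEUE caps the number of batches held in each memory queue and OPSPERBATCH caps the statements per batch; the values shown are tuning examples, not recommendations.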
Example of batchsql performance
• GoldenGate replicat performance
• GGSCI> stats r_xxxx TOTALSONLY * REPORTRATE sec
• You get the detailed statistics of operations per second since the start of the process, since the beginning of the day and last hour for all
tables replicated by that process. For instance:
• *** Hourly statistics since 2012-04-16 13:00:00 ***
• Total inserts/second: 1397.76
• Total updates/second: 1307.46
• Total deletes/second: 991.50
• Total discards/second: 0.00
• Total operations/second: 3696.71
• So here we can see it is doing a bit more than 3500 operations per second, divided quite evenly between inserts, updates, and deletes. As
GoldenGate is usually used for real-time replication and there are no big operations, the client does not use performance-related
parameters. But this time I decided to play with them. After adding both BATCHSQL and INSERTAPPEND to the parameter file of the
replicat process, the results were the following (after more than 10 minutes running), and performance was still increasing:
• *** Hourly statistics since 2012-04-16 13:43:17 ***
• Total inserts/second: 2174.92
• Total updates/second: 2540.20
• Total deletes/second: 1636.16
• Total discards/second: 0.00
• Total operations/second: 6351.28
• We see the performance increased by 90%! I got interested to see if the BATCHSQL parameter by itself could make the difference, so
I removed the INSERTAPPEND parameter (which only influences the inserts anyway). Here are the results after more than 10 minutes.
• *** Hourly statistics since 2012-04-16 14:00:00 ***
• Total inserts/second: 2402.21
• Total updates/second: 2185.24
• Total deletes/second: 1742.43
• Total discards/second: 0.00
• Total operations/second: 6329.88
• Yes, it seems that in certain situations my client's system benefits mostly from the BATCHSQL parameter. For those who don't know: "in
BATCHSQL mode, Replicat organizes similar SQL statements into batches within a memory queue, and then it applies each batch in one
database operation. A batch contains SQL statements that affect the same table, operation type (insert, update, or delete), and column
list."
QUESTIONS?
APPENDIX

GoldenGate Supported Topologies: typical use cases
• Unidirectional: Reporting Instance
• Bi-directional: Instant Failover, Active-Active
• Peer-to-Peer: Load Balancing, High Availability
• Broadcast: Data Distribution
• Consolidation: Data Warehouse
• Cascading: Scalability, Database Tiering
GoldenGate Supported Databases
Oracle GoldenGate for non-Oracle databases. Supported non-Oracle databases include:
• IBM DB2 on Windows, UNIX and Linux
• Microsoft SQL Server 2000, 2005, 2008
• Sybase on Windows, UNIX and Linux
• Teradata on Windows, UNIX and Linux
• MySQL on Windows, UNIX and Linux
• TimesTen on Windows and Linux (delivery only)
Goldengate Replication & component
GoldenGate replication consists of three basic utilities: Extract, Replicat and Data Pump. In addition, all three of these utilities access trail files, which are flat files that contain formatted GoldenGate data.
• An Extract group mines a database's redo logs and writes information about the changes it finds into trail files. Extract is functionally similar to the Streams "capture" process.
• A Replicat group reads information from trail files and then writes those changes into database tables. Replicat is functionally similar to the Streams "apply" process.
• A Data Pump group copies information between trail files. A Data Pump is functionally similar to the Streams "propagate" process.
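As a sketch of how the three utilities above are wired together, the corresponding groups are typically created from GGSCI along these lines (the group names and trail paths here are illustrative placeholders):

```
-- On the source: a log-mining Extract and its local trail
GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1

-- A Data Pump reading the local trail and writing a remote trail
GGSCI> ADD EXTRACT pmp1, EXTTRAILSOURCE ./dirdat/lt
GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT pmp1

-- On the target: a Replicat reading the remote trail
GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/rt
```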
GoldenGate Classic Capture
[diagram: Extract reads the Online Redo/Archive Logs of the Source Database and writes to a Local Trail, which is read by a Data Pump]
GoldenGate Integrated Capture
[diagram: a LogMiner server reads the Online Redo/Archive Logs of the Source Database and passes LCRs to Extract, which writes to a Local Trail read by a Data Pump]
GoldenGate Downstream Capture
• Real-Time Downstream Mode
GoldenGate Bi-Directional Replication
• Also known as Active-Active Replication
[diagram: on each of Server A and Server B, Capture reads the local Online Redo Logs into a Local Trail; a Data Pump ships it to a Remote Trail on the peer server, where Replicat applies the changes]
GoldenGate Supported Data Types
• The following data types are supported for both classic and integrated capture:
– NUMBER
– BINARY_FLOAT
– BINARY_DOUBLE
– CHAR
– VARCHAR2
– LONG
– NCHAR
– NVARCHAR2
– RAW
– LONG RAW
– DATE
– TIMESTAMP
GoldenGate Supported Data Types (continued)
• There is limited support in classic capture for the following data types:
– INTERVAL DAY
– INTERVAL YEAR
– TIMESTAMP WITH TIME ZONE
– TIMESTAMP WITH LOCAL TIME ZONE
• The following data types are not supported:
– Abstract data types with scalars, LOBs, VARRAYs, nested tables, REFs
– ANYDATA
– ANYDATASET
– ANYTYPE
– BFILE
– MLSLABEL
– ORDDICOM
– TIMEZONE_ABBR
– URITYPE
– UROWID
GoldenGate Restrictions
• Neither capture method supports:
– Database Replay
– EXTERNAL tables
– Materialized views with ROWID
• Classic capture does not support:
– IOT mapping tables
– Key-compressed IOTs
– XMLType tables stored as XML Object-Relational
– Distributed transactions (XA and PDML distributed transactions)
– Capture from OLTP-compressed tables
– Capture from compressed tablespaces
– Exadata Hybrid Columnar Compression (EHCC)
GoldenGate Oracle-Reserved Schemas
The following schema names are reserved by Oracle and should not be configured for GoldenGate replication:
$AURORA, $JIS, $ORB, $UNAUTHENTICATED, $UTILITY, ANONYMOUS, AURORA, CTXSYS, DBSNMP, DMSYS, DSSYS, EXFSYS, MDSYS, ODM, ODM_MTR, OLAPSYS, ORDPLUGINS, ORDSYS, OSE$HTTP$ADMIN, OUTLN, PERFSTAT, PUBLIC, REPADMIN, SYS, SYSMAN, SYSTEM, TRACESVR, WKPROXY, WKSYS, WMSYS, XDB
GoldenGate RAC Support
• RAC support has some limitations in classic capture mode:
– Extract can only run against one instance
– If an instance fails:
• Manager must be stopped on the failed node
• Manager and Extract must be started on a surviving node
– Failover can be configured in Oracle Grid Infrastructure
– Additional archive log switching may be required in archive log mode
– Before shutting down the Extract process:
• Insert a dummy record into a source table
• Switch log files on all nodes
– Additional configuration is required to access the ASM instance
• Shared storage for trails can be:
– OCFS
– ACFS
– DBFS
– There is no mention of NFS in the documentation
• 20. GoldenGate trail file • Each record in the trail file contains the following information: – The operation type, such as an insert, update, or delete – The transaction indicator (TransInd): 00 beginning, 01 middle, 02 end, or 03 whole of transaction – The before or after indicator (BeforeAfter) for update operations – The commit timestamp – The time that the change was written to the GoldenGate file – The type of database operation – The length of the record – The Relative Byte Address (RBA) within the GoldenGate file – The schema and table name
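The TransInd and BeforeAfter indicator bytes listed above can be decoded mechanically. A minimal illustrative sketch (this is not GoldenGate code; the real trail record layout is proprietary and binary, and the value tables below simply restate the slide):

```python
# Decode two of the trail record header fields described on this slide.
# The mappings come from the slide text; 'B'/'A' are the usual ASCII
# codes shown by Logdump for before/after images.

TRANS_IND = {0x00: "begin", 0x01: "middle", 0x02: "end", 0x03: "whole"}
BEFORE_AFTER = {0x42: "before", 0x41: "after"}  # 'B' / 'A'

def describe_header(trans_ind: int, before_after: int) -> str:
    """Return a human-readable summary of the two indicator bytes."""
    return (f"transaction position: {TRANS_IND[trans_ind]}, "
            f"image: {BEFORE_AFTER[before_after]}")

# TransInd x03 / BeforeAfter x41 is what the Logdump output later in
# this deck shows for a single-operation (whole) transaction.
print(describe_header(0x03, 0x41))
```

This matches the `TransInd : . (x03)` and `BeforeAfter: A (x41)` header fields visible in the Logdump examples on the later slides.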
  • 22. How Oracle GoldenGate Works Modular De-Coupled Architecture LAN/WAN Internet TCP/IP Bi-directional Capture Trail Pump Delivery Trail Source Database(s) Target Database(s) Capture: committed transactions are captured (and can be filtered) as they occur by reading the transaction logs. Trail: stages and queues data for routing. Pump: distributes data for routing to target(s). Route: data is compressed, encrypted for routing to target(s). Delivery: applies data with transaction integrity, transforming the data as required.
• 29. • Capture, Pump, and Delivery save positions to a checkpoint file so they can recover in case of failure GoldenGate Checkpointing Capture Pump Delivery Commit Ordered Source Trail Commit Ordered Target Trail Source Database Target Database Begin, TX 1 Insert, TX 1 Begin, TX 2 Update, TX 1 Insert, TX 2 Commit, TX 2 Begin, TX 3 Insert, TX 3 Begin, TX 4 Commit, TX 3 Delete, TX 4 Begin, TX 2 Insert, TX 2 Commit, TX 2 Begin, TX 3 Insert, TX 3 Commit, TX 3 Begin, TX 2 Insert, TX 2 Commit, TX 2 Start of Oldest Open (Uncommitted) Transaction Current Read Position Capture Checkpoint Current Write Position Current Read Position Pump Checkpoint Current Write Position Current Read Position Delivery Checkpoint
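The checkpointing idea above can be sketched in a few lines: each process periodically persists its current read/write positions so that after a crash it resumes where it left off. GoldenGate stores this in binary checkpoint files (and optionally a checkpoint table); the JSON file here is only an illustration of the mechanism, not the real format.

```python
# Illustrative sketch of process checkpointing: save read/write
# positions durably, and reload them on restart to resume processing.

import json, os, tempfile

class Checkpoint:
    def __init__(self, path):
        self.path = path

    def save(self, read_seqno, read_rba, write_seqno, write_rba):
        # Persist current positions (trail sequence number + RBA).
        with open(self.path, "w") as f:
            json.dump({"read": [read_seqno, read_rba],
                       "write": [write_seqno, write_rba]}, f)

    def load(self):
        # On a fresh start there is no checkpoint yet.
        if not os.path.exists(self.path):
            return {"read": [0, 0], "write": [0, 0]}
        with open(self.path) as f:
            return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "pump.ckpt")
ckpt = Checkpoint(path)
ckpt.save(3, 7309910, 5, 1024)   # positions noted mid-run
print(ckpt.load()["read"])       # after a restart: resume from here
```

The same pattern explains why a failed Pump or Replicat can be restarted without reprocessing the whole trail.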
  • 32. Structure of checkpoint table • SQL> desc ggate.ggschkpt • Name Null? Type • ----------------------------------------- -------- ---------------------------- • GROUP_NAME NOT NULL VARCHAR2(8) • GROUP_KEY NOT NULL NUMBER(19) • SEQNO NUMBER(10) • RBA NOT NULL NUMBER(19) • AUDIT_TS VARCHAR2(29) • CREATE_TS NOT NULL DATE • LAST_UPDATE_TS NOT NULL DATE • CURRENT_DIR NOT NULL VARCHAR2(255) • LOG_CSN VARCHAR2(129) • LOG_XID VARCHAR2(129) • LOG_CMPLT_CSN VARCHAR2(129) • LOG_CMPLT_XIDS VARCHAR2(2000) • VERSION NUMBER(3)
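In practice a checkpoint table like the one described above is created from GGSCI rather than with plain SQL, and is usually registered in the GLOBALS file so every Replicat picks it up by default. A sketch, assuming the gguser credentials and the ggate.ggschkpt table name used elsewhere in this deck:

```shell
GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD CHECKPOINTTABLE ggate.ggschkpt

# In the GLOBALS file (in the GoldenGate home), so that new Replicat
# groups use this checkpoint table automatically:
CHECKPOINTTABLE ggate.ggschkpt
```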
• 33. GoldenGate download and installation • Go to https://edelivery.oracle.com/ • Accept the terms and conditions after signing in • Select the product pack and platform • Product pack: Oracle Fusion Middleware • Platform: AIX or Oracle Solaris (as per your requirement) • Click Go • Select the GoldenGate management pack for Oracle Database • Download it to your desktop or preferred location • Once the download is finished, ftp or WinSCP this tar file to the GoldenGate home directory on your server (let's say /ogg/11.2) • Create a .profile for this GoldenGate installation • Untar the file in the home directory: tar -xvf abc.tar • Run ./ggsci • You will be taken to the GGSCI interface • GGSCI> create subdirs • GGSCI> start mgr
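The unpack-and-start steps above can be sketched as a short session (file and directory names are the placeholders from the slide, and the GGSCI steps assume the software is already downloaded):

```shell
cd /ogg/11.2                 # GoldenGate home from the slide
tar -xvf abc.tar             # note: a plain ASCII hyphen, not an en dash
./ggsci                      # opens the GGSCI command interface

# Then, inside GGSCI:
#   GGSCI> CREATE SUBDIRS    -- creates dirdat, dirprm, dirrpt, ...
#   GGSCI> START MGR         -- starts the Manager process
```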
• 34. Prerequisites • /ogg/11.2 should have at least 500 GB of space • Ask the DBA to create the GoldenGate user in the database; let's say gguser is the GoldenGate user • gguser should have CONNECT/RESOURCE and SELECT privileges in the source and DBA privileges in the target if we are doing DML replication • gguser should have the DBA role in both source and target if we are using both DDL and DML replication. • SQL> create tablespace gg_data 2 datafile '/u02/oradata/gg_data01.dbf' size 200m; • SQL> create user gguser identified by gguser 2 default tablespace gg_data 3 temporary tablespace temp; • User created.
• 35. • SQL> grant connect,resource to gguser; • Grant succeeded. • SQL> grant select any dictionary, select any table to gguser; • Grant succeeded. • SQL> grant create table to gguser; • Grant succeeded. • SQL> grant flashback any table to gguser; • Grant succeeded. • SQL> grant execute on dbms_flashback to gguser; • Grant succeeded. • SQL> grant execute on utl_file to gguser; • Grant succeeded. • Check the login • GGSCI> dblogin userid gguser, password gguser
• 36. Supplemental Logging • Supplemental logging is critical to healthy replication, especially for tables with update/delete changes. It can be added at the PK, UK, KEYCOLS, or ALL-columns level, for both new and existing objects.
• 37. Supplemental Logging • For new objects, DDLOPTIONS ADDTRANDATA (with GETREPLICATES and REPORT) can add supplemental logging on the source DB automatically; for replicated objects, Replicat adds the supplemental logging on the target. The logging level can be PK, UK, KEYCOLS, or ALL.
  • 38. Supplemental Logging – Required at the database level (Minimum Level) • ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; • Minimal Supplemental Logging “logs the minimal amount of information needed for LogMiner to identify, group, and merge the redo operations associated with DML changes” – Identification key logging (PK, UK, FK) • Table Level • Database Level • Method – SQL e.g. ALTER TABLE <> ADD SUPPLEMENTAL LOG DATA … – GoldenGate e.g. ADD TRANDATA <Table Name>
• 39. Supplemental Logging • Source timeline: t1 create table; t2 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS; t3 create PK index; t4 add supplemental log data (id) (temp); t5 drop SUPPLEMENTAL LOG DATA (ALL); t6 add supplemental log data (id); t7 drop supplemental log data (id) (temp) • Target timeline: t1 create table; t2 create PK index; t3 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS; t4 add supplemental log data (id) (temp); t5 drop supplemental log data (id) (temp) • 1. ADDTRANDATA on source for new objects. 2. Monitor replicated objects on target and add supplemental logging manually before any DML change
• 40. Add trandata • For Oracle, ADD TRANDATA by default enables table-level supplemental logging. The supplemental log group includes one of the following sets of columns, in the listed order of priority, depending on what is defined on the table: 1. Primary key 2. First unique key alphanumerically with no virtual columns, no UDTs, no function-based columns, and no nullable columns 3. First unique key alphanumerically with no virtual columns, no UDTs, or no function-based columns, but can include nullable columns 4. If none of the preceding key types exist (even though there might be other types of keys defined on the table), Oracle GoldenGate constructs a pseudo key of all columns that the database allows to be used in a unique key, excluding virtual columns, UDTs, function-based columns, and any columns that are explicitly excluded from the Oracle GoldenGate configuration. • The command issues an ALTER TABLE command with an ADD SUPPLEMENTAL LOG DATA clause that is appropriate for the type of unique constraint (or lack of one) that is defined for the table. • If you are using the Oracle GoldenGate DDL replication feature, use the ADD SCHEMATRANDATA command to log the required supplemental data. It is possible to use ADD TRANDATA when DDL support is enabled, but only if you can guarantee one of the following: ● You can stop DML activity on any and all tables before users or applications perform DDL on them. ● You cannot stop DML activity before the DDL occurs, but you can guarantee that:
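The four-step priority above is easy to get wrong when reasoning about which columns ADD TRANDATA will log, so here is an illustrative sketch of the selection logic. The table-metadata shape is invented for the example; GoldenGate itself derives all of this from the data dictionary.

```python
# Sketch of ADD TRANDATA's key-selection priority (per the slide):
# 1. primary key; 2. first unique key (alphanumeric order) with no
# virtual/UDT/function-based/nullable columns; 3. same but nullable
# columns allowed; 4. pseudo key of all loggable columns.

def choose_log_group(table):
    if table.get("primary_key"):                      # 1. primary key
        return table["primary_key"]

    def usable(uk, allow_nullable):
        bad = uk["virtual"] or uk["udt"] or uk["function_based"]
        return not bad and (allow_nullable or not uk["nullable"])

    for allow_nullable in (False, True):              # pass 2, then 3
        for uk in sorted(table.get("unique_keys", []),
                         key=lambda k: k["name"]):    # alphanumeric order
            if usable(uk, allow_nullable):
                return uk["columns"]

    return table["loggable_columns"]                  # 4. pseudo key

t = {"primary_key": None,
     "unique_keys": [
         {"name": "uk2", "columns": ["b"], "virtual": False, "udt": False,
          "function_based": False, "nullable": True},
         {"name": "uk1", "columns": ["a"], "virtual": False, "udt": False,
          "function_based": False, "nullable": False}],
     "loggable_columns": ["a", "b", "c"]}
print(choose_log_group(t))   # uk1 wins: first alphanumerically, not nullable
```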
• 41. Project overview • Source: PMSPRD server. DML replication runs over WAN/TCP from the PMSPRD source to the PMSPRD2 target server, and also to a DR server and a reporting target.
  • 42. Parameter files • Manager process • port 15000 • dynamicportlist 15010-15020 • PURGEOLDEXTRACTS ./dirdat/pa*, USECHECKPOINTS, MINKEEPDAYS 2 • AUTOSTART extract E_actPRD • AUTOSTART extract P_actPRD • autorestart extract P_actPRD, retries 6, waitminutes 25 • autorestart extract E_actPRD, retries 6, waitminutes 25
  • 43. Creating parameter file • Extract in source • EXTRACT E_actPRD • SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8") • SETENV (ORACLE_HOME = "/u01/app/oracle/product/10.2.0.4a") • SETENV (ORACLE_SID = "pmsp11") • EXTTRAIL ./dirdat/pa • DISCARDFILE ./dirrpt/E_actPRD.dsc, APPEND • DBOPTIONS ALLOWUNUSEDCOLUMN • USERID gguser, PASSWORD gguser • TABLE sa.*; • -- • -- Enter the following statements from GGSCI before starting this and other processes • -- • -- delete extract E_actPRD • -- add extract E_actPRD, tranlog begin 2011-07-22 08:30:00; • -- add exttrail ./dirdat/pa, extract E_actPRD, megabytes 100
  • 44. DR Pump parameter • EXTRACT P_actDR • SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8") • SETENV (ORACLE_HOME = "/u01/app/oracle/product/10.2.0.4a") • SETENV (ORACLE_SID = "pmsp11") • RMTHOST cmfalgaqdarst01.comfin.ge.com, mgrport 15000 • RMTTRAIL ./dirdat/pa • PASSTHRU • TABLE sa.*; • -- • -- Enter the following statements from GGSCI before starting this process if DB is newly created • -- • -- delete extract P_actDR • -- add extract P_actDR, exttrailsource ./dirdat/pa • -- add rmttrail ./dirdat/pa, extract P_actDR, megabytes 20
  • 45. • Pump in source • EXTRACT P_actPRD • SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8") • SETENV (ORACLE_HOME = "/u01/app/oracle/product/10.2.0.4a") • SETENV (ORACLE_SID = "pmsp11") • RMTHOST cmfciohpdb17.comfin.ge.com, mgrport 15000 • RMTTRAIL ./dirdat/pa • PASSTHRU • TABLE schemaname.*; • -- • -- Enter the following statements from GGSCI before starting this process if DB is newly created • -- • -- delete extract P_actPRD • -- add extract P_actPRD, exttrailsource ./dirdat/pa • -- add rmttrail ./dirdat/pa, extract P_actPRD, megabytes 20
  • 46. • REPLICAT R_rptPRD in reporting db ( same server) • -- • REPLICAT R_rptPRD • -- • -- Environment Variables • SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8") • SETENV (ORACLE_HOME ="/u01/app/oracle/product/10.2.0.4") • SETENV (ORACLE_SID = "pmsr11") • -- • ASSUMETARGETDEFS • DISCARDFILE ./dirrpt/R_rptPRD.dsc, APPEND • USERID gguser, PASSWORD gguser • -- • BATCHSQL • GETTRUNCATES • -- Call DB procedure to populate parent key field in r7270_arhist table in PMSV2, PMSV3, HISV2, HISV3 schemas • --Example • --MAP pmsv2.r7270_arhist, TARGET pmsv2.r7270_arhist, & • --SQLEXEC (SPNAME gguser.find_open_item, ID pklookup, & • --PARAMS (in_iHistKey = r7270_arhist_p, in_sSchema = 'PMSV2')), & • --COLMAP (USEDEFAULTS, parent_key=pklookup.numOut) ; • --COLMAP (USEDEFAULTS, parent_key=pklookup.numOut,ggstamp=@datenow() ); • -- • -- Use COLMAP to convert value '*CON' to 'CONS' in nbr_sequence column of t4001_ubb_asset table in PMSV2 and PMSV3 • -- • --MAP pmsv2.t4001_ubb_asset, TARGET pmsv2.t4001_ubb_asset, & • --COLMAP (USEDEFAULTS, nbr_sequence=@strsub(nbr_sequence, "*CON", "CONS") ); • - • MAP sa.*, TARGET sa.*; • -- Enter the following statements from GGSCI before starting this process • -- • -- delete replicat R_rptPRD • -- add replicat R_rptPRD, exttrail ./dirdat/pa •
• 47. • REPLICAT R_actPRD (Main target) • -- • SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8") • SETENV (ORACLE_HOME ="/u01/app/oracle/product/10.2.0.4a") • SETENV (ORACLE_SID = "pmspr30") • -- • ASSUMETARGETDEFS • -- • DISCARDFILE ./dirrpt/R_actPRD.dsc, APPEND • -- • USERID gguser, PASSWORD gguser • -- • MAP sa.*, TARGET sa.*; • -- • -- Enter the following statements from GGSCI before starting this process • -- • -- delete replicat R_actPRD • -- add replicat R_actPRD, exttrail ./dirdat/pa
• 48. • Replicat in DR • REPLICAT R_actDR • SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8") • SETENV (ORACLE_HOME ="/u01/app/oracle/product/10.2.0.4a") • SETENV (ORACLE_SID = "pmspr40") • -- • ASSUMETARGETDEFS • APPLYNOOPUPDATES • -- • DISCARDFILE ./dirrpt/R_actDR.dsc, APPEND, MEGABYTES 50 • -- • GETTRUNCATES • -- • USERID gguser, PASSWORD gguser • -- • MAP sa.*, TARGET sa.*; • -- • -- Enter the following statements from GGSCI before starting this process • -- • -- delete replicat R_actDR • -- add replicat R_actDR, exttrail ./dirdat/pa
• 49. Logdump utility • Logdump (the Oracle GoldenGate Log File Dump Utility) is a useful Oracle GoldenGate utility that enables you to search for, filter, view, and save data that is stored in a GoldenGate trail or extract file. The following summarizes the useful Logdump commands.
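The command summary table referenced on this slide did not survive in this copy of the deck. The list below is reconstructed only from the commands actually used on the following slides:

```text
OPEN <trail file>        open a trail or extract file
GHDR ON                  show the record header with each record
COUNT                    count the records in the file
N (NEXT)                 move to the next record
SKIP <n>                 skip n records
POS FIRST | POS <rba>    go to the first record / a specific RBA
SFTS <timestamp>         search for a timestamp
FILTER INCLUDE ...       filter by filename, AuditRBA, iotype, ...
FILTER MATCH ALL         require all filter conditions to match
FILTER CLEAR             remove all filters
LOG TO <file>.txt        log the session output to a file
```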
  • 50. Logdump continue • 1. Find out which local (exttrail) and remote trail (rmttrail) files the data pump is reading and writing to. GGSCI> info <pump>, detail 2. In Logdump, issue the following series of commands to open the remote trail and find the last record. Logdump> open <trail file from step 1> Logdump> ghdr on Logdump> count Logdump> skip (<total count>-1) Logdump> n (for next) 3. Write down the Table name, AuditPos and AuditRBA of the last record. (Do not be confused by the Logdump field names: AuditRBA shows the Oracle redo log sequence number, and AuditPos shows the relative byte address of the record.) • The following example shows that the operation was captured from Oracle's redo log sequence number 1941 at RBA 1965756. Logdump 12 >n ___________________________________________________________________ Hdr-Ind : E (x45) Partition : . (x04) UndoFlag : . (x00) BeforeAfter: A (x41) RecLength : 12383 (x305f) IO Time : 2005/11/17 16:02:38.295.847 IOType : 5 (x05) OrigNode : 255 (xff) TransInd : . (x03) FormatType : R (x52) SyskeyLen : 0 (x00) Incomplete : . (x00) AuditRBA : 1941 AuditPos : 1965756 Continued : N (x00) RecCount : 1 (x01) 2005/11/17 16:02:38.295.847 Insert Len 12383 RBA 7670707 Name: LLI.S_EVT_ACT After Image: Partition 4 0000 0009 0000 0005 3532 3432 3900 0100 1500 0032 | ........52429......2 3030 352D 3131 2D31 373A 3136 3A30 313A 3531 0002 | 005-11-17:16:01:51.. 000A 0000 0006 5445 5354 2031 0003 0015 0000 3230 | ......TEST 1......20 3035 2D31 312D 3137 3A31 363A 3031 3A35 3100 0400 | 05-11-17:16:01:51... 0A00 0000 0654 4553 5420 3200 0500 0500 0000 0130 | .....TEST 2........0 0006 000A 0000 0000 0000 0000 0000 0007 0005 0000 | .................... 0001 3000 0800 0B00 0000 0778 2035 3234 3239 0009 | ..0........x 52429..
• 51. Logdump (continued) • 4. In Logdump, issue the following series of commands to open the local trail file and find the record that corresponds to the one you found in Step 2. Logdump>open <exttrail file> • Logdump>ghdr on Logdump>filter include AuditRBA <number> Logdump>filter include filename <table_name> Logdump>filter match all Logdump>n • Note: The Logdump filter command to search for a record's redo RBA is "AuditRBA", even though the trail record's header calls it "AuditPos". This is an anomaly of the Logdump program. • 5. Verify that the operation is from the same Oracle redo log sequence number as the one you recorded in step 3 (AuditRBA as shown in the trail header), as there is no way to filter for this in Logdump. Hdr-Ind : E (x45) Partition : . (x04) UndoFlag : . (x00) BeforeAfter: A (x41) RecLength : 12383 (x305f) IO Time : 2005/11/17 16:01:51.000.000 IOType : 5 (x05) OrigNode : 255 (xff) TransInd : . (x03) FormatType : R (x52) SyskeyLen : 0 (x00) Incomplete : . (x00) AuditRBA : 1941 AuditPos : 1965756 <-- same as the one in step 2. Continued : N (x00) RecCount : 1 (x01) 2005/11/17 16:01:51.000.000 Insert Len 12383 RBA 7297435 Name: LLI.S_EVT_ACT After Image: Partition 4 0000 0009 0000 0005 3532 3432 3900 0100 1500 0032 | ........52429......2 3030 352D 3131 2D31 373A 3136 3A30 313A 3531 0002 | 005-11-17:16:01:51.. 000A 0000 0006 5445 5354 2031 0003 0015 0000 3230 | ......TEST 1......20 3035 2D31 312D 3137 3A31 363A 3031 3A35 3100 0400 | 05-11-17:16:01:51... 0A00 0000 0654 4553 5420 3200 0500 0500 0000 0130 | .....TEST 2........0 0006 000A 0000 0000 0000 0000 0000 0007 0005 0000 | .................... 0001 3000 0800 0B00 0000 0778 2035 3234 3239 0009 | ..0........x 52429.. Filtering suppressed 3289 records 6. Compare the output of the record (data part) from the remote trail with that of the record from the local trail, and make sure it is the same. This shows they are the same data record.
  • 52. Logdump continue • 7. In Logdump, issue the following commands to clear the filter and find the next record in the local trail after the one found in Step 4. Logdump searches based on the GoldenGate record's RBA within the trail file (the "extrba"). Write down the GoldenGate RBA. In the following example, it would be 7309910. Logdump>filter clear Logdump>n ___________________________________________________________________ Hdr-Ind : E (x45) Partition : . (x04) UndoFlag : . (x00) BeforeAfter: A (x41) RecLength : 1005 (x03ed) IO Time : 2005/11/17 16:01:52.000.000 IOType : 5 (x05) OrigNode : 255 (xff) TransInd : . (x03) FormatType : R (x52) SyskeyLen : 0 (x00) Incomplete : . (x00) AuditRBA : 1941 AuditPos : 1979280 Continued : N (x00) RecCount : 1 (x01) 2005/11/17 16:01:52.000.000 Insert Len 1005 RBA 7309910 Name: LLI.S_EVT_ACT_FNX After Image: Partition 4 0000 000A 0000 0006 3130 3438 3537 0001 0015 0000 | ........104857...... 3230 3035 2D31 312D 3137 3A31 363A 3031 3A35 3100 | 2005-11-17:16:01:51. 0200 0A00 0000 0654 4553 5420 3100 0300 1500 0032 | .......TEST 1......2 3030 352D 3131 2D31 373A 3136 3A30 313A 3531 0004 | 005-11-17:16:01:51.. 000A 0000 0006 5445 5354 2032 0005 000A 0000 0000 | ......TEST 2........ 0000 0000 0000 0006 0005 0000 0001 3000 0700 0D00 | ..............0..... 0000 0950 4944 3130 3438 3537 0008 0003 0000 4E00 | ...PID104857......N. • 8. In GGSCI, alter the data pump to start processing at the trail sequence number (extseqno) and relative byte address (extrba) noted in Step 7. The trail sequence number is the local trail that you opened in Logdump in step 4. GGSCI> alter extract <pump>, extseqno <trail sequence no>, extrba <trail rba> For the previous example (assuming the trail sequence number is 000003), the syntax would be: alter extract pmpsbl, extseqno 3, extrba 7309910 9. Start the data pump.
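When working through steps 3, 5, and 7 above, the AuditRBA/AuditPos pair has to be copied out of the Logdump header by hand. A small helper sketch for pulling them out of captured Logdump output (the parsing is my own; the sample string is taken from the header shown earlier in this deck):

```python
# Extract AuditRBA (Oracle redo log sequence number) and AuditPos
# (relative byte address within the redo log) from a Logdump header.

import re

def parse_audit_position(header_text):
    m = re.search(r"AuditRBA\s*:\s*(\d+)\s+AuditPos\s*:\s*(\d+)",
                  header_text)
    if not m:
        raise ValueError("no AuditRBA/AuditPos in header")
    return int(m.group(1)), int(m.group(2))

sample = "AuditRBA : 1941 AuditPos : 1965756 Continued : N (x00)"
print(parse_audit_position(sample))
```

This makes it easy to compare the local and remote trail records programmatically instead of eyeballing the two hex dumps.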
• 53. Logdump: other commands • Search for a timestamp in the trail file: Logdump 6 >sfts 2013-08-04 searches for the timestamp '2013-08-04' • >filter include filename FIN_AUDIT.FIN_GL_ATTR filters out all records that do not contain the specified table name, narrowing the set of records that subsequent commands operate on until a ">filter clear" is issued • >pos FIRST goes to the first record in the file • >pos <rba> goes to a specific RBA in the trail file • >log to <filename>.txt logs the session output to a file
• 54. Filtering Records (Logdump continued) • You can do some pretty fancy stuff with Logdump filtering. A whole suite of commands is set aside for this. We can filter on just about anything that exists in the trail file, such as process name, RBA, record length, record type, even a string! The following example shows the required syntax to filter on DELETE operations. Note that Logdump reports how many records have been excluded by the filter. Logdump 52 >filter include iotype delete Logdump 53 >n 2010/11/09 13:31:40.000.000 Delete Len 17 RBA 5863 Name: SRC.USERS Before Image: Partition 4 G b 0000 000d 0000 0009 414e 4f4e 594d 4f55 53 | ........ANONYMOUS Filtering suppressed 42 records
• 55. Conditions for Extract abend • Manager on the source is stopped • Database is down • Server is down (patch or reboot) • An object referenced by Extract is created and then dropped within a few seconds • LOB objects and long binary values in columns • Redo logs have switched but archive logging is not enabled in the database
• 56. Conditions for Pump abend • Database is down • Source server reboot or patch • Target server reboot or patch • Conditions for Replicat abend • Source table does not exist in the target database • Target database is down for cold backup • Target server is down • Source table structure does not match the target table • Discard file size exceeds the predefined limit • Column positions in the target differ from the source database • Replicat cannot find the current trail position in the trail directory
• 57. Check GoldenGate replication performance • GGSCI> stats <group_name>, totalsonly * • GGSCI> lag <group_name> • GGSCI> send <pump_name>, gettcpstats • GGSCI> send <group_name>, showch • GGSCI> stats <group_name>, totalsonly *.* • Ideal conditions for replication • Both databases should be in sync. • ASSUMETARGETDEFS or SOURCEDEFS should be used in the Replicat parameter file. • All conversion should be performed on the Replicat side.
• 58. Cons of replication • DDL (data definition language) replication has yet to mature and can be an issue. • Supplemental logging causes extra redo log generation in the log storage. • Licensing cost is higher compared to other replication products. • Configuring GoldenGate in HA (High Availability) mode needs extra VIPs, which can be challenging in a RAC environment. • Integration of GoldenGate with monitoring tools like OEM (Oracle Enterprise Manager) is still not available, and most DBAs monitor it with custom scripts. • File-system dependent • Application synchronization must be designed in the application • Pros of replication • Storage-hardware independent • One solution for all applications/data on a server • Failover/failback may be integrated and automated • Read access for the second copy, supporting horizontal scaling • Low cost for software • Easy to set up • Easy to manage, especially in resuming and/or recovering activities • Speed (faster than Streams)
• 59. Batchsql • Why BATCHSQL: In default mode the Replicat process applies SQL to the target database one statement at a time. This often causes a performance bottleneck where the Replicat process cannot apply the changes quickly enough compared to the rate at which the Extract process delivers the data. Despite configuring additional Replicat processes, performance may still be a problem. • Use of BATCHSQL: GoldenGate has addressed this issue through the BATCHSQL Replicat configuration parameter. As the name implies, BATCHSQL organizes similar SQLs into batches and applies them all at once. The batches are assembled into arrays in a memory queue on the target database server, ready to be applied. Similar SQL statements are those that perform a specific operation type (insert, update, or delete) against the same target table, having the same column list. For example, multiple inserts into table A would be in a different batch from inserts into table B, as would updates on table A or B. Furthermore, referential constraints are considered by GoldenGate: dependent transactions in different batches are executed first, which determines the order in which batches are executed. Even with referential integrity constraints, BATCHSQL can increase Replicat performance significantly. BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000
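The batching rule above (same table, same operation type, same column list) can be sketched in a few lines. This is only an illustration of the grouping idea; real BATCHSQL additionally orders batches by referential dependencies and applies each batch as one array operation.

```python
# Group change records into batches keyed by (table, operation type,
# column list), mimicking how BATCHSQL decides which statements are
# "similar" enough to apply together.

from collections import OrderedDict

def batch_operations(ops):
    batches = OrderedDict()   # preserves first-seen batch order
    for op in ops:
        key = (op["table"], op["type"], tuple(sorted(op["row"])))
        batches.setdefault(key, []).append(op["row"])
    return batches

ops = [
    {"table": "A", "type": "insert", "row": {"id": 1, "v": "x"}},
    {"table": "B", "type": "insert", "row": {"id": 9}},
    {"table": "A", "type": "insert", "row": {"id": 2, "v": "y"}},
]
batches = batch_operations(ops)
# Two inserts into A share a batch; the insert into B gets its own.
print([(tb, op, len(rows)) for (tb, op, _), rows in batches.items()])
```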
• 60. Example of BATCHSQL performance • GoldenGate Replicat performance • GGSCI> stats r_xxxx TOTALSONLY * REPORTRATE sec • You get the detailed statistics of operations per second since the start of the process, since the beginning of the day, and over the last hour, for all tables replicated by that process. For instance: • *** Hourly statistics since 2012-04-16 13:00:00 *** • Total inserts/second: 1397.76 • Total updates/second: 1307.46 • Total deletes/second: 991.50 • Total discards/second: 0.00 • Total operations/second: 3696.71 • So here we can see it is doing a bit more than 3500 operations per second, divided quite evenly between inserts, updates, and deletes. As GoldenGate is usually used for real-time replication and there are no big operations, the client does not use performance-related parameters. But this time I decided to play with them. After adding both BATCHSQL and INSERTAPPEND to the parameter file of the Replicat process, the results were the following (after more than 10 minutes running), and performance was still increasing: • *** Hourly statistics since 2012-04-16 13:43:17 *** • Total inserts/second: 2174.92 • Total updates/second: 2540.20 • Total deletes/second: 1636.16 • Total discards/second: 0.00 • Total operations/second: 6351.28 • We see total throughput increased by about 70%! I got interested in whether the BATCHSQL parameter by itself could make the difference, so I removed the INSERTAPPEND parameter (which only influences the inserts anyway). Here are the results after more than 10 minutes: • *** Hourly statistics since 2012-04-16 14:00:00 *** • Total inserts/second: 2402.21 • Total updates/second: 2185.24 • Total deletes/second: 1742.43 • Total discards/second: 0.00 • Total operations/second: 6329.88 • Yes, it seems that in certain situations the client's system benefits mostly from the BATCHSQL parameter.
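The improvement quoted above can be checked with one line of arithmetic on the two total-operations figures:

```python
# Throughput improvement from the hourly statistics on this slide.
before, after = 3696.71, 6351.28
pct = (after - before) / before * 100
print(round(pct, 1))   # percentage gain in total operations/second
```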
For those who don't know, "in BATCHSQL mode, Replicat organizes similar SQL statements into batches within a memory queue, and then it applies each batch in one database operation. A batch contains SQL statements that affect the same table, operation type (insert, update, or delete), and column list."