1.
GOLDENGATE TRAINING
By Vipin Mishra
GoldenGate expert
2.
Training Overview
Topics
Overview of GoldenGate
GoldenGate Replication & Components
GoldenGate Director
GoldenGate Veridata
GoldenGate vs Oracle Streams
GoldenGate vs Data Guard
GoldenGate trail file
How Oracle GoldenGate Works
GoldenGate Checkpointing
GoldenGate download and installation
Prerequisites of installation
Database Privilege for GG user
Supplemental logging
Trandata
GoldenGate Initial data Load
GoldenGate DML Replication
DDL Replication - Overview
GoldenGate parameter file
Logdump Utility
CCM and ICM
GoldenGate Upgrade
GoldenGate troubleshooting
GoldenGate performance commands
BATCHSQL
GoldenGate functions
GoldenGate abends, bugs, and fixes
GoldenGate failover document and DR plan
GoldenGate reporting through heartbeat
Custom Data Manipulation Requirements for the Database Replicat Process
Additional topics for discussion
3.
Overview of GoldenGate
• Oracle GoldenGate is a heterogeneous replication solution
• Oracle acquired GoldenGate Software Inc. (founded in 1995 by Eric Fish and Todd Davidson), a leading
provider of real-time data integration solutions, on September 3, 2009.
• GoldenGate supports:
– Zero Downtime Upgrade and Migration
– System Integration / Data Synchronization
– Query and Report offloading
– Real-time Data distribution
– Real-time Data Warehousing
– Live standby database
– Active-active high availability
• Controversial replacement for Oracle Streams
• Oracle GoldenGate is Oracle’s strategic solution for real-time data integration.
• GoldenGate software enables mission critical systems to have continuous availability and access to real-time
data.
• It offers a fast and robust solution for replicating transactional data between operational and analytical
systems.
• Today there are more than 500 customers around the world using GoldenGate technology for over 4000
solutions.
5.
GoldenGate Supported Databases
Oracle GoldenGate for Non-Oracle Databases
Supported non-Oracle databases include:
IBM DB2 on Windows, UNIX and Linux
Microsoft SQL Server 2000, 2005, 2008
Sybase on Windows, UNIX and Linux
Teradata on Windows, UNIX and Linux
MySQL on Windows, UNIX and Linux
TimesTen on Windows and Linux (delivery only)
8.
GoldenGate Replication & Components
GoldenGate replication consists of three basic processes: Extract, Replicat, and Data Pump. All three
access trail files, which are flat files that contain formatted GoldenGate data.
An Extract group mines a database's redo logs and writes information about the changes it finds into
trail files. Extract is functionally similar to the Streams "capture" process.
A Replicat group reads information from trail files and then writes those changes into database tables.
Replicat is functionally similar to the Streams "apply" process.
A Data Pump group copies information between trail files. Data Pump is functionally similar to the
Streams "propagate" process.
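The three groups are created from GGSCI. A minimal sketch (the group names and two-character trail identifiers are illustrative, and each group also needs a parameter file under dirprm):
GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1
GGSCI> ADD EXTRACT pump1, EXTTRAILSOURCE ./dirdat/lt
GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT pump1
GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/rt
Here ext1 captures from the redo logs into the local trail lt, pump1 ships that trail to the remote trail rt on the target, and rep1 applies it there.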
14.
GoldenGate Bi-Directional Replication
• Also known as Active-Active Replication
(Diagram: Server A and Server B each run Capture, Data Pump, and Replicat. Each Capture reads its server's online redo logs and writes a local trail; each Data Pump ships that trail to a remote trail on the other server, where Replicat applies the changes.)
15.
GoldenGate Supported Data Types
• The following data types are supported for both classic and integrated
capture
– NUMBER
– BINARY_FLOAT
– BINARY_DOUBLE
– CHAR
– VARCHAR2
– LONG
– NCHAR
– NVARCHAR2
– RAW
– LONG RAW
– DATE
– TIMESTAMP
16.
GoldenGate Supported Data Types
• There is limited support in classic capture for the following data types:
– INTERVAL DAY
– INTERVAL YEAR
– TIMESTAMP WITH TIME ZONE
– TIMESTAMP WITH LOCAL TIME ZONE
• The following data types are not supported
– Abstract data types with scalars, LOBs, VARRAYs, nested tables, REFs
– ANYDATA
– ANYDATASET
– ANYTYPE
– BFILE
– MLSLABEL
– ORDDICOM
– TIMEZONE_ABBR
– URITYPE
– UROWID
17.
GoldenGate Restrictions
• Neither capture method supports
– Database replay
– EXTERNAL tables
– Materialized views with ROWID
• Classic capture does not support
– IOT mapping tables
– Key compressed IOTs
– XMLType tables stored as XML Object Relational
– Distributed Transactions
– XA and PDML distributed transactions
– Capture from tables compressed with OLTP table compression
– Capture from compressed tablespaces
– Exadata Hybrid Columnar Compression (EHCC)
18.
GoldenGate Oracle-Reserved Schemas
The following schema names are reserved by Oracle and should not be configured
for GoldenGate replication:
ANONYMOUS, AURORA$JIS$UTILITY$, AURORA$ORB$UNAUTHENTICATED, CTXSYS, DBSNMP, DMSYS,
DSSYS, EXFSYS, MDSYS, ODM, ODM_MTR, OLAPSYS, ORDPLUGINS, ORDSYS, OSE$HTTP$ADMIN,
OUTLN, PERFSTAT, PUBLIC, REPADMIN, SYS, SYSMAN, SYSTEM, TRACESVR, WKPROXY, WKSYS,
WMSYS, XDB
19.
GoldenGate RAC Support
• RAC support has some limitations in classic capture mode
– Extract can only run against one instance
– If an instance fails:
• Manager must be stopped on the failed node
• Manager and Extract must be started on a surviving node
– Failover can be configured in Oracle Grid Infrastructure
– Additional archive log switching may be required in archive log mode
– Before shutting down the Extract process:
• Insert a dummy record into a source table
• Switch log files on all nodes
– Additional configuration is required to access the ASM instance
• Shared storage for trails can be:
– OCFS
– ACFS
– DBFS
– No mention of NFS in the documentation
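A rough sketch of registering a classic-capture Extract on a two-node RAC with ASM (the group name and credentials are illustrative):
GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD EXTRACT ext1, TRANLOG, THREADS 2, BEGIN NOW
In the Extract parameter file, classic capture typically also needs ASM access, for example:
TRANLOGOPTIONS ASMUSER sys@ASM, ASMPASSWORD oracle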
20.
GoldenGate Trail File
Each record in the trail file contains the following information:
• The operation type, such as an insert, update, or delete
• The transaction indicator (TransInd): 00 beginning, 01 middle, 02 end, or 03 whole transaction
• The before or after indicator (BeforeAfter) for update operations
• The commit timestamp
• The time that the change was written to the GoldenGate file
• The type of database operation
• The length of the record
• The Relative Byte Address (RBA) within the GoldenGate file
• The schema and table name
22.
How Oracle GoldenGate Works
Modular De-Coupled Architecture
(Diagram: Capture writes to a source trail; Pump routes the data over TCP/IP across a LAN/WAN or the Internet to a target trail; Delivery applies it to the target database(s). The flow can be configured bi-directionally.)
Capture: committed transactions are captured (and can be filtered) as they occur by reading
the transaction logs.
Trail: stages and queues data for routing.
Pump: distributes data for routing to target(s).
Route: data is compressed, encrypted
for routing to target(s).
Delivery: applies data with transaction integrity,
transforming the data as required.
29.
GoldenGate Checkpointing
• Capture, Pump, and Delivery save positions to a checkpoint file so they can recover in case of failure
(Diagram: the Capture checkpoint tracks the start of the oldest open (uncommitted) transaction, its current read position in the source redo, and its current write position in the commit-ordered source trail; the Pump checkpoint tracks its current read position in the source trail and its write position in the commit-ordered target trail; the Delivery checkpoint tracks its current read position in the target trail.)
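These checkpoint positions can be inspected from GGSCI; a sketch, with illustrative group names:
GGSCI> INFO EXTRACT ext1, SHOWCH
GGSCI> INFO REPLICAT rep1, SHOWCH
SHOWCH lists the read and write checkpoints (redo sequence/RBA for Extract, trail sequence/RBA for Pump and Replicat) that the processes use to restart after a failure.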
32.
Structure of checkpoint table
SQL> desc ggate.ggschkpt
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 GROUP_NAME                                NOT NULL VARCHAR2(8)
 GROUP_KEY                                 NOT NULL NUMBER(19)
 SEQNO                                              NUMBER(10)
 RBA                                       NOT NULL NUMBER(19)
 AUDIT_TS                                           VARCHAR2(29)
 CREATE_TS                                 NOT NULL DATE
 LAST_UPDATE_TS                            NOT NULL DATE
 CURRENT_DIR                               NOT NULL VARCHAR2(255)
 LOG_CSN                                            VARCHAR2(129)
 LOG_XID                                            VARCHAR2(129)
 LOG_CMPLT_CSN                                      VARCHAR2(129)
 LOG_CMPLT_XIDS                                     VARCHAR2(2000)
 VERSION                                            NUMBER(3)
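The checkpoint table itself is created from GGSCI after a database login; a sketch reusing the ggate.ggschkpt name from this slide:
GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD CHECKPOINTTABLE ggate.ggschkpt
To make it the default checkpoint table for all Replicats, the GLOBALS file can carry the line:
CHECKPOINTTABLE ggate.ggschkpt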
33.
GoldenGate download and installation
• Go to https://edelivery.oracle.com/
• Accept the terms and conditions after signing in
• Select the product pack and platform
• Product pack: Oracle Fusion Middleware
• Platform: AIX or Oracle Solaris (as per your requirement)
• Click Go
• Select the Oracle GoldenGate media pack for Oracle Database
• Download it to your desktop or preferred location
• Once the download is finished, FTP or WinSCP the tar/zip file to the GoldenGate home directory on your
server (let's say /ogg/11.2)
• Create a .profile for this GoldenGate installation
• Untar the file in the home directory
• tar -xvf abc.tar
• ./ggsci
• You will be taken to the GGSCI interface
• GGSCI> create subdirs
• GGSCI> start mgr
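Put together, the installation commands look roughly like this (assuming the tarball abc.tar has been copied to /ogg/11.2 as above; the Manager port is illustrative):
cd /ogg/11.2
tar -xvf abc.tar
./ggsci
GGSCI> CREATE SUBDIRS
GGSCI> EDIT PARAMS MGR      (add at least: PORT 7809)
GGSCI> START MGR
GGSCI> INFO MGR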
34.
Prerequisites
• /ogg/11.2 should have at least 500 GB of space
• Ask the DBA to create a GoldenGate user in the database; let's say gguser is the GoldenGate user
• gguser should have CONNECT, RESOURCE, and SELECT privileges on the source and the DBA privilege on
the target if we are doing DML replication
• gguser should have the DBA role on both source and target if we are using both DDL and DML
replication
• SQL> create tablespace gg_data
2 datafile '/u02/oradata/gg_data01.dbf' size 200m;
• SQL> create user gguser identified by gguser
2 default tablespace gg_data
3 temporary tablespace temp;
• User created.
35.
• SQL> grant connect, resource to gguser;
• Grant succeeded.
• SQL> grant select any dictionary, select any table to gguser;
• Grant succeeded.
• SQL> grant create table to gguser;
• Grant succeeded.
• SQL> grant flashback any table to gguser;
• Grant succeeded.
• SQL> grant execute on dbms_flashback to gguser;
• Grant succeeded.
• SQL> grant execute on utl_file to gguser;
• Grant succeeded.
• Check the login:
• GGSCI> dblogin userid gguser, password gguser
36.
Supplemental Logging
Supplemental logging is critical to healthy replication, especially for tables with update/delete
changes. It can be enabled at the primary key (PK), unique key (UK), KEYCOLS, or ALL-columns level,
for both new and existing objects.
37.
Supplemental Logging
Supplemental logging can again be at the PK, UK, KEYCOLS, or ALL level for new and existing objects.
The Extract parameter DDLOPTIONS ADDTRANDATA, GETREPLICATES, REPORT adds supplemental logging on the
source DB automatically for new objects; for Replicat, supplemental logging must also be added on the
target.
38.
Supplemental Logging
– Required at the database level (Minimum Level)
• ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
• Minimal Supplemental Logging
“logs the minimal amount of information needed for LogMiner to identify, group, and merge the redo
operations associated with DML changes”
– Identification key logging (PK, UK, FK)
• Table Level
• Database Level
• Method
– SQL e.g. ALTER TABLE <> ADD SUPPLEMENTAL LOG DATA …
– GoldenGate e.g. ADD TRANDATA <Table Name>
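A sketch of both methods, using the gguser login and an illustrative table sa.accounts:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER TABLE sa.accounts ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD TRANDATA sa.accounts
GGSCI> INFO TRANDATA sa.accounts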
39.
Supplemental Logging
Time   Source
t1     Create table
t2     ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS
t3     Create PK index
t4     add supplemental log data (id) (temp)
t5     drop SUPPLEMENTAL LOG DATA (ALL)
t6     add supplemental log data (id)
t7     drop supplemental log data (id) (temp)

Time   Target
t1     Create table
t2     Create PK index
t3     ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS
t4     add supplemental log data (id) (temp)
t5     drop supplemental log data (id) (temp)

1. ADDTRANDATA on source for new objects.
2. Monitor replicated objects on target and add supplemental logging manually before any DML change.
40.
Add trandata
• For Oracle, ADD TRANDATA by default enables table-level supplemental logging. The supplemental log
group includes one of the following sets of columns, in the listed order of priority, depending on what is
defined on the table:
1. Primary key
2. First unique key alphanumerically with no virtual columns, no UDTs, no function-based
columns, and no nullable columns
3. First unique key alphanumerically with no virtual columns, no UDTs, or no function-based
columns, but can include nullable columns
4. If none of the preceding key types exist (even though there might be other types of keys
defined on the table) Oracle GoldenGate constructs a pseudo key of all columns that
the database allows to be used in a unique key, excluding virtual columns, UDTs,
function-based columns, and any columns that are explicitly excluded from the Oracle
GoldenGate configuration.
• The command issues an ALTER TABLE command with an ADD SUPPLEMENTAL LOG DATA clause that is
appropriate for the type of unique constraint (or lack of one) that is defined for the table.
• If you are using the Oracle GoldenGate DDL replication feature, use the ADD SCHEMATRANDATA
command to log the required supplemental data. It is possible to use ADD
TRANDATA when DDL support is enabled, but only if you can guarantee one of the following:
● You can stop DML activity on any and all tables before users or applications perform DDL on them.
● You cannot stop DML activity before the DDL occurs, but you can guarantee that:
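For DDL replication, the schema-level form is used instead of per-table TRANDATA; a sketch with an illustrative schema sa:
GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD SCHEMATRANDATA sa
GGSCI> INFO SCHEMATRANDATA sa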
41.
Project overview
(Diagram: DML replication over TCP across the WAN from the PMSPRD source server to the PMSPRD2 target server; a separate PMSPRD DR server receives the same changes for disaster recovery and reporting.)
48.
• Replicat in DR
REPLICAT R_actDR
SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")
SETENV (ORACLE_HOME = "/u01/app/oracle/product/10.2.0.4a")
SETENV (ORACLE_SID = "pmspr40")
--
ASSUMETARGETDEFS
APPLYNOOPUPDATES
--
DISCARDFILE ./dirrpt/R_actDR.dsc, APPEND, MEGABYTES 50
--
GETTRUNCATES
--
USERID gguser, PASSWORD gguser
--
MAP sa.*, TARGET sa.*;
--
-- Enter the following statements from GGSCI before starting this process
--
-- delete replicat R_actDR
-- add replicat R_actDR, exttrail /dirdat/pa
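For context, the source side of such a setup would pair this Replicat with an Extract and a Data Pump; a rough sketch of their parameter files (the group names, local trail prefix, host name, and Manager port are illustrative, not taken from the project):
EXTRACT E_act
USERID gguser, PASSWORD gguser
EXTTRAIL ./dirdat/la
TABLE sa.*;

EXTRACT P_act
USERID gguser, PASSWORD gguser
RMTHOST pmsprd2, MGRPORT 7809
RMTTRAIL /dirdat/pa
TABLE sa.*;
The pump's RMTTRAIL matches the exttrail /dirdat/pa that the Replicat above is registered against.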
49.
Logdump utility
• Logdump (the Oracle GoldenGate Log File Dump Utility) is a useful Oracle GoldenGate utility that
enables you to search for, filter, view, and save data that is stored in a GoldenGate trail or extract
file. The following slides summarize its most useful commands.
50.
Logdump continued
• 1. Find out which local (exttrail) and remote trail (rmttrail) files the data pump is reading and writing to.
GGSCI> info <pump>, detail
2. In Logdump, issue the following series of commands to open the remote trail and find the last record.
Logdump> open <trail file from step 1>
Logdump> ghdr on
Logdump> count
Logdump> skip (<total count>-1)
Logdump> n (for next)
3. Write down the Table name, AuditPos and AuditRBA of the last record. (Do not be confused by the Logdump field names: AuditRBA
shows the Oracle redo log sequence number, and AuditPos shows the relative byte address of the record.)
• The following example shows that the operation was captured from Oracle's redo log sequence number 1941 at RBA 1965756.
Logdump 12 >n
___________________________________________________________________
Hdr-Ind : E (x45) Partition : . (x04)
UndoFlag : . (x00) BeforeAfter: A (x41)
RecLength : 12383 (x305f) IO Time : 2005/11/17 16:02:38.295.847
IOType : 5 (x05) OrigNode : 255 (xff)
TransInd : . (x03) FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 1941 AuditPos : 1965756
Continued : N (x00) RecCount : 1 (x01)
2005/11/17 16:02:38.295.847 Insert Len 12383 RBA 7670707
Name: LLI.S_EVT_ACT
After Image: Partition 4
0000 0009 0000 0005 3532 3432 3900 0100 1500 0032 | ........52429......2
3030 352D 3131 2D31 373A 3136 3A30 313A 3531 0002 | 005-11-17:16:01:51..
000A 0000 0006 5445 5354 2031 0003 0015 0000 3230 | ......TEST 1......20
3035 2D31 312D 3137 3A31 363A 3031 3A35 3100 0400 | 05-11-17:16:01:51...
0A00 0000 0654 4553 5420 3200 0500 0500 0000 0130 | .....TEST 2........0
0006 000A 0000 0000 0000 0000 0000 0007 0005 0000 | ....................
0001 3000 0800 0B00 0000 0778 2035 3234 3239 0009 | ..0........x 52429..
51.
Logdump continued
• 4. In Logdump, issue the following series of commands to open the local trail file and find the record that corresponds to the one you found
in Step 2.
Logdump>open <exttrail file>
• Logdump>ghdr on
Logdump>filter include AuditRBA <number>
Logdump>filter include filename <table_name>
Logdump>filter match all
Logdump>n
• Note: The Logdump filter command to search for a record's redo RBA is "AuditRBA", even though the trail record's header calls it
"AuditPos". This is an anomaly of the Logdump program.
•
5. Verify that the operation is from the same Oracle redo log sequence number as the one you recorded in step 3 (AuditRBA as shown in
the trail header), as there is no way to filter for this in Logdump.
Hdr-Ind : E (x45) Partition : . (x04)
UndoFlag : . (x00) BeforeAfter: A (x41)
RecLength : 12383 (x305f) IO Time : 2005/11/17 16:01:51.000.000
IOType : 5 (x05) OrigNode : 255 (xff)
TransInd : . (x03) FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 1941 AuditPos : 1965756 <- same as the one in step 2.
Continued : N (x00) RecCount : 1 (x01)
2005/11/17 16:01:51.000.000 Insert Len 12383 RBA 7297435
Name: LLI.S_EVT_ACT
After Image: Partition 4
0000 0009 0000 0005 3532 3432 3900 0100 1500 0032 | ........52429......2
3030 352D 3131 2D31 373A 3136 3A30 313A 3531 0002 | 005-11-17:16:01:51..
000A 0000 0006 5445 5354 2031 0003 0015 0000 3230 | ......TEST 1......20
3035 2D31 312D 3137 3A31 363A 3031 3A35 3100 0400 | 05-11-17:16:01:51...
0A00 0000 0654 4553 5420 3200 0500 0500 0000 0130 | .....TEST 2........0
0006 000A 0000 0000 0000 0000 0000 0007 0005 0000 | ....................
0001 3000 0800 0B00 0000 0778 2035 3234 3239 0009 | ..0........x 52429..
Filtering suppressed 3289 records
6. Compare the output of the record (data part) from the remote trail with that of the record from the local trail, and make sure it is the
same. This shows they are the same data record.
52.
Logdump continued
• 7. In Logdump, issue the following commands to clear the filter and find the next record in the local trail after the one found in Step
4. Logdump searches based on the GoldenGate record's RBA within the trail file (the "extrba"). Write down the GoldenGate RBA. In
the following example, it would be 7309910.
Logdump>filter clear
Logdump>n
___________________________________________________________________
Hdr-Ind : E (x45) Partition : . (x04)
UndoFlag : . (x00) BeforeAfter: A (x41)
RecLength : 1005 (x03ed) IO Time : 2005/11/17 16:01:52.000.000
IOType : 5 (x05) OrigNode : 255 (xff)
TransInd : . (x03) FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 1941 AuditPos : 1979280
Continued : N (x00) RecCount : 1 (x01)
2005/11/17 16:01:52.000.000 Insert Len 1005 RBA 7309910
Name: LLI.S_EVT_ACT_FNX
After Image: Partition 4
0000 000A 0000 0006 3130 3438 3537 0001 0015 0000 | ........104857......
3230 3035 2D31 312D 3137 3A31 363A 3031 3A35 3100 | 2005-11-17:16:01:51.
0200 0A00 0000 0654 4553 5420 3100 0300 1500 0032 | .......TEST 1......2
3030 352D 3131 2D31 373A 3136 3A30 313A 3531 0004 | 005-11-17:16:01:51..
000A 0000 0006 5445 5354 2032 0005 000A 0000 0000 | ......TEST 2........
0000 0000 0000 0006 0005 0000 0001 3000 0700 0D00 | ..............0.....
0000 0950 4944 3130 3438 3537 0008 0003 0000 4E00 | ...PID104857......N.
•
8. In GGSCI, alter the data pump to start processing at the trail sequence number (extseqno) and relative byte address (extrba) noted
in Step 7. The trail sequence number is the local trail that you opened in Logdump in step 4.
GGSCI> alter extract <pump>, extseqno <trail sequence no>, extrba <trail rba>
For the previous example (assuming the trail sequence number is 000003), the syntax would be:
alter extract pmpsbl, extseqno 3, extrba 7309910
9. Start the data pump.
53.
Logdump other commands
• Search for a timestamp in the trail file:
Logdump 6 > sfts 2013-08-04 (searches for the timestamp '2013-08-04')
• > filter include filename FIN_AUDIT.FIN_GL_ATTR
Filters out all records that do not contain the specified table name and narrows down the set of
records that subsequent commands will operate on, until a "> filter clear" is issued
• > pos FIRST
Goes to the first record in the file
• > pos <rba>
Goes to a specific RBA in the trail file
• > log to <filename>.txt
Writes the Logdump session output to the named file
54.
Filtering Records
You can do some pretty fancy stuff with LOGDUMP filtering. A whole suite of commands is set aside for
this. We can filter on just about anything that exists in the Trail file, such as process name, RBA, record
length, record type, even a string!
The following example shows the required syntax to filter on DELETE operations. Note that
LOGDUMP reports how many records have been excluded by the filter.
Logdump 52 >filter include iotype delete
Logdump 53 >n
2010/11/09 13:31:40.000.000 Delete Len 17 RBA 5863
Name: SRC.USERS
Before Image: Partition 4 G b
0000 000d 0000 0009 414e 4f4e 594d 4f55 53 | ........ANONYMOUS
Filtering suppressed 42 records
55.
Conditions for Extract abend
• Manager on the source is stopped
• Database is down
• Server is down (patch or reboot)
• A lookup object is created and then lost within a few seconds
• LOB objects and long binary values in columns
• Redo log switches while archive logging is not enabled in the database
56.
Conditions for Pump abend
• Database is down
• Source server reboot or patch
• Target server reboot or patch
Conditions for Replicat abend
• Source table does not exist in the target database
• Target database is down for cold backup
• Target server is down
• Source table structure does not match the target database table
• Discard file size exceeds the predefined limit
• Column positions differ between the source and target databases
• Replicat cannot find the current trail position in the trail directory
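When a process abends for one of these reasons, the usual first diagnostic steps are (the group name is illustrative):
GGSCI> INFO ALL
GGSCI> VIEW REPORT R_actDR
GGSCI> VIEW GGSEVT
and checking the ggserr.log file in the GoldenGate home directory.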
57.
Check GoldenGate Replication Performance
• GGSCI> stats <group_name> totalsonly *
• GGSCI> lag <group_name>
• GGSCI> send <pump> gettcpstats
• GGSCI> send <group_name> showch
• GGSCI> stats <group_name> totalsonly *.*
Ideal conditions for replication
• Both databases should be in sync.
• ASSUMETARGETDEFS or SOURCEDEFS should be used in the Replicat parameter file.
• All conversions should be performed on the Replicat side.
58.
Cons of Replication
• DDL (data definition language) replication is still maturing and can be an issue. Supplemental
logging causes extra redo generation in log storage. Licensing cost is higher compared to other
replication products. Configuring GoldenGate in HA (high availability) mode needs extra VIPs, which
can be challenging in a RAC environment.
• Integration of GoldenGate with monitoring tools like OEM (Oracle Enterprise Manager) is still not
available, and most DBAs monitor it with custom scripts.
• File system dependent
• Application synchronization must be designed in the application
Pros of Replication
• Storage hardware independent
• One solution for all applications/data on a server
• Failover/failback may be integrated and automated
• Read access to the second copy, supporting horizontal scaling
• Low cost for software
• Easy to set up
• Easy to manage, especially when resuming and/or recovering activities
• Speed (faster than Streams)
59.
BATCHSQL
Why BATCHSQL
In default mode the Replicat process applies SQL to the target database one statement at a time. This
often causes a performance bottleneck where the Replicat process cannot apply the changes quickly
enough compared to the rate at which the Extract process delivers the data. Despite configuring additional
Replicat processes, performance may still be a problem.
Using BATCHSQL
GoldenGate addresses this issue through the BATCHSQL Replicat configuration parameter.
As the name implies, BATCHSQL organizes similar SQL statements into batches and applies them all at once. The
batches are assembled into arrays in a memory queue on the target database server, ready to be applied.
Similar SQL statements are those that perform a specific operation type (insert, update, or delete)
against the same target table with the same column list. For example, multiple inserts into table A
would be in a different batch from inserts into table B, as would updates on table A or B. Furthermore,
referential constraints are considered by GoldenGate: dependent transactions in different batches
are executed first, dictating the order in which batches are executed. Even with referential integrity
constraints, BATCHSQL can increase Replicat performance significantly.
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000
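In practice the parameter simply goes into the Replicat parameter file; a sketch reusing the R_actDR Replicat from slide 48 (the tuning values are the ones shown above, not recommendations):
REPLICAT R_actDR
USERID gguser, PASSWORD gguser
ASSUMETARGETDEFS
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000
MAP sa.*, TARGET sa.*;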
60.
Example of BATCHSQL performance
• GoldenGate replicat performance
• GGSCI> stats r_xxxx TOTALSONLY * REPORTRATE sec
• You get the detailed statistics of operations per second since the start of the process, since the beginning of the day and last hour for all
tables replicated by that process. For instance:
• *** Hourly statistics since 2012-04-16 13:00:00 ***
• Total inserts/second: 1397.76
• Total updates/second: 1307.46
• Total deletes/second: 991.50
• Total discards/second: 0.00
• Total operations/second: 3696.71
• So here we can see it is doing a bit more than 3500 operations per second, divided quite evenly between inserts, updates, and deletes. As
GoldenGate is usually used for real-time replication and there are no big operations, the client does not use performance-related
parameters. But this time I decided to play with them. After adding both BATCHSQL and INSERTAPPEND to the parameter file of the
replicat process, the results were the following (after more than 10 minutes running), and performance was still increasing:
• *** Hourly statistics since 2012-04-16 13:43:17 ***
• Total inserts/second: 2174.92
• Total updates/second: 2540.20
• Total deletes/second: 1636.16
• Total discards/second: 0.00
• Total operations/second: 6351.28
• We see the performance increased by 90%! I was interested to see whether the BATCHSQL parameter by itself could make the difference. So
I removed the INSERTAPPEND parameter (which only influences the inserts anyway). Here are the results after more than 10 minutes.
• *** Hourly statistics since 2012-04-16 14:00:00 ***
• Total inserts/second: 2402.21
• Total updates/second: 2185.24
• Total deletes/second: 1742.43
• Total discards/second: 0.00
• Total operations/second: 6329.88
• It seems my client's system, in certain situations, benefits mostly from the BATCHSQL parameter. For those who don't know, "in
BATCHSQL mode, Replicat organizes similar SQL statements into batches within a memory queue, and then it applies each batch in one
database operation. A batch contains SQL statements that affect the same table, operation type (insert, update, or delete), and column
list."