Moving 180TB in only a few hours
Mike Dietrich
Vice President
Product Management and Development
Oracle
Migrating the Beast
Copyright © 2024, Oracle and/or its affiliates
Photo by Erda Estremera on Unsplash
MIKE DIETRICH
Vice President
Product Management and Development
Database Upgrade, Migrations & Patching
mikedietrich
@mikedietrichde
https://mikedietrichde.com
Find slides and much more on our blogs
MikeDietrichDE.com
Mike.Dietrich@oracle.com
dohdatabase.com
Daniel.Overby.Hansen@oracle.com
DBArj.com.br
Rodrigo.R.Jorge@oracle.com
AlexZaballa.com
Alex.Zaballa@oracle.com
Virtual Classroom Seminars
https://MikeDietrichDE.com/videos
More than 35 hours of technical content,
on-demand, anytime, anywhere
Introduction
Who is who?
ANDREAS GROETZ
Oracle DBA Tech Lead
Entain Services Austria GmbH
Entain is one of the world's largest sports betting and gaming groups. Leveraging the power of the Entain Platform, they bring moments of excitement into their customers' lives through more than 30 iconic brands such as bwin, Coral, Ladbrokes and many more.
Entain operates under more than 140 licenses across 40+ territories and employs a talented workforce of over 29,000 people. Entain is listed on the London Stock Exchange and is a constituent of the FTSE 100 Index.
Challenges
What is special, what makes it so complex?
Migration Challenges
SPARC SuperCluster → Exadata X9M Extreme Flash with ZDLRA
• 180TB size
• 15TB redo/day
• 5 Physical Standby DBs: local, and in a different region, 2500km away
Constraints
Limiting factors, and other things to know
Up to 15TB redo/day is beyond what Oracle GoldenGate can keep in sync
The system is highly active 24 x 7 x 365
Photo by Mihály Köles on Unsplash
Very large database,
very constrained downtime
• 180+TB database size
• 5-6 TB growth/month
• Every minute of downtime costs $$$
• Users immediately move to competitors
Photo by Dave Hoefler on Unsplash
Migrating tablespaces upfront or separately was definitely not an option
• Way too many cross-dependencies
• Tablespaces aren't isolated
Every complex Oracle data type you
can imagine is used
• XML binary types
• Nested partitioned tables
• Evolved object types
Photo by Mihály Köles on Unsplash
Photo by Linh Ha on Unsplash
Tight downtime window
• Dry run: 2 hours outage approved
• Tablespaces read-only
• Full Transportable Export
• Live migration: 13 hours approved
Photo by Mihály Köles on Unsplash
Photo by Linh Ha on Unsplash
Photo by Masaaki Komori on Unsplash
The available Oracle V4 PERL migration scripts would have worked technically, but ...
• No section-size backup support
• No standby backup support
• No selective PDB migration support
Photo by Morgane Le Breton on Unsplash
Migration
Typically, we use Full Transportable Export/Import for large cross-endian migrations
• Data Pump par file:
FULL=YES
TRANSPORTABLE=ALWAYS
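A minimal sketch of such a par file; the directory, dump file, and log file names are illustrative, not from the actual project:

```
full=yes
transportable=always
directory=DATA_PUMP_DIR
dumpfile=ftex_%U.dmp
logfile=ftex_exp.log
metrics=yes
logtime=all
exclude=statistics
```

Excluding statistics is a common choice when statistics are moved separately, as this project did.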
Concept
• Source: non-CDB, 19c, with SYSTEM, SYSAUX, USERS, and DATA tablespaces in a different endian format
• Target: PDB, 23ai (tablespaces not plugged in yet)
• Level 0 backup → restore and convert on the target
• Level 1 incremental backups → convert, recover, merge
• Set tablespaces read-only → final level 1 backup
• Data Pump Full Transportable Export/Import → tablespace plug-in and metadata import (users, tables, indexes, triggers, PL/SQL, ...)
• Shut down source
M5 Migration Script
The new migration scripts superseding the V4 PERL scripts
# source database
RUN
{
ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT '...';
ALLOCATE CHANNEL d2 DEVICE TYPE DISK FORMAT '...';
BACKUP
FILESPERSET 1
SECTION SIZE 64G
TAG UP19_L0_240206101548
TABLESPACE <list-of-tablespace>;
}
# target database
RUN
{
ALLOCATE CHANNEL DISK1 DEVICE TYPE DISK FORMAT '...';
ALLOCATE CHANNEL DISK2 DEVICE TYPE DISK FORMAT '...';
RESTORE ALL FOREIGN DATAFILES TO NEW FROM BACKUPSET
'<backup-set-1>',
'<backup-set-2>',
...
'<backup-set-n>';
}
Benefits
M5 procedure supports:
• Encrypted tablespaces
• Multisection backups
• Migrating multiple databases into the same CDB simultaneously
• Compressed backup sets
• Better parallelism
Requirements
• Source and target database must
• be 19.18.0 or higher
• use the Data Pump Bundle Patch
• Target must use Oracle Managed Files (OMF)
• Set the DB_CREATE_FILE_DEST parameter
Always use the latest version of the M5 script
• Download from Doc ID 2999157.1
Use Block Change Tracking for faster incremental backups
• Check the License Guide for details
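A minimal sketch of enabling it on the source; with OMF in place, the optional USING FILE clause can be omitted:

```sql
-- Enable Block Change Tracking so level 1 incrementals scan only changed blocks
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

-- Verify that tracking is active
SELECT status, filename FROM v$block_change_tracking;
```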
Migration Plan
• Data Center 1: PROD and LOCAL STBY on SPARC M8 SC, migrated to NEW PROD and LOCAL STBY on Exadata X9M-EF with ZDLRA
• Data Center 2: REP STBY on SPARC M7 SC, migrated to REP STBY
• Data Center Far Away (2500km distance): Disaster Recovery Site
• Migration: FTEX + M5
Migration Workflow
Configure → Level 0 → Level 1 → Outage → Final Backup → Final restore → Export → Import
SPARC SuperCluster → Exadata X9M Extreme Flash, ZDLRA, local standby
M5 Workflow
• Download M5 script from Doc ID 2999157.1
• Configure shared NFS
• Edit dbmig_ts_list.txt
• Edit dbmig_driver.properties
• Create new, empty target database
M5 Workflow
• Start initial level 0 backup
• Use driver script dbmig_driver_m5.sh L0
• Driver script creates a restore script
• Restore using restore_L0_<source_sid>_<timestamp>.cmd
• Check logs
M5 Workflow
• Start level 1 incremental backup
• Use driver script dbmig_driver_m5.sh L1
• Driver script creates a restore script
• Restore using restore_L1_<source_sid>_<timestamp>.cmd
• Check logs
• Repeat as often as desired
M5 Workflow
• Maintenance window begins
• Read-only sessions can still use the database
M5 Workflow
• Start final level 1 incremental backup
• Use driver script dbmig_driver_m5.sh L1F
• Sets tablespaces read-only
• Performs level 1 incremental backup
• Starts Data Pump full transportable export
• Optionally, shuts down source database
M5 Workflow
• Driver script creates a restore script
• Restore using restore_L1F_<source_sid>_<timestamp>.cmd
• Check logs
M5 Workflow
• Copy Data Pump dump file to DATA_PUMP_DIR
• Use import driver script in test mode
• Start impdp.sh <dump_file> <restore_log> test
• Check generated parameter file
• Use impdp.sh <dump_file> <restore_log> run
• Check Data Pump log file
Recommendations
• Use a shared NFS mount point
• Attach to source and target
• Use for scripts, backups, logs, etc.
• If NFS is not possible
• Manually copy files from source to target after each run
• M5 can copy scripts using DEST_SERVER (for ZDLRA only)
Wanna try it out?
Oracle LiveLabs – run the lab right inside your browser!
The Project Plan
Timelines and the Run Book
Overall Project Timeline
• 2021/2022: PoC, HW order (PoC phase, HW order & delivery)
• May 2022: Information exchange and first meeting
• Oct 2022: Exadata and ZDLRA setup (initial tests, HW setup)
• Dec 2022: First mild escalation (hot phase started)
• Apr 2023: Exec escalation (migration tests, bug fixing, standby, stress test)
• Oct 3, 2023: Real dry run with PROD and FTEX downtime
• Oct 17, 2023: Live migration completed successfully, go-live
Key to Success: Runbook
Complex projects absolutely require a detailed runbook
• This runbook covered over 200 individual tasks
Runbook columns: ID, Task, Status, Responsible Primary Person, Responsible Secondary Person, Predecessor, Start Time (CEST), Duration (hh:mm), End Time (CEST), Start Time (IST), End Time (IST), Actual Start Time (CEST), Actual Duration, Actual End Time (CEST), Comments/Blocker
Timeline Live Migration
Tasks in the window (22:00 – 8:00): Stats and AWR expdp etc., application downtime, export / truncate audit, tablespaces read-only, final L1 backup, Full Transportable Export, recover final L1 backup, Full Transportable Import, gather and import stats, functional/internal tests
• Application downtime: 22:45 h – 8:00 h
• Database migration time: 23:48 h – 5:05 h
• Individual steps took between 30 and 52 minutes; the two longest phases ran 165 min (2:45 hrs) and 175 min (2:55 hrs)
How many people were involved?
War Room:
• 6 DBAs
• 2-3 infrastructure
• 2 network
• 4 DB developers
• 4 product engineering
• 4 compliance
• 2 data center engineers
• 2 monitoring, communication
• 10 testing
• Managers
The Top-10 Migration Issues
Listed in random order
Where we started ...
PoC first FTEX export:
01-OCT-21 05:32:36.275: Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at
Fri Oct 1 05:32:36 2021 elapsed 0 04:25:22
PoC first FTEX import:
05-OCT-21 01:48:59.534: Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 103000
error(s) at Tue Oct 5 01:48:59 2021 elapsed 3 18:34:09
The way forward ...
• 80 SRs opened and solved across migration, ZDLRA, standby, and infrastructure
• 18 one-off patches
• 5 merges
• Daily calls with Oracle, countless evening / night / weekend hours
Many areas required special attention….
Optimizer Statistics
Scheduler jobs
Resource Manager
Cross Schema objects
AQ
Evolved Types/partitioned nested tables
Binary XML
Standby DBs
…
Photo by Justin Chrn on Unsplash
Issue 1 | Long Running Meta Import
Fix applied to remedy export errors
• BUG 34201281 - MERGE ON DATABASE RU 19.12.0.0.0 OF 33963454 34052641
Result:
• Now Full Transportable import alone took over 6 days (!!)
08-JUN-22 16:21:17.887: W-1 Processing object type DATABASE_EXPORT/.../PROCACT_INSTANCE
14-JUN-22 18:56:58.813: W-1 Completed 108 PROCACT_INSTANCE objects in 527737 seconds
…
14-JUN-22 19:15:50.016: Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" completed with 316 error(s)
at Tue Jun 14 19:15:49 2022 elapsed 6 06:45:38
Issue 1 | Long Running Meta Import
Long running action identified via tracing:
• 300+ million rows
► BUG 33514440 - SYS_REMAP_XMLTYPE OPERATOR CAUSING PERFORMANCE WITH LARGE DATASETS
Issue in internal package DBMS_CSX_INT
• Fast merge of XMLTYPE is not happening as expected
• Reason: Incorrect internal check query
• Tokens between source and target are not identical
• Reason: Different Endianness
UPDATE "POSTMAN"."T_MAIL_LOG"
SET "C_CVAR"=SYS_REMAP_XMLTYPE("C_CVAR")
Issue 1 | Long Running Meta Import
Solution:
• Use workaround from MOS Note: 2309649.1 in UPGRADE mode
• MOS Note: 2309649.1 - How to Migrate Large Amount of Binary XML Data between Databases
• Bug 34425044 - BINARY XML MIGRATION SCRIPT NEEDS UPDATING (TTS_MIGRATION_BINARY_XML.ZIP / 2309649.1)
Steps:
• Create new empty PDB
• Open PDB in UPGRADE mode and set _ORACLE_SCRIPT=TRUE
• Apply the XML token workaround from MOS Note 2309649.1
• Additional preparation tasks (creating directories, resource groups etc.)
• Set guaranteed restore point
• Start Full Transportable import
25-JUL-22 12:28:40.813: Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" completed
with 317 error(s) at Mon Jul 25 12:28:40 2022 elapsed 0 04:33:03
Using binary XML
--Create a new table using XMLTYPE
CREATE TABLE CARS (CARDATA XMLTYPE);
--An XMLTYPE column consists of two columns
SELECT COLUMN_NAME, DATA_TYPE, HIDDEN_COLUMN, VIRTUAL_COLUMN FROM USER_TAB_COLS;
COLUMN_NAME DATA_TYPE HIDDEN_COLUMN VIRTUAL_COLUMN
_______________ ____________ ________________ _________________
CARDATA XMLTYPE NO YES
SYS_NC00002$ BLOB YES NO
The XMLTYPE column is a virtual column; the hidden BLOB column holds the encoded/compressed XML data
select *
from dba_xml_tab_cols
where storage_type= 'BINARY'
and owner != 'SYS';
--Detecting Binary XML in your Oracle Database
Binary XML
<CARS>
<CAR>
<MODEL>
Volvo V90
</MODEL>
</CAR>
</CARS>
XML document
INSERT INTO CARS ...
Compress and encode XML to binary format
1. Generate tokens from namespace and tags
2. Insert into central token table
3. Insert encoded XML into CARS
INSERT INTO CARS VALUES('<CARS><CAR><MODEL>Volvo V90</MODEL></CAR></CARS>');
SELECT CARDATA, SYS_NC00002$ FROM CARS;
CARDATA
____________________________
<CARS>
<CAR>
<MODEL>Volvo V90</MODEL>
</CAR>
</CARS>
SYS_NC00002$
__________________________________________________________
9F01639E000000C850B4C81F1FC0085D90566F6C766F20563930D9D9A0
SYS_NC00002$
__________________________________________________________
9F01639E000000C850B4C81F1FC0085D90566F6C766F20563930D9D9A0
select * from XDB.X$QN6LVUTKVD49541E0L8000000001;
NMSPCID LOCALNAME FLAGS ID
__________ _________________ ________ _______
... ... ... ...
07 CARS 00 50B4
07 CAR 00 1F1F
07 MODEL 00 5D90
Token repository
Binary XML
• The table contains the encoded binary XML
• The dictionary contains the token repository
• Both are required to read and understand the data
Data Pump moves tokens during
Full Transportable Export/Import
impdp ... transport_datafiles=<list-of-files>
ORA-39945: Token conflicting with existing tokens
--If a token is already in use in the target database
--Data Pump skips the table to avoid data corruption
Binary XML
Possible solutions:
1. Conventional Data Pump export
• When Data Pump inserts the binary XML, the target database generates a new token
2. Prune the target database tokens
• Re-use all source database tokens in the target database
• Works on brand new, empty target databases only
• Be aware of bug 34425044
• Strongly recommended: patch the source and target database using the latest available Release Update and Data Pump Bundle Patch
• How to Migrate Large Amount of Binary XML Data between Databases (Doc ID 2309649.1)
Binary XML
Binary XML uses tokens to compress/encode XML data. The token IDs and their values (tag names) are stored in a central token table. During TTS import, the tokens need to be reused, which means tokens on the exporting and importing sides cannot conflict with each other. In case of a conflict, no XML data can be imported and an error message is raised during TTS import.
The Export and Import utilities can be used to move binary XML storage data between environments. This option works well on smaller datasets; however, since it involves several internal insert commands, it is very resource intensive on larger data migrations. For customers with a limited planned migration window, this is not a feasible option.
How to move XMLType tables/columns with Binary XML Storage between schemas/databases (Doc ID 1405457.1)
Binary XML
How to find XML token table:
select TOKSUF from xdb.xdb$ttset;
Add as suffix:
--Tags
select * from XDB.X$QN<toksuf>;
--Namespaces
select * from XDB.X$NM<toksuf>;
Staging tables during FTEX:
xdb.xdb$tsetmap
xdb.xdb$ttset
Issue 2 | XMLTYPE
Data type owners changed during transportable export/import
• Before:
OWNER     TABLE_NAME    COLUMN_NAME  DATA_TYPE  DATA_TYPE_OWNER
OPS1      T_PMAIL       C_XMLPAR     XMLTYPE    PUBLIC
OPS1      T_PMAIL_LOG   C_XMLPAR     XMLTYPE    PUBLIC
POSTMAN   T_MAIL_LOG    C_CVAR       XMLTYPE    PUBLIC
POSTMAN   T_MAIL        C_CVAR       XMLTYPE    PUBLIC
• After:
OWNER     TABLE_NAME    COLUMN_NAME  DATA_TYPE  DATA_TYPE_OWNER
OPS1      T_PMAIL       C_XMLPAR     XMLTYPE    SYS
OPS1      T_PMAIL_LOG   C_XMLPAR     XMLTYPE    PUBLIC
POSTMAN   T_MAIL_LOG    C_CVAR       XMLTYPE    PUBLIC
POSTMAN   T_MAIL        C_CVAR       XMLTYPE    SYS
Issue 3 | Data Pump Bundle Patch
Many of the TTS and Metadata fixes got included into Data Pump Bundle
• Data Pump Recommended Proactive Patches For 19.10 and Above (Doc ID 2819284.1)
Starting point: RU 19.12.0
• DPBP 19.14.0: fixes missing
• DPBP 19.15.0: fixes missing
• DPBP 19.16.0: fixes missing
• DPBP 19.17.0: regression
• DPBP 19.18.0 (RU 19.18.0): migration release, all required fixes are included
Issue 3 | Data Pump Bundle Patch
Necessary fixes were included in DPBP on top of RU 19.18.0
• Fixes containing .c files can't be included into the Data Pump Bundle Patch
• Bug 30910255 - xml storage type corrupted during tts import with partitioned columns
• Fix included since 19.18.0 and with Oracle 23c
• Merge request for Solaris would have been an alternative
• MOS Note: 2881221.1 - ORA-00600:[qmcxdGetTextXYZ1] is thrown for a query after XTTS
migration has been performed
It took a while to include all necessary fixes into a Data Pump Bundle Patch
Issue 4 | Metadata API and Nested Tables
Full transportable import errors out for a nested partitioned table
Root cause was a string overflow in the Metadata API
• Data Pump creates the index, and then alters it – here the overflow happened
• Side effect was a misleading error message
Resulted in fixes:
• Bug 34899010 - ORA-39151 DURING XTTS IMPORT - ALTER INDEX PROBLEM
• Bug 34965629 - ORA-39151 DURING XTTS IMPORT - INCORRECT MESSAGE
PLS-00172: string literal too long
ORA-39151: Table "DBA_XY"."X_GAMES" exists.
All dependent metadata and data will be skipped due to table_exists_action
Issue 5 | Evolved Object Types
Evolved TYPEs can lead to Data Pump errors during transportable import:
4 options:
• Use regular Data Pump import
• Recreate types after import
• Recreate types without evolution before export
• Create types in target exactly the same before import
Further Information:
• Blog Post: Understand why Data Pump errors with evolved types
ORA-39083: Object type TABLE:"APPUSER"."CARS" failed to create with error:
ORA-39218: type check on object type "APPUSER"."CAR_TYPE" failed
ORA-39216: object type "APPUSER"."CAR_TYPE" hashcode or version number mismatch
Using evolved types in table definitions
--Create a new type. The type is now version 1
--Use the type in a table
CREATE TYPE CAR_INFO_TYPE IS OBJECT (model VARCHAR2(40));
CREATE TABLE CARS (id number, car_info car_info_type);
INSERT INTO CARS VALUES (1, car_info_type('Volvo V90'));
--Make a change to the type. The type is now version 2
ALTER TYPE CAR_INFO_TYPE ADD ATTRIBUTE horsepower NUMBER CASCADE NOT INCLUDING TABLE DATA;
INSERT INTO CARS VALUES (2, car_info_type('BMW 530i', 250));
--Make another change to the type. The type is now version 3
ALTER TYPE CAR_INFO_TYPE ADD ATTRIBUTE color VARCHAR2(20) CASCADE NOT INCLUDING TABLE DATA;
INSERT INTO CARS VALUES (3, car_info_type('Hyundai Sonata', 160, 'Black'));
The type is now evolving
Existing data is not updated
Evolved Types
CARS
1 car_info_type v1: Volvo V90
2 car_info_type v2: BMW 530i, 250
3 car_info_type v3: Hyundai Sonata, 160, Black
DICTIONARY
car_info_type v1 model
car_info_type v2 model, horsepower
car_info_type v3 model, horsepower, color
SELECT * FROM CARS
Data Pump recreates types during
Full Transportable Export/Import
Evolved Types
• To avoid data corruption,
Data Pump must recreate the exact same type evolution in target database
• Due to implementation restrictions,
it is not always possible to recreate the exact same type evolution
• In such situations, to avoid corruption,
Data Pump reports ORA-39218 or ORA-39216 on import
--Identifying tables with evolved types
--These tables potentially pose a problem during transportable import
select owner, table_name, column_name, data_type_owner, data_type
from dba_tab_cols
where (data_type_owner, data_type) IN (
select distinct u.username, o.name
from obj$ o, dba_users u, type$ t
where o.owner# = u.user_id
and oracle_maintained='N'
and o.oid$ = t.toid
and t.version# > 1
group by u.username, o.name);
Find types with
more than one version
Find tables
using these types
Evolved Types | Possible Solutions
1. Conventional Data Pump export
2. Manually recreate type in target database with matching evolution
3. Recreate type without evolution before export
Blog post with details
Issue 6 | Advanced Queueing
Source database (queue table and queue infrastructure):
<queue_table_name>
AQ$_<queue_table_name>_E, _I, _T, _F
Target database:
<queue_table_name>
AQ$_<queue_table_name>_E, _I, _T, _F
AQ$_<queue_table_name>_C, _D, _G, _H, _L, _P, _S, _V
Issue 6 | Advanced Queueing
Queue tables and underlying objects may change during import
• COMPATIBLE during creation of queue tables matters
• COMPATIBLE during import matters as well
• MOS Note: 2291530.1 - Understanding How AQ Objects Are Exported And Imported
• Blog post: Changing data types in queue tables during import
Options:
• Recreate the queue tables with "old" COMPATIBLE setting
• Benefit from new COMPATIBLE setting and test the application
• Understanding How Advanced Queueing (AQ) Objects
Are Exported And Imported (Doc ID 2291530.1)
Take into account when comparing source
and target databases' object count
• Manually start queues after migration
• Use DBMS_AQADM.START_QUEUE
Data Pump does not start queues
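A hedged sketch of this step; the queue name below is hypothetical:

```sql
-- Queues remain stopped after import; check which ones need starting ...
SELECT owner, name, enqueue_enabled, dequeue_enabled
FROM   dba_queues
WHERE  queue_type = 'NORMAL_QUEUE';

-- ... then start them explicitly
BEGIN
  DBMS_AQADM.START_QUEUE(queue_name => 'APP.ORDERS_Q');
END;
/
```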
Issue 7 | Default Tablespaces
Due to a security fix, the export wants to write into the user's default tablespace
• Bug 27692190
• But default tablespaces are read-only while the full transportable export runs
Workaround:
• Change default tablespace for all users to SYSTEM
• Full Transportable Export
• Full Transportable Import
• Revert default tablespaces back to original in target (and source)
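The workaround can be scripted; a minimal sketch (the table name ts_default_backup is hypothetical) that captures the original settings and generates the ALTER USER statements:

```sql
-- Preserve the current default tablespace of all application users
CREATE TABLE ts_default_backup AS
  SELECT username, default_tablespace
  FROM   dba_users
  WHERE  oracle_maintained = 'N';

-- Generate statements to switch everyone to SYSTEM before the export
SELECT 'ALTER USER "' || username || '" DEFAULT TABLESPACE SYSTEM;'
FROM   dba_users
WHERE  oracle_maintained = 'N';

-- After the import (and on the source), generate the revert statements
SELECT 'ALTER USER "' || username || '" DEFAULT TABLESPACE "'
       || default_tablespace || '";'
FROM   ts_default_backup;
```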
Issue 8 | Exporting Statistics
Exporting statistics using DBMS_STATS.EXPORT_SCHEMA_STATS is slow: 8h
10046 trace reveals long runtime on:
EXP_STAT$
EXP_OBJ$
Expression statistics for Auto-Indexing
Default: _column_tracking_level=53
Issue 8 | Exporting Statistics
Solution? Clean up expression statistics in EXP_STAT$ and EXP_OBJ$ (MOS Note: 2803002.1)
DBMS_STATS.EXPORT_SCHEMA_STATS then results in:
ORA-00600 [qosdExpStatRead: expcnt mismatch]
Issue 8 | Exporting Statistics
Solution? Fix for Bug 31143146, included since RU 19.12.0, but disabled by default:
exec dbms_optim_bundle.set_fix_controls('31143146:1','*', 'BOTH','YES');
This sets _column_tracking_level=1, and DBMS_STATS.EXPORT_SCHEMA_STATS works fast and flawlessly
Issue 9 | Gather Dictionary Statistics
Gather dictionary / fixed objects statistics is slow?
exec dbms_stats.set_global_prefs ('CONCURRENT', 'ALL');
exec dbms_stats.set_global_prefs ('DEGREE', DBMS_STATS.AUTO_DEGREE);
Run it only from one instance!
Issue 10 | Auditing
Tablespaces containing auditing tables can't be set read-only
Data Pump always unloads the audit records into the dump file
• Huge audit trail will lead to a huge dump file and longer outage
Options:
• Export audit records, and optionally import them afterwards
• Archive audit records, purge the audit trail
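For the unified audit trail, archiving and purging can be combined with DBMS_AUDIT_MGMT; a hedged sketch, where the seven-day retention is an arbitrary example:

```sql
BEGIN
  -- Mark everything older than 7 days as archived ...
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    last_archive_time => SYSTIMESTAMP - INTERVAL '7' DAY);
  -- ... and purge only the records covered by that timestamp
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    use_last_arch_timestamp => TRUE);
END;
/
```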
DBAs
run the world
Oracle
YouTube | Oracle Database Upgrades and Migrations
• 300+ videos
• New videos every week
• No marketing
• No buzzwords
• All tech
Thank You

59597874043496781074_2024_10_11_MigratingTheBeast.pdf

  • 1.
    Moving 180TB inonly a few hours Mike Dietrich Vice President Product Management and Development Oracle Migrating the Beast Copyright © 2024, Oracle and/or its affiliates Photo by Erda Estremera on Unsplash
  • 2.
    Copyright © 2024,Oracle and/or its affiliates 2 MIKE DIETRICH Vice President Product Management and Development Database Upgrade, Migrations & Patching mikedietrich @mikedietrichde https://mikedietrichde.com
  • 3.
    Find slides andmuch more on our blogs Copyright © 2024, Oracle and/or its affiliates 3 MikeDietrichDE.com Mike.Dietrich@oracle.com dohdatabase.com Daniel.Overby.Hansen@oracle.com DBArj.com.br Rodrigo.R.Jorge@oracle.com AlexZaballa.com Alex.Zaballa@oracle.com
  • 4.
    Copyright © 2024,Oracle and/or its affiliates 4 Virtual Classroom Seminars https://MikeDietrichDE.com/videos More than 35 hours of technical content, on-demand, anytime, anywhere
  • 5.
    Introduction Who is who? 5Copyright © 2024, Oracle and/or its affiliates
  • 6.
    Copyright © 2024,Oracle and/or its affiliates 6 ANDREAS GROETZ Oracle DBA Tech Lead Entain Services Austria GmbH
  • 7.
    Entain is oneof the world's largest sports betting and gaming groups. Leveraging the power of the Entain Platform, they bring moments of excitement into their customers lives through more than 30 iconic brands such as bwin, Coral, Ladbrokes and many more. Entain operates on over 140 licenses across 40+ territories and employs over 29,000 talented workforce. Entain is listed on the London Stock Exchange and is a constituent of the FTSE 100 Index.
  • 8.
  • 9.
    Challenges What is special,what makes it so complex? 9 Copyright © 2024, Oracle and/or its affiliates
  • 10.
    Migration Challenges Copyright ©2024, Oracle and/or its affiliates 10 SPARC SuperCluster Exadata X9M Extreme Flash ZDLRA
  • 11.
    Migration Challenges Copyright ©2024, Oracle and/or its affiliates 11 SPARC SuperCluster Exadata X9M Extreme Flash ZDLRA 180TB size
  • 12.
    Migration Challenges Copyright ©2024, Oracle and/or its affiliates 12 SPARC SuperCluster Exadata X9M Extreme Flash ZDLRA 15TB redo/day
  • 13.
    Migration Challenges Copyright ©2024, Oracle and/or its affiliates 13 SPARC SuperCluster Exadata X9M Extreme Flash ZDLRA 5 Physical Standby DBs Local, and in different region, 2500km away
  • 14.
    Constraints Limiting factors, andother things to know 14 Copyright © 2024, Oracle and/or its affiliates
  • 15.
    Up to 15TBredo/day is beyond what Oracle GoldenGate will be able to synch The system is highly active 24 x 7 x 365 15 Copyright © 2024, Oracle and/or its affiliates Photo by Mihály Köles on Unsplash
  • 16.
    Very large database, veryconstrained downtime • 180+TB database size • 5-6 TB growth/month • Every minute of downtime costs $$$ • Users immediately move to competitors 16 Copyright © 2024, Oracle and/or its affiliates Photo by Dave Hoefler on Unsplash
  • 17.
    Migrating tablespaces upfrontor separately definitely not an option • Way too many cross-dependencies • Tablespaces aren't isolated 17 Copyright © 2024, Oracle and/or its affiliates
  • 18.
    Every complex Oracledata type you can imagine is used • XML binary types • Nested partitioned tables • Evolved object types 18 Copyright © 2024, Oracle and/or its affiliates Photo by Mihály Köles on Unsplash Photo by Linh Ha on Unsplash
  • 19.
    Tight downtime window •Dry run: 2 hours outage approved • Tablespaces read-only • Full Transportable Export • Live migration: 13 hours approved 19 Copyright © 2024, Oracle and/or its affiliates Photo by Mihály Köles on Unsplash Photo by Linh Ha on Unsplash Photo by Masaaki Komori on Unsplash
  • 20.
    The available OracleV4 PERL migration scripts would have been worked technically, but ... • No section-size backup support • No standby backup support • No selective PDB migration support 20 Copyright © 2024, Oracle and/or its affiliates Photo by Morgane Le Breton on Unsplash
  • 21.
Migration
  • 22.
Typically, we use Full Transportable Export/Import for large cross-endian migrations
• Data Pump par file:
  FULL=YES
  TRANSPORTABLE=ALWAYS
  • 23.
Concept
Source: Non-CDB 19c (SYSTEM, SYSAUX, USERS, DATA) — tablespaces in a different endian format
Target: PDB 23ai (SYSTEM, SYSAUX; USERS and DATA not yet plugged in)
• Level 0 backup → restore and convert on the target
• Level 1 incremental backups → convert, recover, merge
• Tablespaces read-only → final level 1 backup → shut down source
• Data Pump Full Transportable Export/Import: tablespace plug-in plus metadata import (users, tables, indexes, triggers, PL/SQL, ...)
  • 24.
M5 Migration Script
The new migration scripts superseding the V4 Perl scripts
  • 25.
# source database
RUN {
  ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT '...';
  ALLOCATE CHANNEL d2 DEVICE TYPE DISK FORMAT '...';
  BACKUP FILESPERSET 1 SECTION SIZE 64G
    TAG UP19_L0_240206101548
    TABLESPACE <list-of-tablespace>;
}
  • 26.
# source database
RUN {
  ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT '...';
  ALLOCATE CHANNEL d2 DEVICE TYPE DISK FORMAT '...';
  BACKUP FILESPERSET 1 SECTION SIZE 64G
    TAG UP19_L0_240206101548
    TABLESPACE <list-of-tablespace>;
}

# target database
RUN {
  ALLOCATE CHANNEL DISK1 DEVICE TYPE DISK FORMAT '...';
  ALLOCATE CHANNEL DISK2 DEVICE TYPE DISK FORMAT '...';
  RESTORE ALL FOREIGN DATAFILES TO NEW
    FROM BACKUPSET
      '<backup-set-1>',
      '<backup-set-2>',
      ...
      '<backup-set-n>';
}
  • 27.
Benefits
The M5 procedure supports:
• Encrypted tablespaces
• Multisection backups
• Migrating multiple databases into the same CDB simultaneously
• Compressed backup sets
• Better parallelism
  • 28.
Requirements
• Source and target database must
  • be 19.18.0 or higher
  • use the Data Pump Bundle Patch
• Target must use Oracle Managed Files (OMF)
  • Set the DB_CREATE_FILE_DEST parameter
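The OMF requirement boils down to one parameter on the target; a minimal sketch (the disk group name is an assumption, not from the slides):

```sql
-- Target database: with DB_CREATE_FILE_DEST set, RESTORE ... TO NEW
-- places the restored foreign datafiles as Oracle Managed Files
ALTER SYSTEM SET db_create_file_dest = '+DATA' SCOPE=BOTH;
```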
  • 29.
Always use the latest version of the M5 script
• Download from MOS Doc ID 2999157.1
  • 30.
Use Block Change Tracking for faster incremental backups
• Check the License Guide for details
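Enabling block change tracking on the source is a single statement (shown here relying on OMF for the tracking file; a path can also be given with USING FILE):

```sql
-- Source database: with change tracking enabled, level 1 incremental
-- backups read only the blocks changed since the last backup
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
```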
  • 31.
Migration Plan
• Data Center 1 / Data Center 2: PROD + LOCAL STBY (SPARC M8 SC, SPARC M7 SC) → NEW PROD + LOCAL STBY (Exadata X9M-EF, ZDLRA)
• Data Center Far Away (2500 km distance): Disaster Recovery Site with REP STBY
• Migration: FTEX + M5
  • 32.
Migration Workflow (Oracle CloudWorld)
Configure → Level 0 → Level 1 → Outage → Final backup → Final restore → Export → Import
SPARC SuperCluster → Exadata X9M Extreme Flash + ZDLRA, with local standby
  • 33.
M5 Workflow | Configure
• Download M5 script from Doc ID 2999157.1
• Configure shared NFS
• Edit dbmig_ts_list.txt
• Edit dbmig_driver.properties
• Create new, empty target database
  • 34.
Migration Workflow
Configure → Level 0 → Level 1 → Outage → Final backup → Final restore → Export → Import
  • 35.
M5 Workflow | Level 0
• Start initial level 0 backup
• Use driver script dbmig_driver_m5.sh L0
• Driver script creates a restore script
• Restore using restore_L0_<source_sid>_<timestamp>.cmd
• Check logs
  • 36.
Migration Workflow
Configure → Level 0 → Level 1 → Outage → Final backup → Final restore → Export → Import
  • 37.
M5 Workflow | Level 1
• Start level 1 incremental backup
• Use driver script dbmig_driver_m5.sh L1
• Driver script creates a restore script
• Restore using restore_L1_<source_sid>_<timestamp>.cmd
• Check logs
• Repeat as often as desired
  • 38.
Migration Workflow
Configure → Level 0 → Level 1 → Outage → Final backup → Final restore → Export → Import
  • 39.
M5 Workflow | Outage
• Maintenance window begins
• Read-only sessions can still use the database
  • 40.
Migration Workflow
Configure → Level 0 → Level 1 → Outage → Final backup → Final restore → Export → Import
  • 41.
M5 Workflow | Final backup
• Start final level 1 incremental backup
• Use driver script dbmig_driver_m5.sh L1F
  • Sets tablespaces read-only
  • Performs level 1 incremental backup
  • Starts Data Pump full transportable export
  • Optionally, shuts down the source database
  • 42.
M5 Workflow | Final restore
• Driver script created a restore script
• Restore using restore_L1F_<source_sid>_<timestamp>.cmd
• Check logs
  • 43.
Migration Workflow
Configure → Level 0 → Level 1 → Outage → Final backup → Final restore → Export → Import
  • 44.
M5 Workflow | Import
• Copy Data Pump dump file to DATA_PUMP_DIR
• Use import driver script in test mode
  • Start impdp.sh <dump_file> <restore_log> test
  • Check generated parameter file
• Use impdp.sh <dump_file> <restore_log> run
• Check Data Pump log file
  • 45.
Recommendations
• Use a shared NFS mount point
  • Attach to source and target
  • Use for scripts, backups, logs, etc.
• If NFS is not possible
  • Manually copy files from source to target after each run
  • M5 can copy scripts using DEST_SERVER (for ZDLRA only)
  • 46.
Wanna try it out?
Oracle LiveLabs – run the lab right inside your browser!
  • 47.
The Project Plan
Timelines and the Run Book
  • 48.
Overall Project Timeline
• 2021/2022: PoC phase, stress test, HW order & delivery
• May 2022: Information exchange and first meeting
• Oct 2022: Exadata and ZDLRA setup; initial tests, HW setup
• Dec 2022: First mild escalation; hot phase started
• Apr 2023: Exec escalation; migration tests, bug fixing, standby
• Oct 3, 2023: Real dry run with PROD and FTEX downtime
• Oct 17, 2023: Go-live — live migration completed successfully
  • 49.
Key to Success: Runbook
Complex projects absolutely require a detailed runbook
• This runbook covered over 200 individual tasks
• Columns: ID, Task, Status, Responsible Primary/Secondary Person, Predecessor, Start Time / Duration / End Time (CEST and IST), Actual Start Time / Duration / End Time, Comments - Blocker
  • 50.
Timeline Live Migration (22:00 h – 8:00 h)
• Tasks: stats and AWR expdp etc., application downtime, export/truncate audit, tablespaces read-only, final L1 backup, Full Transportable Export, recover final L1 backup, Full Transportable Import, gather and import stats, functional/internal tests
• Individual task durations: 50 min, 52 min, 30 min, 48 min, 165 min (2:45 hrs), 30+30 min
• Application downtime: 22:45 h – 8:00 h
• Database migration time: 23:48 h – 5:05 h (175 min = 2:55 hrs)
  • 51.
How many people were involved?
War Room: 6 DBAs • 2-3 infrastructure • 2 network • 4 DB developers • 4 product engineering • 4 compliance • 2 data center engineers • 2 monitoring, communication • 10 testing • plus managers
  • 52.
The Top-10 Migration Issues
Listed in random order
  • 53.
Where we started...
PoC first FTEX export:
01-OCT-21 05:32:36.275: Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Fri Oct 1 05:32:36 2021 elapsed 0 04:25:22
PoC first FTEX import:
05-OCT-21 01:48:59.534: Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 103000 error(s) at Tue Oct 5 01:48:59 2021 elapsed 3 18:34:09
  • 54.
The way forward...
• 80 SRs opened and solved in various areas: migration, ZDLRA, standby, infrastructure
• 18 one-off patches
• 5 merges
• Daily calls with Oracle, countless evening / night / weekend hours
  • 55.
Many areas required special attention...
• Optimizer statistics
• Scheduler jobs
• Resource Manager
• Cross-schema objects
• AQ
• Evolved types / partitioned nested tables
• Binary XML
• Standby DBs
• ...
  • 56.
Issue 1 | Long Running Meta Import
Fix applied to remedy export errors:
• BUG 34201281 - MERGE ON DATABASE RU 19.12.0.0.0 OF 33963454 34052641
Result: the Full Transportable import alone now took over 6 days (!!)
08-JUN-22 16:21:17.887: W-1 Processing object type DATABASE_EXPORT/.../PROCACT_INSTANCE
14-JUN-22 18:56:58.813: W-1 Completed 108 PROCACT_INSTANCE objects in 527737 seconds
...
14-JUN-22 19:15:50.016: Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" completed with 316 error(s) at Tue Jun 14 19:15:49 2022 elapsed 6 06:45:38
  • 57.
Issue 1 | Long Running Meta Import
Long running action identified via tracing:
UPDATE "POSTMAN"."T_MAIL_LOG" SET "C_CVAR"=SYS_REMAP_XMLTYPE("C_CVAR")
• 300+ million rows ► BUG 33514440 - SYS_REMAP_XMLTYPE OPERATOR CAUSING PERFORMANCE WITH LARGE DATASETS
Issue in internal package DBMS_CSX_INT:
• Fast merge of XMLTYPE is not happening as expected — reason: incorrect internal check query
• Tokens between source and target are not identical — reason: different endianness
  • 58.
Issue 1 | Long Running Meta Import
Solution:
• Use the workaround from MOS Note 2309649.1 in UPGRADE mode
  • MOS Note 2309649.1 - How to Migrate Large Amount of Binary XML Data between Databases
  • Bug 34425044 - BINARY XML MIGRATION SCRIPT NEEDS UPDATING (TTS_MIGRATION_BINARY_XML.ZIP / 2309649.1)
Steps:
• Create new, empty PDB
• Open PDB in UPGRADE mode and set _ORACLE_SCRIPT=TRUE
• Apply the XML token workaround from MOS Note 2309649.1
• Additional preparation tasks (creating directories, resource groups, etc.)
• Set a guaranteed restore point
• Start Full Transportable import
Result:
25-JUL-22 12:28:40.813: Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" completed with 317 error(s) at Mon Jul 25 12:28:40 2022 elapsed 0 04:33:03
  • 59.
Using binary XML
  • 60.
--Create a new table using XMLTYPE
CREATE TABLE CARS (CARDATA XMLTYPE);

--An XMLTYPE column actually consists of two columns
SELECT COLUMN_NAME, DATA_TYPE, HIDDEN_COLUMN, VIRTUAL_COLUMN FROM USER_TAB_COLS;

COLUMN_NAME     DATA_TYPE    HIDDEN_COLUMN    VIRTUAL_COLUMN
_______________ ____________ ________________ _________________
CARDATA         XMLTYPE      NO               YES               <- XMLTYPE is a virtual column
SYS_NC00002$    BLOB         YES              NO                <- encoded/compressed XML data
  • 61.
--Detecting binary XML in your Oracle Database
select * from dba_xml_tab_cols where storage_type = 'BINARY' and owner != 'SYS';
  • 62.
Binary XML
XML document: <CARS><CAR><MODEL>Volvo V90</MODEL></CAR></CARS>
INSERT INTO CARS ... compresses and encodes the XML to binary format:
1. Generate tokens from namespace and tags
2. Insert into central token table
3. Insert encoded XML into CARS
  • 63.
INSERT INTO CARS VALUES('<CARS><CAR><MODEL>Volvo V90</MODEL></CAR></CARS>');

SELECT CARDATA, SYS_NC00002$ FROM CARS;

CARDATA
____________________________
<CARS>
  <CAR>
    <MODEL>Volvo V90</MODEL>
  </CAR>
</CARS>

SYS_NC00002$
__________________________________________________________
9F01639E000000C850B4C81F1FC0085D90566F6C766F20563930D9D9A0
  • 64.
SYS_NC00002$
__________________________________________________________
9F01639E000000C850B4C81F1FC0085D90566F6C766F20563930D9D9A0

--Token repository
select * from XDB.X$QN6LVUTKVD49541E0L8000000001;

NMSPCID    LOCALNAME         FLAGS    ID
__________ _________________ ________ _______
...        ...               ...      ...
07         CARS              00       50B4
07         CAR               00       1F1F
07         MODEL             00       5D90
  • 65.
Binary XML
• The table contains the encoded binary XML
• The dictionary contains the token repository
• Both are required to read and understand the data
  • 66.
Data Pump moves tokens during Full Transportable Export/Import
  • 67.
impdp ... transport_datafiles=<list-of-files>
ORA-39945: Token conflicting with existing tokens

--If a token is already in use in the target database,
--Data Pump skips the table to avoid data corruption
  • 68.
Binary XML — possible solutions
1. Conventional Data Pump export
   • When Data Pump inserts the binary XML, the target database generates a new token
2. Prune the target database tokens
   • Re-uses all source database tokens in the target database
   • Works on brand new, empty target databases only
   • Be aware of bug 34425044
   • Strongly recommended to patch the source and target database with the latest available Release Update and Data Pump Bundle Patch
   • How to Migrate Large Amount of Binary XML Data between Databases (Doc ID 2309649.1)
  • 69.
Binary XML
Binary XML uses tokens to compress/encode XML data. The token IDs and their values (tag names) are stored in a central token table. During TTS import of data, the tokens need to be reused, which means tokens on the exporting side and the importing side cannot conflict with each other. In case of a conflict, no XML data can be imported and an error message is raised during TTS import.
The Export and Import utilities can be used to move binary XML storage data between environments. This option works well on smaller datasets; however, since it involves several internal insert commands, it is very resource-intensive on larger data migrations. For customers with a limited planned migration window, this is not a feasible option.
How to move XMLType tables/columns with Binary XML Storage between schemas/databases (Doc ID 1405457.1)
  • 70.
Binary XML
How to find the XML token table:
select TOKSUF from xdb.xdb$ttset;
Add as suffix:
--Tags
select * from XDB.X$QN<toksuf>;
--Namespaces
select * from XDB.X$NM<toksuf>;
Staging tables during FTEX: xdb.xdb$tsetmap, xdb.xdb$ttset
  • 71.
Issue 2 | XMLTYPE
Data type owners changed during transportable export/import
Before:
OWNER    TABLE_NAME   COLUMN_NAME  DATA_TYPE  DATA_TYPE_OWNER
OPS1     T_PMAIL      C_XMLPAR     XMLTYPE    PUBLIC
OPS1     T_PMAIL_LOG  C_XMLPAR     XMLTYPE    PUBLIC
POSTMAN  T_MAIL_LOG   C_CVAR       XMLTYPE    PUBLIC
POSTMAN  T_MAIL       C_CVAR       XMLTYPE    PUBLIC
After:
OWNER    TABLE_NAME   COLUMN_NAME  DATA_TYPE  DATA_TYPE_OWNER
OPS1     T_PMAIL      C_XMLPAR     XMLTYPE    SYS
OPS1     T_PMAIL_LOG  C_XMLPAR     XMLTYPE    PUBLIC
POSTMAN  T_MAIL_LOG   C_CVAR       XMLTYPE    PUBLIC
POSTMAN  T_MAIL       C_CVAR       XMLTYPE    SYS
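A simple way to compare the type owners before and after the migration is a standard dictionary query (a sketch; the view is not shown on the slides):

```sql
-- List XMLTYPE columns with their data type owner; run on source and
-- target and diff the output to spot PUBLIC -> SYS changes
SELECT owner, table_name, column_name, data_type_owner
  FROM dba_tab_columns
 WHERE data_type = 'XMLTYPE'
 ORDER BY owner, table_name, column_name;
```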
  • 72.
Issue 3 | Data Pump Bundle Patch
Many of the TTS and metadata fixes got included into the Data Pump Bundle Patch
• Data Pump Recommended Proactive Patches For 19.10 and Above (Doc ID 2819284.1)
Progression:
• RU 19.12.0 — starting point
• DPBP 19.14.0 — fixes missing
• DPBP 19.15.0 — fixes missing
• DPBP 19.16.0 — fixes missing
• DPBP 19.17.0 — regression
• DPBP 19.18.0 on RU 19.18.0 — migration release, all required fixes included
  • 73.
Issue 3 | Data Pump Bundle Patch
It took a while to include all necessary fixes into a Data Pump Bundle Patch
Necessary fixes were included in the DPBP on top of RU 19.18.0
• Fixes containing .c files can't be included in the Data Pump Bundle Patch
• Bug 30910255 - xml storage type corrupted during tts import with partitioned columns
  • Fix included since 19.18.0 and with Oracle 23c
  • A merge request for Solaris would have been an alternative
  • MOS Note 2881221.1 - ORA-00600:[qmcxdGetTextXYZ1] is thrown for a query after XTTS migration has been performed
  • 74.
Issue 4 | Metadata API and Nested Tables
Full transportable import errors out for a nested partitioned table:
PLS-00172: string literal too long
ORA-39151: Table "DBA_XY"."X_GAMES" exists. All dependent metadata and data will be skipped due to table_exists_action
Root cause was a string overflow in the Metadata API:
• Data Pump creates the index, and then alters it — here the overflow happened
• A side effect was a misleading error message
Resulted in fixes:
• Bug 34899010 - ORA-39151 DURING XTTS IMPORT - ALTER INDEX PROBLEM
• Bug 34965629 - ORA-39151 DURING XTTS IMPORT - INCORRECT MESSAGE
  • 75.
Issue 5 | Evolved Object Types
Evolved TYPEs can lead to Data Pump errors during transportable import:
ORA-39083: Object type TABLE:"APPUSER"."CARS" failed to create with error:
ORA-39218: type check on object type "APPUSER"."CAR_TYPE" failed
ORA-39216: object type "APPUSER"."CAR_TYPE" hashcode or version number mismatch
4 options:
• Use regular Data Pump import
• Recreate types after import
• Recreate types without evolution before export
• Create the exact same types in the target before import
Further information:
• Blog post: Understand why Data Pump errors with evolved types
  • 76.
Using evolved types in table definitions
  • 77.
--Create a new type. The type is now version 1
CREATE TYPE CAR_INFO_TYPE IS OBJECT (model VARCHAR2(40));

--Use the type in a table
CREATE TABLE CARS (id NUMBER, car_info CAR_INFO_TYPE);
INSERT INTO CARS VALUES (1, car_info_type('Volvo V90'));

--Make a change to the type. The type is now version 2
--The type is evolving; existing data is not updated
ALTER TYPE CAR_INFO_TYPE ADD ATTRIBUTE horsepower NUMBER CASCADE NOT INCLUDING TABLE DATA;
INSERT INTO CARS VALUES (2, car_info_type('BMW 530i', 250));

--Make another change to the type. The type is now version 3
ALTER TYPE CAR_INFO_TYPE ADD ATTRIBUTE color VARCHAR2(20) CASCADE NOT INCLUDING TABLE DATA;
INSERT INTO CARS VALUES (3, car_info_type('Hyundai Sonata', 160, 'Black'));
  • 78.
Evolved Types
SELECT * FROM CARS
CARS table:
1  car_info_type v1: Volvo V90
2  car_info_type v2: BMW 530i, 250
3  car_info_type v3: Hyundai Sonata, 160, Black
DICTIONARY:
car_info_type v1: model
car_info_type v2: model, horsepower
car_info_type v3: model, horsepower, color
  • 79.
Data Pump recreates types during Full Transportable Export/Import
  • 80.
Evolved Types
• To avoid data corruption, Data Pump must recreate the exact same type evolution in the target database
• Due to implementation restrictions, it is not always possible to recreate the exact same type evolution
• In such situations, to avoid corruption, Data Pump reports ORA-39218 or ORA-39216 on import
  • 81.
--Identifying tables with evolved types
--These tables potentially pose a problem during transportable import
select owner, table_name, column_name, data_type_owner, data_type
from dba_tab_cols
where (data_type_owner, data_type) IN (
   --Find types with more than one version, and the tables using these types
   select distinct u.username, o.name
   from obj$ o, dba_users u, type$ t
   where o.owner# = u.user_id
   and oracle_maintained = 'N'
   and o.oid$ = t.toid
   and t.version# > 1
   group by u.username, o.name);
  • 82.
Evolved Types | Possible Solutions
1. Conventional Data Pump export
2. Manually recreate the type in the target database with matching evolution
3. Recreate the type without evolution before export
Blog post with details
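Option 2 can be sketched with the demo type from the earlier slides: replay the exact same evolution, in the same order, in the target database before the transportable import, so the hashcode/version check passes (a hedged sketch, not the project's actual types):

```sql
-- Recreate the same evolution history in the target: version 1 first,
-- then the same ALTER TYPE steps as were run on the source
CREATE TYPE CAR_INFO_TYPE IS OBJECT (model VARCHAR2(40));
/
ALTER TYPE CAR_INFO_TYPE ADD ATTRIBUTE horsepower NUMBER
  CASCADE NOT INCLUDING TABLE DATA;
ALTER TYPE CAR_INFO_TYPE ADD ATTRIBUTE color VARCHAR2(20)
  CASCADE NOT INCLUDING TABLE DATA;
```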
  • 83.
Issue 6 | Advanced Queueing
Source database:
• <queue_table_name> (queue table)
• AQ$_<queue_table_name>_E, _I, _T, _F (queue infrastructure)
Target database:
• <queue_table_name>
• AQ$_<queue_table_name>_E, _I, _T, _F
• plus AQ$_<queue_table_name>_C, _D, _G, _H, _L, _P, _S, _V
  • 84.
Issue 6 | Advanced Queueing
Queue tables and underlying objects may change during import
• COMPATIBLE during creation of the queue tables matters
• COMPATIBLE during import matters as well
• MOS Note 2291530.1 - Understanding How AQ Objects Are Exported And Imported
• Blog post: Changing data types in queue tables during import
Options:
• Recreate the queue tables with the "old" COMPATIBLE setting
• Benefit from the new COMPATIBLE setting and test the application
  • 85.
Take into account when comparing the source and target databases' object counts
• Understanding How Advanced Queueing (AQ) Objects Are Exported And Imported (Doc ID 2291530.1)
  • 86.
Data Pump does not start queues
• Manually start queues after migration
• Use DBMS_AQADM.START_QUEUE
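Restarting a queue after the import is a one-liner; a minimal sketch (the queue name is an assumption, not from the slides):

```sql
-- Re-enable enqueue and dequeue on a hypothetical queue after migration
BEGIN
  DBMS_AQADM.START_QUEUE(queue_name => 'APPUSER.ORDER_QUEUE');
END;
/
```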
  • 87.
Issue 7 | Default Tablespaces
Due to a security fix, the export wants to write into the user's default tablespace
• Bug 27692190
• But default tablespaces are read-only while the full transportable export runs
Workaround:
• Change the default tablespace for all users to SYSTEM
• Full Transportable Export
• Full Transportable Import
• Revert default tablespaces back to original in target (and source)
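The workaround can be scripted; a hedged sketch that generates the ALTER USER statements for non-Oracle-maintained users (record the current defaults from DBA_USERS first so they can be reverted afterwards):

```sql
-- Generate statements pointing each application user at SYSTEM;
-- spool DBA_USERS.DEFAULT_TABLESPACE beforehand to build the revert script
SELECT 'ALTER USER "' || username || '" DEFAULT TABLESPACE SYSTEM;'
  FROM dba_users
 WHERE oracle_maintained = 'N';
```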
  • 88.
Issue 8 | Exporting Statistics
Exporting statistics using DBMS_STATS.EXPORT_SCHEMA_STATS is slow: 8h
A 10046 trace reveals long runtime on:
• EXP_STAT$
• EXP_OBJ$
Culprit: expression statistics for Auto-Indexing (default: _column_tracking_level=53)
  • 89.
Issue 8 | Exporting Statistics
Solution? Clean up expression statistics in EXP_STAT$ and EXP_OBJ$
• MOS Note 2803002.1
• DBMS_STATS.EXPORT_SCHEMA_STATS then results in: ORA-00600 [qosdExpStatRead: expcnt mismatch]
  • 90.
Issue 8 | Exporting Statistics
Solution? Fix for Bug 31143146
• Included since RU 19.12.0, but disabled by default
• exec dbms_optim_bundle.set_fix_controls('31143146:1','*', 'BOTH','YES');
• Sets _column_tracking_level=1
• DBMS_STATS.EXPORT_SCHEMA_STATS now works fast and flawlessly
  • 91.
Issue 9 | Gather Dictionary Statistics
Gathering dictionary / fixed objects statistics is slow?
exec dbms_stats.set_global_prefs ('CONCURRENT', 'ALL');
exec dbms_stats.set_global_prefs ('DEGREE', DBMS_STATS.AUTO_DEGREE);
Run it only from one instance!
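With the preferences above in place, the actual gather calls are the standard DBMS_STATS procedures (a minimal sketch; run from a single instance, as the slide advises):

```sql
-- Gather dictionary and fixed-object statistics after enabling
-- CONCURRENT and AUTO_DEGREE via DBMS_STATS.SET_GLOBAL_PREFS
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
```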
  • 92.
Issue 10 | Auditing
Tablespaces containing auditing tables can't be set read-only
Data Pump always unloads the audit records into the dump file
• A huge audit trail will lead to a huge dump file and a longer outage
Options:
• Export audit records, and eventually import them afterwards
• Archive audit records, purge the audit trail
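The archive-and-purge option can be sketched with DBMS_AUDIT_MGMT (assuming unified auditing; the 30-day retention window is an assumption):

```sql
-- Mark records older than the archive timestamp as archived, then purge
-- them so they are not unloaded into the Data Pump dump file
BEGIN
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    last_archive_time => SYSTIMESTAMP - INTERVAL '30' DAY);
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    use_last_arch_timestamp => TRUE);
END;
/
```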
  • 93.
DBAs run the world
  • 94.
YouTube | Oracle Database Upgrades and Migrations
• 300+ videos
• New videos every week
• No marketing
• No buzzwords
• All tech
  • 95.
Thank You