DAT308 – Advanced Data Migration Techniques for Amazon RDS
Shakil Langha, Abdul Sathar Sait, Bharanendra Nallamotu
Amazon Web Services – November 13, 2013
Next 60 minutes …
• What is new in Amazon Relational Database Service
• Types of data migration
• General considerations
• Advanced migration techniques for Oracle
• Near-zero downtime migration for MySQL
Amazon RDS Recent Releases
• Oracle Transparent Data Encryption
• MySQL 5.6
• MySQL replication to Amazon RDS
• CR1.8XLarge instance for MySQL 5.6
• Oracle Statspack
• Cross-region snapshot copy
One-Time Migration
Periodic Migration
Ongoing Replication
General Considerations
RDS Pre-Migration Steps
• Stop applications accessing the DB
• Take a snapshot (see the CLI sketch below)
• Disable backups
• Use Single-AZ instances
• Use the optimum instance type for load performance
• Configure security for cross-DB traffic
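As a rough sketch of how the snapshot and backup steps map to the aws CLI (not shown in the deck; the instance identifier mydbinstance is a placeholder):

# snapshot the instance before touching it
$ aws rds create-db-snapshot --db-instance-identifier mydbinstance \
    --db-snapshot-identifier pre-migration-snapshot

# a retention period of 0 disables automated backups for the duration of the load
$ aws rds modify-db-instance --db-instance-identifier mydbinstance \
    --backup-retention-period 0 --apply-immediately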
RDS Post-Migration Steps
• Turn on backups (see the CLI sketch below)
• Turn on Multi-AZ
• Create read replicas
• Tighten down security
• Notifications via Amazon CloudWatch and DB events
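The reverse steps, again as a hedged aws CLI sketch with placeholder identifiers:

# re-enable automated backups (7-day retention here) and turn on Multi-AZ
$ aws rds modify-db-instance --db-instance-identifier mydbinstance \
    --backup-retention-period 7 --multi-az --apply-immediately

# add a read replica once the instance is healthy
$ aws rds create-db-read-replica --db-instance-identifier mydbreplica \
    --source-db-instance-identifier mydbinstance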
Data Migration Approaches for Oracle
Let’s Move Some Data
How about 500 GB?
Migration Process
Data Pump Export
expdp demoreinv/demo full=y
  dumpfile=data_pump_exp1:reinvexp1%U.dmp,data_pump_exp2:reinvexp2%U.dmp,data_pump_exp3:reinvexp3%U.dmp
  filesize=20G parallel=8 logfile=data_pump_exp1:reinvexpdp.log
  compression=all job_name=reInvExp
Data Pump Export Start
Data Pump Export Done
Data Pump Files
Compression Reduces 500 GB to 175 GB
57 + 62 + 56 = 175 GB across the three dump directories
Upload Files to EC2 Using UDP
Install Tsunami on both the source database server and the Amazon EC2 instance.
Open port 46224 for Tsunami communication.
$ yum -y install make
$ yum -y install automake
$ yum -y install gcc
$ yum -y install autoconf
$ yum -y install cvs
$ wget http://sourceforge.net/projects/tsunami-udp/files/latest/download?_test=goal
$ tar -xzf tsunami*gz
$ cd tsunami-udp*
$ ./recompile.sh
$ make install
Using UDP Tool Tsunami
On the source database server, start the Tsunami server:
$ cd /mnt/expdisk1
$ tsunamid *

On the destination EC2 instance, start the Tsunami client and pull the files:
$ cd /mnt/data_files
$ tsunami
tsunami> connect source.db.server
tsunami> get *
Export and Upload in Parallel
• No need to wait till all 18 files are done to start the upload
• Start the upload as soon as the first set of 3 files is done (see the sketch below)
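A minimal sketch of the pipelining idea (paths, file names, and the upload command are illustrative, not from the deck): watch the export directory and hand each Data Pump chunk to the uploader once its size stops changing.

#!/bin/bash
# Pipeline sketch: upload each Data Pump chunk as soon as it stops growing,
# instead of waiting for all 18 files. Stop with Ctrl-C once the export ends.
WATCH=/mnt/expdisk1
UPLOAD_CMD="echo would-upload"   # replace with your Tsunami/scp wrapper
declare -A last uploaded
while :; do
  for f in "$WATCH"/reinvexp*.dmp; do
    [ -e "$f" ] || continue
    size=$(stat -c %s "$f")
    if [ "${last[$f]:-new}" = "$size" ] && [ -z "${uploaded[$f]}" ]; then
      uploaded[$f]=1
      $UPLOAD_CMD "$f" &         # upload while expdp writes the next chunks
    fi
    last[$f]=$size
  done
  sleep 30
done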
Total Time to Upload 175 GB
Transfer Files to the Amazon RDS DB Instance
The Amazon RDS DB instance has an externally accessible Oracle directory object, DATA_PUMP_DIR.
Use a script to move the data files into the Amazon RDS DATA_PUMP_DIR.
Perl Script to Transfer Files to DB Instance

#!/usr/bin/perl
use strict;
use DBI;

# RDS instance info
my $RDS_PORT = 4080;
my $RDS_HOST = "myrdshost.xxx.us-east-1-devo.rds-dev.amazonaws.com";
my $RDS_LOGIN = "orauser/orapwd";
my $RDS_SID = "myoradb";

my $dirname = "DATA_PUMP_DIR";
my $fname = $ARGV[0];
my $data  = "dummy";
my $chunk = 8192;

# PL/SQL snippets: open the remote file, write raw chunks, close it
my $sql_open  = "BEGIN perl_global.fh := utl_file.fopen(:dirname, :fname, 'wb', :chunk); END;";
my $sql_write = "BEGIN utl_file.put_raw(perl_global.fh, :data, true); END;";
my $sql_close = "BEGIN utl_file.fclose(perl_global.fh); END;";
my $sql_global = "create or replace package perl_global as fh utl_file.file_type; end;";

my $conn = DBI->connect('dbi:Oracle:host='.$RDS_HOST.';sid='.$RDS_SID.';port='.$RDS_PORT, $RDS_LOGIN, '')
    || die ($DBI::errstr . "\n");
my $updated = $conn->do($sql_global);

my $stmt = $conn->prepare($sql_open);
$stmt->bind_param_inout(":dirname", \$dirname, 12);
$stmt->bind_param_inout(":fname", \$fname, 12);
$stmt->bind_param_inout(":chunk", \$chunk, 4);
$stmt->execute() || die ($DBI::errstr . "\n");

open (INF, $fname) || die "\nCan't open $fname for reading: $!\n";
binmode(INF);
$stmt = $conn->prepare($sql_write);
my %attrib = (ora_type => 24);   # bind :data as LONG RAW (SQLT type 24)
my $val = 1;
while ($val > 0) {
  $val = read(INF, $data, $chunk);
  $stmt->bind_param(":data", $data, \%attrib);
  $stmt->execute() || die ($DBI::errstr . "\n");
}
die "Problem copying: $!\n" if $!;
close INF || die "Can't close $fname: $!\n";

$stmt = $conn->prepare($sql_close);
$stmt->execute() || die ($DBI::errstr . "\n");
Transfer Files as They Are Received
• No need to wait till all 18 files are received on the EC2 instance
• Start the transfer to the RDS instance as soon as the first file is received (see the sketch below)
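The same pattern applied on the EC2 side, again as a sketch: feed each dump file to the UTL_FILE transfer script (the Perl script above, assumed saved as copy_to_rds.pl) as soon as Tsunami finishes writing it.

#!/bin/bash
# Sketch: push each received dump file into the RDS DATA_PUMP_DIR as it
# lands, using the Perl script above (saved here as copy_to_rds.pl).
cd /mnt/data_files
declare -A last sent
while :; do
  for f in reinvexp*.dmp; do
    [ -e "$f" ] || continue
    size=$(stat -c %s "$f")
    if [ "${last[$f]:-new}" = "$size" ] && [ -z "${sent[$f]}" ]; then
      sent[$f]=1
      perl copy_to_rds.pl "$f" && echo "$f transferred"
    fi
    last[$f]=$size
  done
  sleep 30
done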
Total Time to Transfer Files to RDS
Import Data into the Amazon RDS Instance
• Import from within the Amazon RDS instance using the DBMS_DATAPUMP package
• Submit a job using a PL/SQL script
Import Data into the RDS DB Instance

declare
  h1 NUMBER;
begin
  h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'REINVIMP', version => 'COMPATIBLE');
  dbms_datapump.set_parallel(handle => h1, degree => 8);
  dbms_datapump.add_file(handle => h1, filename => 'IMPORT.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
  dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
  dbms_datapump.add_file(handle => h1, filename => 'reinvexp1%U.dmp', directory => 'DATA_PUMP_DIR', filetype => 1);
  dbms_datapump.add_file(handle => h1, filename => 'reinvexp2%U.dmp', directory => 'DATA_PUMP_DIR', filetype => 1);
  dbms_datapump.add_file(handle => h1, filename => 'reinvexp3%U.dmp', directory => 'DATA_PUMP_DIR', filetype => 1);
  dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
  dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
  dbms_datapump.set_parameter(handle => h1, name => 'REUSE_DATAFILES', value => 0);
  dbms_datapump.set_parameter(handle => h1, name => 'SKIP_UNUSABLE_INDEXES', value => 0);
  dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
  dbms_datapump.detach(handle => h1);
end;
/
Total Time to Import Data into Amazon RDS
Time Taken to Migrate the Database
Optimize the Data Pump Export
• Reduce the data set to an optimal size; avoid exporting indexes
• Use compression and parallel processing
• Use multiple disks with independent I/O
Optimize Data Upload
• Use Tsunami for UDP-based file transfer
• Use a large Amazon EC2 instance with SSD or PIOPS volumes
• Use multiple disks with independent I/O
• Consider multiple Amazon EC2 instances for parallel upload
Optimize Data File Upload to RDS
• Use the largest Amazon RDS DB instance possible during the import process
• Avoid using the Amazon RDS DB instance for any other load during this time
• Provision enough storage in the Amazon RDS DB instance for the uploaded files and imported data
Periodic Upload
• Oracle Data Pump network mode
• Oracle materialized views (see the sketch below)
• Custom triggers
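To illustrate the materialized-view option (every name below is hypothetical, and a fast refresh additionally requires a materialized view log on the source table):

$ sqlplus orauser/orapwd@myrdshost <<'EOF'
-- link from the RDS instance back to the source database
CREATE DATABASE LINK src_link
  CONNECT TO demoreinv IDENTIFIED BY demo
  USING '//source.db.server:1521/ORCL';

-- a fast-refresh MV pulls only changed rows on each refresh
CREATE MATERIALIZED VIEW customer_mv
  BUILD IMMEDIATE REFRESH FAST
  AS SELECT * FROM customer@src_link;

-- run periodically, e.g., from a DBMS_SCHEDULER job
EXEC DBMS_MVIEW.REFRESH('CUSTOMER_MV');
EOF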
For Small Datasets, One-Time Load
• Oracle Import/Export
• Oracle Data Pump network mode
• Oracle SQL*Loader
• Oracle materialized views
Data Migration Approaches for MySQL
Importing from a MySQL DB Instance
[Architecture diagram: the application database is dumped with mysqldump to a staging area, shipped via scp or Tsunami UDP to a staging server in the AWS Region, loaded into Amazon RDS, and kept in sync via replication.]
Check the Size of the Master Database
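This slide is a screenshot in the deck; a common way to get the number (standard information_schema usage, not taken from the slide) is:

$ mysql -h localhost -u root -p -e "
    SELECT table_schema,
           ROUND(SUM(data_length + index_length)/1024/1024/1024, 2) AS size_gb
    FROM information_schema.tables
    GROUP BY table_schema;"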
Create the Backup File
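Also a screenshot; a plausible equivalent, given that later slides load per-table, comma-delimited files from /reinvent/tables, is mysqldump in --tab mode (database name and paths assumed from the surrounding slides):

# writes one .sql (schema) and one comma-delimited .txt (data) file per table
$ mysqldump -h localhost -u root -p --single-transaction \
    --tab=/reinvent/tables --fields-terminated-by=, mytest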
Create a DB Instance for MySQL and EC2
Create the DB instance for MySQL using the AWS Management Console or CLI:
PROMPT> rds-create-db-instance mydbinstance -s 1024 -c db.m3.2xlarge -e MySQL -u <masterawsuser> -p <secretpassword> --backup-retention-period 3

Create the Amazon EC2 staging server using the AWS Management Console or CLI:
$ aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type m3.2xlarge --key-name MyKeyPair --security-groups MySecurityGroup

Create the replication user on the master:
mysql> GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repluser'@'<RDS Endpoint>' IDENTIFIED BY '<password>';
Update /etc/my.cnf on the Master Server
Enable the MySQL binlog. Binary logging records every change made on the master; the slave replays this log to replicate the data.

[mysqld]
server-id = 1
binlog-do-db = mytest
relay-log = /var/lib/mysql/mysql-relay-bin
relay-log-index = /var/lib/mysql/mysql-relay-bin.index
log-error = /var/lib/mysql/mysql.err
master-info-file = /var/lib/mysql/mysql-master.info
relay-log-info-file = /var/lib/mysql/mysql-relay-log.info
log-bin = /var/lib/mysql/mysql-bin
Configure the Master Database
Restart the master database after /etc/my.cnf is updated:
$ sudo /etc/init.d/mysqld restart

Record the “File” and “Position” values:
$ mysql -h localhost -u root -p
mysql> show master status\G
*************************** 1. row ***************************
            File: mysql-bin.000023
        Position: 107
    Binlog_Do_DB: mytest
Binlog_Ignore_DB:
1 row in set (0.00 sec)
Upload Files to Amazon EC2 Using UDP
• Tar and compress the MySQL dump files in preparation for shipping them to the Amazon EC2 staging server (see the command sketch below)
• Update the Amazon EC2 security group to allow a UDP connection from the server where the dump file is created to the new MySQL client server
• On the Amazon EC2 staging instance, untar the .tgz file
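A sketch of those three steps with illustrative file names, following the Tsunami usage shown earlier in the deck:

# on the master: tar and compress the dump files, then serve them
$ tar -czf mysqldump.tgz /reinvent/tables
$ tsunamid mysqldump.tgz

# on the EC2 staging instance (UDP port 46224 open): pull and unpack
$ tsunami
tsunami> connect master.db.server
tsunami> get mysqldump.tgz
$ tar -xzf mysqldump.tgz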
Configure the Amazon RDS Database
Create the database:
mysql> create database bench;

Import the data that you previously exported from the master database:
mysql> load data local infile '/reinvent/tables/customer_address.txt' into table customer_address fields terminated by ',';
mysql> load data local infile '/reinvent/tables/customer.txt' into table customer fields terminated by ',';

Configure the RDS DB instance for MySQL as a slave and start replication:
mysql> call mysql.rds_set_external_master('<master server>', 3306, '<replication user>', '<password>', 'mysql-bin.000013', 107, 0);
mysql> call mysql.rds_start_replication;
Amazon RDS for MySQL replication status
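The status slide is a screenshot; the underlying check is the standard one, run against the RDS endpoint (hostname and user are placeholders):

$ mysql -h <RDS Endpoint> -u masterawsuser -p -e "SHOW SLAVE STATUS\G" \
    | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"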
Make the Amazon RDS DB Instance the Master
Switch over to the RDS DB instance:
– Stop the service/application that is pointing at the master database
– Once all changes have been applied to the new RDS database, stop replication with call mysql.rds_stop_replication
– Point the service/application at the new RDS database
– Once the migration is complete, run call mysql.rds_reset_external_master
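For reference, the two switchover calls as they would be typed against the RDS endpoint:

mysql> call mysql.rds_stop_replication;
mysql> call mysql.rds_reset_external_master;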
Please give us your feedback on this presentation (DAT308).
As a thank you, we will select prize winners daily from completed surveys!
References
• RDS Home Page
• RDS FAQs
• RDS Webinars
• RDS Best Practices
• Data Import Guides
  – MySQL
  – Oracle
  – SQL Server
Session Abstract

Migrating data from the existing environments to AWS is a key part of the overall migration to Amazon RDS for most customers. Moving data into Amazon RDS from existing production systems in a reliable, synchronized manner with minimum downtime requires careful planning and the use of appropriate tools and technologies. Because each migration scenario is different, in terms of source and target systems, tools, and data sizes, you need to customize your data migration strategy to achieve the best outcome. In this session, we do a deep dive into various methods, tools, and technologies that you can put to use for a successful and timely data migration to Amazon RDS.