Percona XtraDB Cluster Internals


http://www.ossez.com/forum.php?mod=viewthread&tid=26859&fromuid=426
(Source: OSSEZ)


  1. Percona XtraDB Cluster in a Nutshell
     Frédéric Descamps, Seppo Jaakola
     Percona Live UK (London) 2012
  2. Who are we?
     ● Frédéric Descamps (@lefred), Percona Consultant, http://about.me/lefred, devops believer, managing MySQL since 3.23 (as far as I remember)
     ● Seppo Jaakola (@codership), founder of Codership
  3. Agenda
     ● What is Percona XtraDB Cluster (in a few words)
     ● Hands on!
  4. What is PXC?
     (diagram)
  5. To remember
     Important URLs:
     ● http://www.codership.com/wiki/doku.php?id=galera_parameters
     ● http://www.codership.com/wiki/doku.php?id=mysql_options_0.8
  6. Present
     ● Currently we have one node (production) running Percona Server
  7. Future
     ● We want to have a 3-node PXC cluster
     ● We want a smooth migration with minimum downtime
  8. The Plan
     ● Configure PXC on node2 and node3
     ● Take a backup of node1
     ● Restore the backup on node2
     ● Play with the 2-node cluster
     ● Set up the current production server as the 3rd node
  9. Connect to your servers
     ● Test the connection (ssh) to all your servers (node1, node2 and node3)
       login: root
       password: vagrant
       ssh -p 2221 root@127.0.0.1 (node1)
       ssh -p 2222 root@127.0.0.1 (node2)
       ssh -p 2223 root@127.0.0.1 (node3)
  10. The production
      ● We have a script that simulates our production load:
        while true; do
            pluk.py
            sleep 5
        done
  11. The production (continued)
      Exercise: run the script (pluk.py) once on node1
  12. Install PXC
      ● On node2 and node3, install Percona-XtraDB-Cluster-Server
      ● You should use yum (or apt)
      ● We will use rpm as the files are already downloaded in /usr/local/rpms
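A minimal sketch of this installation step, assuming the packages are already present in /usr/local/rpms as stated above; exact package names vary between PXC releases:

      # on node2 and node3, as root; with repository access yum resolves dependencies:
      yum install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client percona-xtrabackup

      # in this lab the RPMs are already downloaded locally:
      rpm -ivh /usr/local/rpms/*.rpm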
  13. Configure PXC
      [mysql]
      prompt="node2 mysql> "
      [mysqld]
      datadir=/var/lib/mysql
      user=mysql
      log_error=node2-error.log
      binlog_format=ROW
      innodb_log_file_size=64M
      innodb_locks_unsafe_for_binlog=1
      innodb_autoinc_lock_mode=2
  14. Configure PXC (2)
      wsrep_cluster_name=pluk2k12
      wsrep_cluster_address=gcomm://192.168.70.1,192.168.70.2,192.168.70.3
      wsrep_node_name=node2
      wsrep_node_address=192.168.70.2
      wsrep_provider=/usr/lib64/libgalera_smm.so
      wsrep_sst_method=rsync
      wsrep_slave_threads=4
  15. To remember
      ● Disable SELinux
      ● wsrep_cluster_address now supports multiple entries; wsrep_urls in [mysqld_safe] is deprecated
      ● The SST method is defined in my.cnf
      ● When wsrep_node_address is used, we can omit wsrep_sst_receive_address, wsrep_node_incoming_address and ist.recv_addr
  16. Let's have a look...
      ● Check the MySQL error log: what do we see?
      ● Check variables and status related to PXC:
        ○ SHOW GLOBAL VARIABLES LIKE 'wsrep%';
        ○ SHOW GLOBAL STATUS LIKE 'wsrep%';
      ● Play with the cluster (follow the instructor)
  17. To remember
      ● wsrep = 'Write Set Replicator'
      ● Settings are available with SHOW GLOBAL VARIABLES LIKE 'wsrep%';
      ● Status counters are available with SHOW GLOBAL STATUS LIKE 'wsrep%';
      ● Important status variables for checking cluster health:
        ○ wsrep_local_state_comment
        ○ wsrep_cluster_size
        ○ wsrep_cluster_status
        ○ wsrep_connected
        ○ wsrep_ready
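A quick way to run those checks from the shell (a sketch; credentials are omitted and assumed to be handled by the local client configuration):

      # cluster health at a glance, on any node
      mysql -e "SHOW GLOBAL STATUS WHERE Variable_name IN
                ('wsrep_local_state_comment','wsrep_cluster_size',
                 'wsrep_cluster_status','wsrep_connected','wsrep_ready');"
      # a healthy node typically reports Synced / Primary / ON / ON,
      # with wsrep_cluster_size equal to the number of nodes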
  18. What about State Snapshot Transfer (SST)?
      ● SST = full copy of the cluster data to a specific node (from DONOR to JOINER)
      ● wsrep_sst_donor
      ● Multiple SST methods:
        ○ skip
        ○ rsync
        ○ mysqldump
        ○ xtrabackup
  19. What about State Snapshot Transfer (SST)? (continued)
      Exercise: test all SST methods
  20. What about State Snapshot Transfer (SST)? (continued)
      No problem with mysqldump?
  21. To remember
      ● SST methods are not all the same
      ● You can specify a donor per node
      ● xtrabackup doesn't freeze the donor for the complete SST period
      ● xtrabackup requires authentication parameters
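A minimal sketch for switching the SST method to xtrabackup; the sstuser account and password below are placeholders, not part of the original lab:

      # append under the [mysqld] section of /etc/my.cnf
      printf '%s\n' 'wsrep_sst_method=xtrabackup' 'wsrep_sst_auth=sstuser:s3cret' >> /etc/my.cnf

      # on the donor, grants along these lines are usually enough for xtrabackup:
      mysql -e "GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.*
                TO 'sstuser'@'localhost' IDENTIFIED BY 's3cret';"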
  22. Loadbalancer
      ● Let's configure a loadbalancer to connect to our cluster:
        ○ HAProxy (needs clustercheck)
        ○ glb
      Exercise: install and configure glb on node2
  23. Loadbalancer - glb
      glbd --daemon --threads 6 --control 127.0.0.1:4444 192.168.70.2:3307 192.168.70.2:3306 192.168.70.3:3306
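Once glbd is listening on 192.168.70.2:3307, a simple sanity check is to open several connections through it and see which backend answers (a sketch; credentials omitted, and the distribution depends on the balancing policy):

      # each connection may be routed to a different cluster node
      for i in 1 2 3 4; do
          mysql -h 192.168.70.2 -P 3307 -e "SELECT @@wsrep_node_name;"
      done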
  24. Quorum and split brain
      ● PXC checks for quorum to avoid split-brain situations
  25. Quorum and split brain (continued)
      Exercise: stop the communication between node2 and node3
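One common way to simulate the communication break for this exercise is a firewall rule; a sketch with iptables (run as root on node2, and remove the rule when done):

      # on node2: drop all traffic coming from node3
      iptables -A INPUT -s 192.168.70.3 -j DROP

      # watch wsrep_cluster_status / wsrep_cluster_size on both nodes, then restore:
      iptables -D INPUT -s 192.168.70.3 -j DROP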
  26. Quorum and split brain
      ● BAD solution :-(
        wsrep_provider_options = "pc.ignore_quorum = true"
      ● ...and the GOOD solution: next slide!
  27. Quorum and split brain (2)
      ● Galera Arbitration Daemon (garbd)
  28. Quorum and split brain (2) (continued)
      Exercise: run garbd on node1, then test the following:
      ● Stop mysql on node3: what's happening?
      ● Stop garbd on node1: what's happening?
      ● Start garbd on node1 and mysql on node3, block communication between node2 and node3: what's happening this time?
      ● Block communication between node1 and node3: what's happening?
  29. To remember
      ● 3 nodes is the recommended minimum!
      ● Odd numbers of nodes are always better
      ● You can use a "fake" node (garbd), even to replicate through it!
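A sketch of running the arbitrator for the exercise on slide 28; the group name and addresses come from the earlier configuration, and option names may differ slightly between garbd versions:

      # garbd joins the cluster as a voting member but stores no data
      garbd --group=pluk2k12 \
            --address="gcomm://192.168.70.2:4567,192.168.70.3:4567" \
            --daemon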
  30. Incremental State Transfer (IST)
      ● Used to avoid a full SST (using gcache)
      ● gcache.size can be specified using wsrep_provider_options
      ● Now works even after a crash if the state is consistent
  31. Incremental State Transfer (IST) (continued)
      Exercise: stop mysql on node3, run pluk.py on node2, restart node3
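The gcache size is tuned through wsrep_provider_options; a minimal sketch (512M is an arbitrary example value, and the line belongs under [mysqld]):

      # a larger gcache keeps more write-sets on the potential donor,
      # making IST more likely than a full SST when a node comes back
      printf '%s\n' 'wsrep_provider_options="gcache.size=512M"' >> /etc/my.cnf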
  32. Production Migration
      ● Backup the production server and restore the production data on node2
      ● Run pluk.py on node1
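One possible way to do the backup/restore step with Percona XtraBackup (innobackupex); paths and the timestamped directory name are placeholders, and credentials are omitted:

      # on node1: take a consistent backup of the running production server
      innobackupex /root/backup/
      innobackupex --apply-log /root/backup/TIMESTAMP_DIR

      # copy the prepared backup to node2 (scp, rsync, ...), then on node2:
      innobackupex --copy-back /root/backup/TIMESTAMP_DIR
      chown -R mysql:mysql /var/lib/mysql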
  33. Production Migration (2)
      ● Start node3
      ● Run pluk.py on node1
      ● Start the async replication from node1 to node2
      ● What about node3?
      ● Run pluk.py on node1
  34. Production Migration (2)
      (diagram)
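A sketch of the asynchronous replication step, assuming a replication account already exists on node1 and the binlog coordinates were noted when the backup was taken; the user, password, log file and position below are placeholders:

      # on node2: make it a classic async slave of the production server (node1)
      mysql -e "CHANGE MASTER TO
                  MASTER_HOST='192.168.70.1',
                  MASTER_USER='repl',
                  MASTER_PASSWORD='replpass',
                  MASTER_LOG_FILE='mysql-bin.000001',
                  MASTER_LOG_POS=107;
                START SLAVE;"

      # node3 receives the same changes through Galera replication from node2,
      # so only one cluster node needs to act as the async slave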
  35. Production Migration (3)
      ● Configure pluk.py to connect to the loadbalancer
      ● Run pluk.py
      ● Scratch the data on node1 and install PXC
      ● Configure PXC on node1
      ● Start the cluster on node1 (SST should be done with node3)
      ● Run pluk.py and check data on all nodes
  36. Production Migration (3)
      (diagram)
  37. Production Migration (4)
      (diagram)
  38. Online Schema Changes
      ● Total Order Isolation (TOI)
      ● Rolling Schema Upgrade (RSU)
        (the method is selected with wsrep_OSU_method)
      ● pt-online-schema-change
  39. Online Schema Changes (2)
      ● Create a large table to modify:
        CREATE DATABASE pluk;
        USE pluk;
        CREATE TABLE `actor` (
          `actor_id` int unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
          `first_name` varchar(45) NOT NULL,
          `last_name` varchar(45) NOT NULL,
          `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
        ) ENGINE=InnoDB;
        INSERT INTO actor (first_name, last_name) SELECT REPEAT('a', 45), REPEAT('b', 45) FROM dual;
        INSERT INTO actor (first_name, last_name) SELECT REPEAT('a', 45), REPEAT('b', 45) FROM actor;
      Exercise: repeat the last INSERT until it takes 10+ seconds
  40. Online Schema Changes (3)
      ● Use all three methods while running pluk.py against your new database, adding a new column each time
      ● Check the pluk.py output
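A sketch of trying the three methods; the added column names and the pt-online-schema-change invocation are illustrative, not part of the original lab:

      # TOI (the default): the ALTER is executed on all nodes at the same replication point
      mysql -e "SET SESSION wsrep_OSU_method='TOI'; ALTER TABLE pluk.actor ADD COLUMN c1 INT;"

      # RSU: the node desyncs, applies the ALTER locally, then rejoins; repeat on every node
      mysql -e "SET SESSION wsrep_OSU_method='RSU'; ALTER TABLE pluk.actor ADD COLUMN c2 INT;"

      # pt-online-schema-change: rebuilds the table in chunks using triggers
      pt-online-schema-change --alter "ADD COLUMN c3 INT" D=pluk,t=actor --execute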
  41. Thank you!
      Visit: http://www.percona.com/live/london-2012/
  42. Annual Percona Live MySQL Conference and Expo
      The Hyatt Regency Hotel, Santa Clara, CA
      April 22nd-25th, 2013
      Registration and Call for Papers are open!
      Visit: http://www.percona.com/live/mysql-conference-2013/
