Percona XtraDB Cluster
Installation and setup
(basics)
Peter Boros
Consultant
www.percona.com
Agenda
● Installing the first node of the cluster
● Connecting subsequent nodes to the cluster
● Installing HaProxy on the application server
● Testing with a real-world application: sysbench
● Breaking and fixing the cluster
Overview
● The goal of this talk is to show Percona XtraDB Cluster basics with a hands-on approach.
● We will use freshly installed CentOS 6 machines. Those are vanilla installations.
● We will cover load balancing using HaProxy.
Percona XtraDB Cluster at a glance
● All nodes are equal
● All nodes have all the data
● Replication is (virtually) synchronous
● Completely different from asynchronous MySQL replication
● At least 3 nodes (or 2 + arbitrator)
Packages on all database nodes
● Add Percona's yum repository
# rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
● Install PXC packages
# yum -y install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client Percona-Server-shared-compat percona-xtrabackup
Packages installed
Installing:
  Percona-Server-shared-compat (replacing mysql-libs.x86_64 5.1.61-4.el6)
  Percona-XtraDB-Cluster-client
  Percona-XtraDB-Cluster-server
  percona-xtrabackup
Installing for dependencies:
  Percona-XtraDB-Cluster-galera, libaio, nc, perl, perl-Module-Pluggable, perl-Pod-Escapes, perl-Pod-Simple, perl-libs, perl-version, rsync
Configuring the nodes
● wsrep_cluster_address=gcomm://
  ● Initializes a new cluster; new nodes can connect to this one
● wsrep_cluster_address=gcomm://<IP addr>:4567
  ● Starts a new node, which will try to connect to the node specified
● wsrep_urls
  ● Option for the [mysqld_safe] section, not for [mysqld]; sets wsrep_cluster_address to a usable item on this list
  ● Example: wsrep_urls=gcomm://node1:4567,gcomm://node2:4567,gcomm://node3:4567
Configuring the first node
[mysqld]
server_id=1
binlog_format=ROW
log_bin=mysql-bin
wsrep_cluster_address=gcomm://
wsrep_provider=/usr/lib64/libgalera_smm.so
datadir=/var/lib/mysql
wsrep_slave_threads=4
wsrep_cluster_name=pxctest
wsrep_sst_method=xtrabackup
wsrep_node_name=pxc1
log_slave_updates
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2

We are starting a new cluster with node 'pxc1' as primary.
Configuring subsequent nodes
[mysqld]
server_id=1
binlog_format=ROW
log_bin=mysql-bin
wsrep_cluster_address=gcomm://192.168.56.41
wsrep_provider=/usr/lib64/libgalera_smm.so
datadir=/var/lib/mysql
wsrep_slave_threads=4
wsrep_cluster_name=pxctest
wsrep_sst_method=xtrabackup
wsrep_node_name=pxc2
log_slave_updates
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2

The other nodes join the cluster through 'pxc1'.
Additional configuration for demo
● iptables disabled
  ● service iptables stop
  ● chkconfig --del iptables
● SELinux disabled in /etc/selinux/config
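Disabling iptables is fine for a demo; in production you would instead open the ports PXC needs between nodes. A minimal sketch (these are the standard PXC ports; the block only prints the rules rather than applying them, so review before use):

```shell
#!/bin/sh
# Ports a PXC node needs reachable from the other nodes:
#   3306 - MySQL clients, 4567 - Galera group communication,
#   4568 - IST, 4444 - SST
# Prints one ACCEPT rule per port; apply manually after review.
for port in 3306 4567 4568 4444; do
    echo "iptables -A INPUT -p tcp --dport $port -j ACCEPT"
done
```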
Demo: building the cluster, destroying it, and building it again

What we saw...
State transfer
● SST (State Snapshot Transfer)
  ● Copies the whole data set
  ● Different methods: xtrabackup, rsync, etc.
● IST (Incremental State Transfer)
  ● Transactions incrementally replayed from gcache
● You can use a manual backup created with xtrabackup using the --galera-info option of innobackupex
http://www.mysqlperformanceblog.com/2012/08/02/avoiding-sst-when-adding-new-percona-xtradb-cluster-node/
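Whether a rejoining node gets the cheap IST or a full SST depends on whether the write-sets it missed still fit in the donor's gcache. A my.cnf fragment enlarging the cache (the option name is real Galera configuration; the 1G value is only an illustrative example, not a recommendation):

```
[mysqld]
wsrep_provider_options="gcache.size=1G"
```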
Split brain
● When only 1 node was up, the data was not usable
● Using 3 nodes guarantees that you can lose 1 node
● A node has to be able to access the majority of cluster nodes to serve data
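You can see whether a node considers itself part of the majority by checking its wsrep status. A sketch; on a live node you would run `mysql -N -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status'"` (a real status variable), while here its output is simulated to show the check:

```shell
#!/bin/sh
# Simulated output of: mysql -N -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status'"
# A node in the majority component reports the value 'Primary'.
status=$(printf 'wsrep_cluster_status\tPrimary\n' | awk '{print $2}')
if [ "$status" = "Primary" ]; then
    echo "node is in the primary component"
else
    echo "node lost quorum"
fi
```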
What pxc1 saw
This does not necessarily mean pxc2 and pxc3 are dead...

Possibilities
Continuing operation in split brain mode
● The users (application servers) who can access pxc1 will write to pxc1
● The users (application servers) who can access pxc2 and pxc3 will write to that cluster
● This can be prevented by shutting down a node if it is not part of the majority group
● garbd: Galera arbitrator daemon; participates in voting but doesn't store data
Load balancing
● Some applications can use driver-level load balancing by connecting to all nodes (JDBC)
● For the rest, an external solution is necessary
  ● LVS
  ● HaProxy (we will cover this)
● Any load balancing software is usable if it can balance at least TCP connections
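For the JDBC case, MySQL Connector/J's built-in load balancing is enabled through the connection URL. A hypothetical example (the `jdbc:mysql:loadbalance://` scheme is real Connector/J syntax; the node addresses match the demo, the database name is made up):

```
jdbc:mysql:loadbalance://192.168.56.41,192.168.56.42,192.168.56.43/sbtest
```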
HaProxy configuration
backend pxc-back
    mode tcp
    balance leastconn
    option httpchk
    server pxc1 192.168.56.41:3306 check port 9200 inter 12000 rise 3 fall 3
    server pxc2 192.168.56.42:3306 check port 9200 inter 12000 rise 3 fall 3
    server pxc3 192.168.56.43:3306 check port 9200 inter 12000 rise 3 fall 3

backend pxc-onenode-back
    mode tcp
    balance leastconn
    option httpchk
    server pxc1 192.168.56.41:3306 check port 9200 inter 12000 rise 3 fall 3
    server pxc2 192.168.56.42:3306 check port 9200 inter 12000 rise 3 fall 3 backup
    server pxc3 192.168.56.43:3306 check port 9200 inter 12000 rise 3 fall 3 backup
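The backends above still need frontends to bind them to listening ports. A sketch of the missing half of the configuration (the frontend names and the 3306/3307 listen ports are assumptions, not from the slides; the slides' backend names are reused):

```
frontend pxc-front
    bind *:3306
    mode tcp
    default_backend pxc-back

frontend pxc-onenode-front
    bind *:3307
    mode tcp
    default_backend pxc-onenode-back
```

With this split, applications that tolerate writes on any node connect to port 3306 (spread over all nodes), while write-hotspot-sensitive applications connect to 3307, which always targets pxc1 unless it fails.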
Application server node prepared for demo
● CentOS 6 base installation
● EPEL repo added
● HaProxy installed from the EPEL repo
● sysbench 0.5 package made by Frederic Descamps
Load balancing demo

Q&A