Introduction to
Percona XtraDB Cluster
and HAProxy
2014.04.12
Bo-Yi Wu
appleboy

About me
Github: @appleboy
Twitter: @appleboy
Blog: http://blog.wu-boy.com

Agenda
• About Percona XtraDB Cluster
• Install the first node of the cluster
• Install subsequent nodes to the cluster
• Install HAProxy on the application server
• Testing with a real-world application

Why use
Percona XtraDB Cluster?

MySQL Replication
vs
Percona XtraDB Cluster

Async vs Sync

MySQL Replication: Async
1 to 10+ sec delay

Sync
Event → Event confirm

Percona XtraDB Cluster
Free and Open Source

Percona XtraDB Cluster
Group Communication

Percona XtraDB Cluster
• Synchronous replication
• Multi-master replication
• Parallel applying on slaves
• Data consistency
• Automatic node provisioning

Synchronous replication

Virtually synchronous

Multi-master replication

Multi-master: MySQL
MySQL Replication: writes to a slave fail

Multi-master: XtraDB Cluster
XtraDB Cluster: writes succeed on any node

Parallel applying on slaves

Parallel apply: MySQL
Write: N threads
Apply: 1 thread

Parallel apply: XtraDB Cluster
Write: N threads
Apply: N threads

Data consistency

XtraDB Cluster data consistency
All nodes hold identical data (node == node == node)

Automatic node provisioning

New node: Join Cluster → Copy Data (over Group Communication)

How many nodes should I have?

3 nodes is the minimum recommended configuration
>= 3 nodes for quorum purposes

Network Failure
Split brain: 50% is not a quorum

Network Failure
XtraDB Cluster: data consistency

garbd
Galera Arbitrator Daemon

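With only two data nodes, garbd can cast the third quorum vote without storing any data. A minimal sketch (the addresses and the cluster name Percona match the configs shown later in this deck):

$ garbd --address gcomm://192.168.1.100:4567,192.168.1.101:4567 \
        --group Percona --daemon
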
Percona XtraDB Cluster
Limitations

Only supports InnoDB tables
MyISAM support is limited

Write performance?
Limited by the weakest node

Joining Process

New node: Join Cluster → Copy Data via SST (over Group Communication)
A 1 TB SST takes a long time

State Transfer
• Full data (SST)
– New node
– Node disconnected for a long time
• Incremental (IST)
– Node disconnected for a short time

Snapshot State Transfer
• Mysqldump
– Small databases
• Rsync
– Donor disconnected for the whole copy time
– Faster
• XtraBackup
– Donor disconnected for a short time
– Slower

Incremental State Transfer
• Node was in the cluster
– Disconnected for maintenance
– Node crashed

Install via
Percona's yum repository

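If the repository is not configured yet, Percona's percona-release package sets it up; the URL below is the one documented around this deck's era, so verify against Percona's current docs:

$ rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
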
$ yum -y install \
  Percona-XtraDB-Cluster-server \
  Percona-XtraDB-Cluster-client \
  Percona-Server-shared-compat \
  percona-xtrabackup

Configuring the nodes

• wsrep_cluster_address=gcomm://
– Initializes a new cluster for the first node
• wsrep_cluster_address=gcomm://<IP addr>,<IP addr>,<IP addr> (example below)
– Default port: 4567

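For example, a three-node cluster with the IPs used later in this deck (illustrative addresses) would use:

wsrep_cluster_address=gcomm://192.168.1.100,192.168.1.101,192.168.1.102
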
Don’t use wsrep_urls
wsrep_urls is deprecated since version 5.5.28

Configuring the first node

[mysqld]
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address="gcomm://"
wsrep_sst_auth=username:password
wsrep_provider_options="gcache.size=4G"
wsrep_cluster_name=Percona
wsrep_sst_method=xtrabackup
wsrep_node_name=db_01
wsrep_slave_threads=4
log_slave_updates
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2

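With wsrep_sst_method=xtrabackup, the donor needs a MySQL account matching wsrep_sst_auth; the grant below is the commonly documented minimum for XtraBackup SST (username/password are the placeholders from the config above), and PXC init scripts of this era bootstrap the first node as shown:

mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'username'@'localhost' IDENTIFIED BY 'password';
$ /etc/init.d/mysql bootstrap-pxc
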
Configuring subsequent nodes

[mysqld]
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address="gcomm://xxxx,xxxx"
wsrep_sst_auth=username:password
wsrep_provider_options="gcache.size=4G"
wsrep_cluster_name=Percona
wsrep_sst_method=xtrabackup
wsrep_node_name=db_02
wsrep_slave_threads=4
log_slave_updates
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2

(wsrep_node_name must be unique on each node)

Monitoring MySQL Status
show global status like 'wsrep%';

Cluster integrity
• wsrep_cluster_size
– Number of active nodes
• wsrep_cluster_conf_id
– Configuration version
• wsrep_cluster_status
– Should be “Primary”

Node Status
• wsrep_ready
– Should be “ON”
• wsrep_local_state_comment
– Status message
• wsrep_local_send_queue_avg
– Possible network bottleneck
• wsrep_flow_control_paused
– Replication lag

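A quick spot check of these variables on any node (standard wsrep status names):

$ mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_ready';"
$ mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused';"
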
Realtime Wsrep Status
https://github.com/jayjanssen/myq_gadgets

Realtime Wsrep Status
Percona / db_03 / Galera 2.8(r165)
Wsrep Cluster Node Queue Ops Bytes Flow Conflct PApply Commit
time P cnf # cmt sta Up Dn Up Dn Up Dn p_ms snt lcf bfa dst oooe oool wind
11:47:39 P 73 3 Sync T/T 0 0 5 356 30K 149K 0.0 0 0 0 125 0 0 0
11:47:40 P 73 3 Sync T/T 0 0 0 0 0 0 0.0 0 0 0 125 0 0 0
11:47:41 P 73 3 Sync T/T 0 0 0 0 0 0 0.0 0 0 0 125 0 0 0
11:47:42 P 73 3 Sync T/T 0 0 0 0 0 0 0.0 0 0 0 125 0 0 0
11:47:43 P 73 3 Sync T/T 0 0 0 0 0 0 0.0 0 0 0 125 0 0 0
11:47:44 P 73 3 Sync T/T 0 0 0 0 0 0 0.0 0 0 0 125 0 0 0
11:47:45 P 73 3 Sync T/T 0 0 0 3 0 1.1K 0.0 0 0 0 126 67 0 1
11:47:46 P 73 3 Sync T/T 0 0 0 2 0 994 0.0 0 0 0 126 0 0 0
$ ./myq_status -t 1 -h 127.0.0.1 wsrep

Application / Cluster

How Synchronous
Writes work

Source Node
Pessimistic locking: InnoDB transaction locking

Cluster replication
• Before the source returns the commit
– Certify the trx on all other nodes
• Nodes reject on locking conflicts
• Commit succeeds if there are no conflicts on any node

Client 1 → Node 1 (Tx Source): UPDATE t SET col = '12' WHERE id = '1'
Node 2: Accepted
Client 2 → Node 3: UPDATE t SET col = '12' WHERE id = '1' → Certify Fails

Does the Application Care?

Write to all nodes
Increased deadlock errors

How to avoid deadlock
on all nodes?

How to avoid deadlock
• Write to only one node
– All pessimistic locking happens on one node
• Different nodes can handle writes for different datasets
– Different databases, tables, rows, etc.

Application-to-cluster connections

Application to cluster
• For writes
– Best practice: single node
• For reads
– All nodes, load balanced
• glbd – Galera Load Balancer
• HAProxy

HAProxy Load Balancer
192.168.1.100 (Read/Write)   192.168.1.101 (Read)   192.168.1.102 (Read)

HAProxy Load balancing

Read and Write
on the same port

frontend pxc-front
  bind *:3307
  mode tcp
  default_backend pxc-back

backend pxc-back
  mode tcp
  balance leastconn
  option httpchk
  server db1 192.168.1.100:3306 check port 9200 inter 12000 rise 3 fall 3
  server db2 192.168.1.101:3306 check port 9200 inter 12000 rise 3 fall 3
  server db3 192.168.1.102:3306 check port 9200 inter 12000 rise 3 fall 3

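check port 9200 assumes an HTTP health-check responder on each DB node. PXC ships a clustercheck script that is commonly exposed on port 9200 via xinetd, roughly as below (paths and user are the usual defaults; verify on your system):

# /etc/xinetd.d/mysqlchk
service mysqlchk
{
    disable        = no
    flags          = REUSE
    socket_type    = stream
    port           = 9200
    wait           = no
    user           = nobody
    server         = /usr/bin/clustercheck
    log_on_failure += USERID
    only_from      = 0.0.0.0/0
    per_source     = UNLIMITED
}
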
Read and Write
on different ports

frontend pxc-onenode-front
  bind *:3308
  mode tcp
  default_backend pxc-onenode-back

backend pxc-onenode-back
  mode tcp
  balance leastconn
  option httpchk
  server db1 192.168.1.100:3306 check port 9200 inter 12000 rise 3 fall 3
  server db2 192.168.1.101:3306 check port 9200 inter 12000 rise 3 fall 3 backup
  server db3 192.168.1.102:3306 check port 9200 inter 12000 rise 3 fall 3 backup

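Because of the backup keyword, db2 and db3 receive traffic only if db1 fails, so port 3308 behaves as a single-writer endpoint while 3307 load balances reads. For example, from the application server (table t reuses the earlier certification example):

$ mysql -h 127.0.0.1 -P 3308 -e "UPDATE t SET col = '12' WHERE id = '1'"   # writes: one node
$ mysql -h 127.0.0.1 -P 3307 -e "SELECT * FROM t WHERE id = '1'"           # reads: all nodes
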
Application server
• CentOS 6 base installation
• EPEL repo added
• HAProxy installed from the EPEL repo
• Sysbench 0.5 package (example run below)

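A sysbench 0.5 OLTP run through the HAProxy ports might look like this sketch; the Lua script path, user, and password are illustrative (sysbench 0.5 ships its OLTP tests as Lua scripts):

$ sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
    --mysql-host=127.0.0.1 --mysql-port=3308 \
    --mysql-user=sbtest --mysql-password=sbpass prepare
$ sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
    --mysql-host=127.0.0.1 --mysql-port=3307 \
    --mysql-user=sbtest --mysql-password=sbpass \
    --num-threads=8 --max-time=60 --max-requests=0 run
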
Live Demo

Thank you
