Percona XtraDB Cluster
Kenny Gryp
@gryp

07 03 2017
Agenda
• Default asynchronous MySQL replication
• Percona XtraDB Cluster:
• Introduction / Features / Load Balancing
• Use Cases:
• High Availability / WAN Replication / Read Scaling
• Limitations
Percona
• Percona is the oldest and largest independent
MySQL Support, Consulting, Remote DBA,
Training, and Software Development company
with a global, 24x7 staff of over 100 serving more
than 2,000 customers in 50+ countries since 2006
• Our contributions to the MySQL community include:
• Percona Server for MySQL, Percona XtraDB Cluster
• Percona Server for MongoDB
• Percona XtraBackup: online backup
• Percona Toolkit
• Percona Monitoring & Management…
Agenda
• Default asynchronous MySQL Replication
• Percona XtraDB Cluster:
• Introduction / Features / Load Balancing
• Use Cases:
• High Availability / WAN Replication / Read Scaling
• Limitations
MySQL Replication
If your HA is based on MySQL Replication, you may be playing a dangerous game!
Traditional Replication Approach
Server 1 (“master”) → replication stream → Server 2 (“slave”)
MySQL Replication
• Common Topologies:
• Master-Master (Only 1 active master)
• 1 or more layers of replication
• Slaves can be used for reads:
• asynchronous, stale data is the rule
• data loss possible (*semi-sync)
• non-trivial:
• external monitoring
• scripts for failover
• add node == restore backup
• much better in MySQL 5.6: GTIDs
Agenda
• Default asynchronous MySQL Replication
• Percona XtraDB Cluster:
• Introduction / Features / Load Balancing
• Use Cases:
• High Availability / WAN Replication / Read Scaling
• Limitations
Data Centric
The same DATA is shared by all servers: Server 1, Server 2, Server 3 … Server N
Percona XtraDB Cluster
• All nodes have a full copy of the data
• Every node is equal
• No central management, no SPOF
Understanding Galera
The cluster can be seen as a meeting!
PXC (Galera cluster) is a meeting
bfb912e5-f560-11e2-0800-1eefab05e57d
(the same meeting keeps the same UUID as nodes join and leave)
New meeting!
4fd8824d-ad5b-11e2-0800-73d6929be5cf
What is Percona XtraDB Cluster?
• Percona Server
• + WSREP patches
• + Galera library
• + Utilities (init, SST, and cluster check scripts)
• + Percona Changes to Galera
Percona Server
• Percona Server is a free, open source MySQL alternative which offers breakthrough performance, scalability, features, and instrumentation. Self-tuning algorithms and support for extremely high-performance hardware make it the clear choice for organisations that demand excellent performance and reliability from their MySQL database server.
WSREP and Galera
• The WSREP API is a project to develop a generic replication plugin interface for databases (WriteSet REPlication)
• Galera is a wsrep provider that implements multi-master, synchronous replication
• Other 'similar' implementations:
• MariaDB Galera Cluster
• Galera Cluster for MySQL
What is Percona XtraDB Cluster?
• Full compatibility with existing systems
• Minimal effort to migrate
• Minimal effort to return back to MySQL
Features
• 'Synchronous' Replication
• Multi-Master
• Parallel Applying
• Quorum Based
• Certification/Optimistic Locking
• Automatic Node Provisioning
• PXC Specific Features
(Virtual) Synchronous Replication
• Writesets (transactions) are replicated to all available nodes on commit (and queued on each)
• Writesets are individually “certified” on every node, deterministically: a writeset is either committed on all nodes or on none (no 2PC)
• Queued writesets are applied on those nodes independently and asynchronously
• Flow Control avoids too much ‘lag’
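Apply lag and flow control are visible in the wsrep status counters; a minimal monitoring sketch:

    -- Writesets queued locally, waiting to be applied:
    SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue';
    -- Fraction of time replication was paused by flow control
    -- since the counters were last reset:
    SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused';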
(Virtual) Synchronous Replication
• Reads can return old data
• Flow Control (by default 16 trx) avoids too much lag
• wsrep_sync_wait=1 can be enabled to ensure fully synchronous reads
• Latency: writes are fast; communication with the other nodes happens only at COMMIT
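When an application needs a read that is guaranteed not to be stale, wsrep_sync_wait can be enabled per session; a sketch (the accounts table and query are hypothetical):

    -- Wait until all writesets replicated so far are applied locally
    -- before executing the next read statement:
    SET SESSION wsrep_sync_wait = 1;
    SELECT balance FROM accounts WHERE id = 42;  -- causal read, at a latency cost
    SET SESSION wsrep_sync_wait = 0;             -- back to fast, possibly stale reads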
Stale Reads
Latency
Multi-Master Replication
• You can write to any node in your cluster*
• Writes are ordered inside the cluster
Parallel Replication
• Standard MySQL: the application writes with N threads, but the slave applies with 1 thread (MySQL 5.6: max 1 thread per schema; 5.7: interval based)
• PXC / Galera: the application writes with N threads, and writesets are applied with M threads (wsrep_slave_threads)
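A hedged example of tuning the applier threads (the value 8 is purely illustrative; it should be sized against cores and workload):

    SHOW GLOBAL VARIABLES LIKE 'wsrep_slave_threads';
    SET GLOBAL wsrep_slave_threads = 8;  -- dynamic; extra appliers are added on the fly
    -- Rough indicator of how much apply parallelism the workload allows:
    SHOW GLOBAL STATUS LIKE 'wsrep_cert_deps_distance';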
Quorum Based
• If a node does not see more than 50% of the total number of nodes, reads/writes are not accepted
• Split-brain is prevented
• This requires at least 3 nodes to be effective
• A node can be an arbitrator (garbd): it joins the cluster communication but does not run MySQL
• Can be disabled (but be warned!)
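Whether a node still has quorum can be checked from SQL; a minimal sketch:

    -- 'Primary' = this node is in the component that has quorum;
    -- 'non-Primary' = it refuses reads/writes until quorum returns:
    SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';
    SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';  -- nodes this node currently sees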
Quorum Based - Example Scenarios
• Loss of connectivity: during a network problem, the side without quorum does not accept reads & writes
• 4 nodes across 2 DCs with the default quorum configuration: a network problem between the DCs splits the cluster 2/2, and neither half has quorum (4 nodes, 0 nodes have quorum)
• With a garbd arbitrator (no data) as an extra member, one side can still form a majority during a network problem
Certification
Optimistic Locking
• Communication to the other nodes of the cluster only happens during COMMIT; this affects locking behavior
• Optimistic locking is used:
• InnoDB locking happens locally on the node
• During COMMIT/certification, conflicts with the other nodes surface as deadlocks
Optimistic Locking
• Some characteristics:
• even COMMIT and SELECTs can fail with a deadlock error
• Might require application changes: not all applications handle this properly
Traditional InnoDB Locking (single system)
• Transaction 1: BEGIN; UPDATE t … WHERE id=14; … COMMIT
• Transaction 2: BEGIN; UPDATE t … WHERE id=14 → blocks, waiting on the COMMIT of transaction 1
InnoDB Locking with PXC
• Transaction 1 (system 1): BEGIN; UPDATE t … WHERE id=14; … COMMIT
• Transaction 2 (system 2): BEGIN; UPDATE t … WHERE id=14; … the next DML, COMMIT, or SELECT gets an ERROR due to the row conflict
InnoDB Locking with PXC
• Transaction 1 (system 1): BEGIN; UPDATE t … WHERE id=14; … COMMIT
• Transaction 2 (system 2): BEGIN; UPDATE t … WHERE id=14; … COMMIT fails with an ERROR due to the row conflict:
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
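Applications are expected to catch ERROR 1213 and retry the transaction; how often these certification conflicts happen is visible in the wsrep counters (a minimal sketch):

    -- Local transactions aborted because a replicated writeset won the conflict:
    SHOW GLOBAL STATUS LIKE 'wsrep_local_bf_aborts';
    -- Local writesets that failed certification on the cluster:
    SHOW GLOBAL STATUS LIKE 'wsrep_local_cert_failures';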
Automatic Node Provisioning
• When a node joins the cluster:
• the data is automatically copied
• when finished, the new node is automatically ready and accepts connections
• 2 different types of joining:
– SST (State Snapshot Transfer): full copy of the data
– IST (Incremental State Transfer): send only the missing writesets (if available)
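The SST method and a node's current state are visible from SQL (a minimal sketch; xtrabackup-v2 is the usual default in PXC 5.7):

    SHOW GLOBAL VARIABLES LIKE 'wsrep_sst_method';        -- e.g. xtrabackup-v2
    SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';  -- Joining / Joined / Donor / Synced
    -- IST is only possible while the donor's gcache (gcache.size in
    -- wsrep_provider_options) still holds the writesets the joiner is missing.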
State Transfer Summary
• SST (full data): new node, or a node disconnected for a long time
• IST (incremental): node disconnected for a short time
Snapshot State Transfer
• mysqldump: small databases
• rsync: faster, but the donor is disconnected for the duration of the copy
• XtraBackup: slower than rsync, but the donor stays available
Incremental State Transfer
• The node was already in the cluster and came back: disconnected for maintenance, or crashed
Automatic Node Provisioning
• A new node joins while the other nodes keep taking writes; its data is copied via SST or IST
• When ready, the new node starts accepting writes too
PXC Specific Features
• PXC Strict mode
• Extended PFS support
• ProxySQL integration
• SST/XtraBackup Changes
• Bug-Fixes
PXC Strict Mode
Prevent experimental/unsupported features:
• Only Allow InnoDB Operations
• Prevent Changing binlog_format!=ROW
• Require Primary Key on tables
• Disable Unsupported Features:
• GET_LOCK, LOCK TABLES, CTAS
• FLUSH TABLES <tables> WITH READ LOCK
• tx_isolation=SERIALIZABLE
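Strict mode is controlled by the pxc_strict_mode variable (ENFORCING is the default in PXC 5.7); a sketch:

    SHOW GLOBAL VARIABLES LIKE 'pxc_strict_mode';
    -- PERMISSIVE logs violations instead of rejecting them,
    -- useful while validating a migration:
    SET GLOBAL pxc_strict_mode = 'PERMISSIVE';
    SET GLOBAL pxc_strict_mode = 'ENFORCING';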
PERFORMANCE_SCHEMA Support
• Improve troubleshooting/debugging using Performance Schema:
• Threads
• Files
• Stages
• Mutexes
• https://www.percona.com/live/plam16/sessions/exploring-percona-xtradb-cluster-using-performance-schema
• https://www.percona.com/doc/percona-xtradb-cluster/5.7/manual/performance_schema_instrumentation.html
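A hedged example of what this instrumentation enables (the LIKE patterns are illustrative; exact instrument names vary by version):

    -- List Galera-related threads registered in Performance Schema:
    SELECT name, type, processlist_state
      FROM performance_schema.threads
     WHERE name LIKE '%wsrep%' OR name LIKE '%galera%';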
ProxySQL
• Layer 7 MySQL Load Balancer
• Open Source
• R/W Splitting
• Connection management
• Transparent Reconnect
• Query Caching (TTL)
• 'Sharding' (User, Schema or Regex)
ProxySQL & PXC Integration
• PXC includes ProxySQL as load balancer:
• proxysql-admin
• ProxySQL schedulers:
• Health Checks
• Reconfigures Nodes
• PXC Maintenance Mode
• Tell Load Balancer to rebalance load
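ProxySQL itself is configured through a MySQL-protocol admin interface (port 6032 by default), so plain SQL works there as well; a sketch (hostgroup numbers are hypothetical):

    -- Connected to the ProxySQL admin interface:
    SELECT hostgroup_id, hostname, status FROM mysql_servers;
    -- Route SELECTs to a reader hostgroup for read/write splitting:
    INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
    VALUES (10, 1, '^SELECT', 20, 1);
    LOAD MYSQL QUERY RULES TO RUNTIME;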
ProxySQL
• Different possible setups:
• a dedicated layer
• integrated at the application layer
• integrated at the database layer
Dedicated shared ProxySQL
• application servers 1–3 → shared ProxySQL layer → PXC nodes 1–3
ProxySQL on application side
• a ProxySQL instance on each application server → PXC nodes 1–3
ProxySQL 'Endless' Possibilities
• ProxySQL per application server, cascading into a shared ProxySQL pair in front of the PXC nodes
Load Balancer Alternatives
• ProxySQL (L7)
• HAProxy (L4)
• ScaleArc (L7, $$)
• MySQL Router (L7, quite new)
• MaxScale (L7, $, BSL license)
Agenda
• Default asynchronous MySQL Replication
• Percona XtraDB Cluster:
• Introduction / Features / Load Balancing
• Use Cases:
• High Availability / WAN Replication / Read Scaling
• Limitations
Use Cases
• High Availability
• WAN Replication
• Read Scaling
High Availability
• Each node is the same (no master-slave)
• Consistency ensured, no data loss
• Quorum avoids split-brain
• Cluster issues are handled immediately:
• no ‘failover’ necessary
• no external scripts, no SPOF
WAN Replication
• No impact on reads
• No impact within a trx
• Communication only happens during COMMIT (or after each statement if autocommit=1)
• Use higher timeouts and send windows
WAN Replication - Latency
• Beware of increased latency:
• within Europe (EC2), COMMIT: 0.005100 sec
• Europe <-> Japan (EC2), COMMIT: 0.275642 sec
WAN Replication with MySQL Asynchronous Replication
• You can mix both types of replication
• Good option on a slow WAN link
• Requires more nodes
Cluster Segmentation - WAN
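Cross-DC replication traffic can be reduced with Galera segments: nodes sharing a segment relay writesets for each other, so each writeset crosses the WAN link only once. A minimal sketch of inspecting this (gmcast.segment is normally set per node in the configuration at startup; segment 1 for the second DC is an illustrative value):

    -- The provider options string includes this node's segment:
    SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options';
    -- e.g. the options string would contain: gmcast.segment = 1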
Read Scaling
• No Read/Write Splitting necessary in App.
• Write to all nodes(*)
Example Architecture
• Application Layer → ProxySQL → PXC
Agenda
• Default asynchronous MySQL Replication
• Percona XtraDB Cluster:
• Introduction / Features / Load Balancing
• Use Cases:
• High Availability / WAN Replication / Read Scaling
• Limitations
Limitations
• Supports only InnoDB tables
• MyISAM support will most likely stay in alpha
• The weakest node limits write performance
• All tables must have a Primary Key!
Limitations
• Large transactions are not recommended if you write on all nodes simultaneously
• Long-running transactions
• Workloads with a hotspot (frequently writing to the same rows across multiple nodes)
• Solution: write to only 1 node
Limitations
• By default, DDL is done in TOI (Total Order Isolation)
• Blocks ALL writes during the schema change
• For certain operations: wsrep_OSU_method=RSU
• pt-online-schema-change
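A hedged sketch of a rolling schema change with RSU (table and column are hypothetical; in RSU the DDL is applied only on the local node, so it must be repeated on every node in turn):

    SET SESSION wsrep_OSU_method = 'RSU';
    ALTER TABLE t ADD COLUMN note VARCHAR(64);  -- applied locally, not replicated
    SET SESSION wsrep_OSU_method = 'TOI';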
Summary
• Default asynchronous MySQL Replication
• Percona XtraDB Cluster:
• Introduction / Features / Load Balancing
• Use Cases:
• High Availability / WAN Replication / Read Scaling
• Limitations
Resources
• Percona XtraDB Cluster website: http://www.percona.com/software/percona-xtradb-cluster/
• PXC articles on the Percona Blog: https://www.percona.com/blog/category/percona-xtradb-cluster/
• Slideshare: https://www.slideshare.net/Grypyrg/pxc-introduction-34774802
Questions?
Kenny Gryp - @gryp
