Implementing MySQL Database-as-a-Service using Open Source tools
Matthias Crauwels
AllThingsOpen, Raleigh, NC, USA
October 15, 2019
Who am I?
Matthias Crauwels
● Living in Ghent, Belgium
● Bachelor in Computer Science
● ~20 years Linux user / admin
● ~10 years PHP developer
● ~8 years MySQL DBA
● 3rd year at Pythian
● Currently Lead Database Consultant
● Father of Leander
Helping businesses use data to compete and win
AGENDA
Introduction and history
DBaaS: frontend
DBaaS: backend
Communication
Let's get started!
History
You start a new application, in many cases on a LAMP stack:
● Linux
● Apache
● MySQL
● PHP
Everything on a single server!
History
Your application grows… What do you do?
You buy a bigger server!
History
Your application grows even more. Yay!
You buy more servers and split your infrastructure.
[Diagram: web server, database, file server]
History
Your application grows even more!
● You scale up the components
● Web servers are easy: just add more and load balance
● File servers are easy: get more/bigger disks, implement RAID solutions, ...
● What about the database?
■ More servers? OK, but what about the data?
■ I want all my web servers to see the same data.
■ Writing it to all the servers? Overhead!
History
MySQL replication:
● Writes go to the master
● Reads come from the replicas (slaves)
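As a hedged aside (not from the original slides; host names, credentials and binlog coordinates are placeholders), classic replication is set up on a replica roughly like this:

-- on the replica
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'repl_password',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;
-- verify: Slave_IO_Running and Slave_SQL_Running should both be 'Yes'
SHOW SLAVE STATUS\G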
History
Million-dollar questions:
● How do we know which server is the master?
● How do we know which servers are the replicas?
● How do we manage this replication topology?
● What if the master goes down?
● What about maintenance?
● …
Database-as-a-Service
Frontend Solution
ProxySQL
ProxySQL: What?
ProxySQL is a high-performance layer 7 proxy application for MySQL.
● It provides ‘intelligent’ load balancing of application requests across multiple databases
● It understands the MySQL traffic that passes through it, and can split reads from writes
● It understands the underlying database topology and whether the instances are up or down
● It shields applications from the complexity of the underlying database topology, as well as from any changes to it
● ...
ProxySQL: terminology
● Hostgroup
All backend MySQL servers are grouped into hostgroups. These “hostgroups” are used for query routing.
● Query rules
Query rules are used for routing, mirroring, rewriting or blocking queries. They are at the heart of ProxySQL’s functionality.
● MySQL users and servers
These are configuration items which the proxy uses to operate.
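A minimal sketch of these objects, entered through the ProxySQL admin interface (hostgroup numbers 0/1, host names and credentials are placeholders, not from the talk):

-- servers, grouped into a writer (0) and a reader (1) hostgroup
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, 'db-master', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (1, 'db-replica-1', 3306);
-- an application user, defaulting to the writer hostgroup
INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('app', 'secret', 0);
-- query rules: plain SELECTs go to the readers, everything else stays on the writer
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE', 0, 1);
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (2, 1, '^SELECT', 1, 1);
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;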
ProxySQL: Basic design (1)
ProxySQL: Basic design (2)
ProxySQL: Internals
ProxySQL: Clustering
ProxySQL can be configured to share configuration values with its peers.
Currently all instances are equal and any of them can be used to reconfigure the cluster; there is no “master” or “leader”. Leader election is a feature on the roadmap
(https://github.com/sysown/proxysql/wiki/ProxySQL-Cluster#roadmap).
This helps to:
● Avoid your ProxySQL instance being a single point of failure
● Avoid having to reconfigure every ProxySQL instance on the application servers
● (Auto-)scale the ProxySQL infrastructure
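A hedged sketch of declaring cluster peers (host names are placeholders; proxysql_servers is ProxySQL's peer table, and the cluster credentials also need to be set in the admin variables):

-- on each ProxySQL instance, via the admin interface (port 6032)
INSERT INTO proxysql_servers (hostname, port, weight, comment) VALUES ('proxysql-1', 6032, 0, 'peer');
INSERT INTO proxysql_servers (hostname, port, weight, comment) VALUES ('proxysql-2', 6032, 0, 'peer');
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;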
ProxySQL: Conclusions
ProxySQL sits between the application and the database.
● It hides the complexity of the database topology from the application
● It knows which server is the master and which are the slaves
● It will not make changes to the topology, so topology management is not solved by this product
● It has support for gracefully taking a server out of service
● It is easy to configure
● It can be clustered so that it is not a single point of failure
Questions about ProxySQL?
Database-as-a-Service
Backend management
Orchestrator
Orchestrator: What?
Orchestrator is a High Availability and replication management tool.
It can be used for:
● Discovery of a topology
● Visualisation of a topology
● Refactoring of a topology
● Recovery of a topology
Orchestrator: Discovery
Orchestrator can (and will) discover your entire replication topology as soon as you connect it to a single server in the topology.
It uses SHOW SLAVE HOSTS, SHOW PROCESSLIST and SHOW SLAVE STATUS to find and connect to the other servers in the topology.
Requirement: the Orchestrator topology user needs to be created on every server in the cluster so it can connect.
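A sketch of such a topology user, following the privileges listed in the Orchestrator documentation (user name and password are placeholders):

-- on every MySQL server in the topology
CREATE USER 'orchestrator'@'%' IDENTIFIED BY 'change_me';
GRANT SUPER, PROCESS, REPLICATION SLAVE, RELOAD ON *.* TO 'orchestrator'@'%';
GRANT SELECT ON mysql.slave_master_info TO 'orchestrator'@'%';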
Orchestrator: Visualization
Orchestrator comes with a web interface that visualizes the servers in the topology.
Orchestrator: Refactoring
Orchestrator can be used to refactor the topology.
This can be done from the command-line tool, via the API or even via the web interface by dragging and dropping.
You can do things like (see the sketch below):
● Repoint a slave to a new master
● Promote a server to a (co-)master
● Start / stop a slave
● ...
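For example, from the command line (a hedged sketch; instance names are placeholders):

# repoint replica1 to replicate from new-master
orchestrator-client -c relocate -i replica1:3306 -d new-master:3306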
Orchestrator: Recovery
All of these features are nice, but they still require a human to execute them. That doesn’t help you much when your master goes down at 3 AM and you get paged to resolve it.
Orchestrator can be configured to automatically recover your topology from an outage.
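A hedged configuration sketch enabling automated master recovery for all clusters (key names come from the Orchestrator documentation; values are illustrative):

{
  "RecoverMasterClusterFilters": ["*"],
  "RecoverIntermediateMasterClusterFilters": ["*"],
  "FailureDetectionPeriodBlockMinutes": 60
}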
Orchestrator: How does recovery work?
To be able to perform a recovery, Orchestrator first needs to detect a failure.
As indicated before, Orchestrator connects to every server in the topology and gathers information from each of the instances.
Orchestrator uses this information to decide on the best action to take. The developers call this the holistic approach.
Orchestrator: Failure detection example
Orchestrator High Availability
Orchestrator was written with High Availability as a basic concept.
You can easily run multiple Orchestrator instances with a shared MySQL backend. All instances will collect all information, but only one instance is allowed to be the “active node” and make changes to the topology.
To eliminate a single point of failure in the database backend, you can use either master-master replication (2 nodes) or Galera synchronous replication (3 nodes).
Orchestrator High Availability
Since version 3.x, Orchestrator offers “Orchestrator-on-Raft”: it implements the raft consensus protocol. This will:
● Ensure that a leader node is elected from the available nodes
● Ensure that the leader node has a quorum (majority) at all times
● Allow running Orchestrator without a shared database backend
● Allow running without a MySQL backend, using an SQLite backend instead
Orchestrator High Availability
A common example of a High Availability setup:
● 3 Orchestrator nodes in different DCs
● Often one primary DC, one backup DC and one “arbitrator” node in a cloud DC
● The Orchestrator developers have made changes to the raft protocol to allow
■ the leader to step down
■ other nodes to yield to a certain node so that it becomes the leader
● Shlomi Noach from GitHub will definitely go into more detail on how they implemented this at GitHub.
Questions about Orchestrator?
Overview
Architecture overview
[Architecture diagram; surviving labels: APP(S), Leader]
Communication
Default behaviour
ProxySQL read-only flag monitoring
● Uses ProxySQL’s read-only flag monitoring, enabled by adding a hostgroup pair to the mysql_replication_hostgroups table:
Admin> SHOW CREATE TABLE mysql_replication_hostgroups\G
*************************** 1. row ***************************
       table: mysql_replication_hostgroups
Create Table: CREATE TABLE mysql_replication_hostgroups (
    writer_hostgroup INT CHECK (writer_hostgroup>=0) NOT NULL PRIMARY KEY,
    reader_hostgroup INT NOT NULL CHECK (reader_hostgroup<>writer_hostgroup AND reader_hostgroup>0),
    comment VARCHAR,
    UNIQUE (reader_hostgroup)
)
1 row in set (0.00 sec)
● Requires the monitoring user to be configured correctly
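A hedged sketch of wiring this up (hostgroup ids and the monitor credentials are placeholders):

-- via the ProxySQL admin interface
INSERT INTO mysql_replication_hostgroups (writer_hostgroup, reader_hostgroup, comment)
VALUES (0, 1, 'production cluster');
UPDATE global_variables SET variable_value = 'monitor' WHERE variable_name = 'mysql-monitor_username';
UPDATE global_variables SET variable_value = 'monitor_password' WHERE variable_name = 'mysql-monitor_password';
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL VARIABLES TO RUNTIME; SAVE MYSQL VARIABLES TO DISK;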
Orchestrator ApplyMySQLPromotionAfterMasterFailover
● Orchestrator will flip the read-only flag on master failover
● Controlled by the setting ApplyMySQLPromotionAfterMasterFailover
● the default value was false (Orchestrator version < 3.0.12)
● since 3.0.12 the default is true
● The recommendation has always been to enable this
● Configure MySQL to be read-only by default (best practice)
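A sketch of the corresponding MySQL server defaults (a common convention, not from the slides):

# my.cnf on every MySQL server: start read-only; the failover tooling
# (or the DBA) makes the promoted master writable
[mysqld]
read_only = ON
super_read_only = ON   # MySQL 5.7+: also blocks users with SUPER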
Default behaviour: Caveats
● What happens on network partitions?
● Orchestrator sees the master being unavailable and promotes a new one
● The old master is still writable (Orchestrator cannot reach it to toggle the flag)
● ProxySQL will move the new (writable) master into the writer hostgroup
● ProxySQL will mark the old master as SHUNNED
● When the network partition is resolved, the old master will still be writable, so it returns to ONLINE
● This leads to split brain
Default behaviour: Caveats / workarounds
● Solutions to prevent this split-brain scenario:
● STONITH (shoot the other node in the head)
● Run a script in the ProxySQL scheduler that deletes any SHUNNED writers from the configuration, both from the writer and the reader hostgroups (see the sketch below)
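A minimal sketch of what such a scheduler script could execute against the ProxySQL admin interface (writer/reader hostgroups 0/1 assumed):

-- drop any server that is SHUNNED in the writer hostgroup from both hostgroups
DELETE FROM mysql_servers
WHERE hostgroup_id IN (0, 1)
  AND hostname IN (SELECT hostname FROM runtime_mysql_servers
                   WHERE hostgroup_id = 0 AND status = 'SHUNNED');
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;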
Communication
Orchestrator hooks
Orchestrator hooks: What?
● Orchestrator implements hooks at various stages of the recovery process
● These "hooks" are like events that will be fired, and you can configure your own scripts to run on them
● This makes Orchestrator highly customisable and scriptable
● The default (naive) configuration just echoes text to /tmp/recovery.log
● Use the hooks! If not for scripting, then at least for alerting / notifying you that something happened
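A hedged configuration sketch (the keys and the {placeholders} are Orchestrator's; notify-dba.sh is a hypothetical script):

{
  "PreFailoverProcesses": [
    "echo 'Will recover from {failureType} on {failureCluster}' >> /tmp/recovery.log"
  ],
  "PostMasterFailoverProcesses": [
    "/usr/local/bin/notify-dba.sh {failedHost} {successorHost}"
  ]
}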
Orchestrator hooks: Why?
● Instead of relying on ProxySQL's monitoring of the read-only flag, we can now actively push changes to ProxySQL using the hooks.
● Whenever a planned or unplanned master change takes place, we update ProxySQL.
● Pre-failover:
■ Remove {failedHost} from the writer hostgroup
● Post-failover:
■ If the recovery was successful: insert {successorHost} into the writer hostgroup
● WARNING: test test test test test test !!!!!
(before enabling automated failovers in production)
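A hedged sketch of such a post-failover hook (the ProxySQL host, admin credentials and hostgroup id are placeholders):

#!/bin/bash
# called by Orchestrator as: promote-master.sh {failedHost} {successorHost}
FAILED_HOST=$1
NEW_MASTER=$2
mysql -h proxysql-1 -P 6032 -u admin -padmin <<SQL
DELETE FROM mysql_servers WHERE hostgroup_id = 0 AND hostname = '${FAILED_HOST}';
REPLACE INTO mysql_servers (hostgroup_id, hostname) VALUES (0, '${NEW_MASTER}');
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
SQL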
Communication
Decouple communication (between ProxySQL and Orchestrator)
The problem
● Orchestrator hooks are great but...
● ... what happens if no communication is possible between Orchestrator and ProxySQL?
● Hooks are only fired once
● What if ProxySQL is not reachable? Stop the failover?
● You need ProxySQL admin credentials available on the Orchestrator hosts
The solution
● Decouple Orchestrator and ProxySQL
● Use Consul as a key-value store between the two
● Orchestrator has built-in support to update master coordinates in the K/V store (both for Zookeeper and Consul)
● Configuration settings:
● "KVClusterMasterPrefix": "mysql/master",
● "ConsulAddress": "127.0.0.1:8500",
● "ZkAddress": "srv-a,srv-b:12181,srv-c",
Which keys and values?
● KVClusterMasterPrefix is the prefix to use for master discovery entries. For example, if your cluster alias is mycluster and the master host is some.host-17.com, then you can expect an entry where:
● the key is mysql/master/mycluster
● the value is some.host-17.com:3306
● Additionally, the following keys/values will be available automatically:
● mysql/master/mycluster/hostname , value is some.host-17.com
● mysql/master/mycluster/port , value is 3306
● mysql/master/mycluster/ipv4 , value is 192.168.0.1
● mysql/master/mycluster/ipv6 , value is <whatever>
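You can check these entries with the Consul CLI, for example (output is illustrative):

$ consul kv get mysql/master/mycluster
some.host-17.com:3306
$ consul kv get mysql/master/mycluster/port
3306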
Avoiding single points of failure (1)
● The recommended setup for Orchestrator is to run 3 nodes, each with its own local datastore (MySQL or SQLite)
● Communication between the nodes happens using the raft protocol
● This is also the preferred setup for the Consul K/V store
● We install a Consul "server" on each Orchestrator node
● The Consul "server" also comes with an "agent"
● We let the Orchestrator leader send its updates to the local Consul agent
● The Consul agent updates the Consul leader node, and the leader distributes the data to all 3 nodes using the raft protocol
Avoiding single points of failure (2)
● We now have HA for both Orchestrator and Consul
● We have avoided network partitioning:
● a majority vote is required to be the leader in both applications
● if our local Consul agent is unable to reach the Consul leader node, then Orchestrator will not be able to reach its peers either, and thus will not be the leader node
● Optional: Orchestrator extends raft with a yield option to yield to a specific leader. We could implement a cronjob to always yield Orchestrator leadership to the Consul leader for faster updates, but this is not a requirement.
What about the slaves?
● Orchestrator really doesn't care all that much about slaves
● Masters are what matter for HA
● The native support for the K/V store ends with publishing the masters to it ("KVClusterMasterPrefix": "mysql/master")
● API to the rescue!
● We can create a fairly simple script that runs from cron (a sketch follows below):
● pull ALL the servers from the API (get a JSON response)
● compare the slave entries with the values in Consul (for example keys starting with mysql/slaves)
● update Consul if needed
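A hedged sketch of such a cron script (the API path is Orchestrator's cluster endpoint; the jq filter, key layout and cluster name are assumptions):

#!/bin/bash
# publish the read-only instances (slaves) of 'mycluster' to Consul
CLUSTER=mycluster
curl -s "http://localhost:3000/api/cluster/alias/${CLUSTER}" |
  jq -r '.[] | select(.ReadOnly == true) | .Key.Hostname' |
while read -r host; do
  consul kv put "mysql/slaves/${CLUSTER}/${host}" "${host}:3306"
done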
How to configure ProxySQL?
● Now Orchestrator is updating the Consul K/V store (the master via native support, the slaves via our script)
● Let's install a Consul "agent" on every ProxySQL machine
● We can now query the Consul data via this local agent:
root@proxysql-1:~ $ consul members
Node            Address        Status  Type    Build  Protocol  DC       Segment
orchestrator-1  10.0.1.2:8301  alive   server  1.4.3  2         default  <all>
orchestrator-2  10.0.2.2:8301  alive   server  1.4.3  2         default  <all>
orchestrator-3  10.0.3.2:8301  alive   server  1.4.3  2         default  <all>
proxysql-1      10.0.1.3:8301  alive   client  1.4.3  2         default  <default>
proxysql-2      10.0.2.3:8301  alive   client  1.4.3  2         default  <default>
How to configure ProxySQL?
● First option: the scripted approach
● Run a script in a cronjob or in the ProxySQL scheduler
● Crawl the Consul K/V store
● Update the ProxySQL config
Pros: fairly easy; fairly quick (the ProxySQL scheduler works on a millisecond basis)
Con: a lot of wasted CPU cycles
How to configure ProxySQL?
● Second option: use consul-template
● It registers as a listener for the Consul values
● Every time a value changes, it re-generates a file from a template
● Example template:
{{ if keyExists "mysql/master/testcluster/hostname" }}
DELETE FROM mysql_servers where hostgroup_id = 0;
REPLACE into mysql_servers (hostgroup_id, hostname) values ( 0, "{{ key "mysql/master/testcluster/hostname" }}" );
{{ end }}
{{ range tree "mysql/slave/testcluster" }}
REPLACE into mysql_servers (hostgroup_id, hostname) values ( 1, "{{ .Key }}{{ .Value }}" );
{{ end }}
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
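A hedged example of wiring the template up (file paths and the admin credentials file are placeholders; consul-template's -template flag takes "source:destination:command", and the command runs after each render):

consul-template -template \
  "/etc/consul-template/proxysql.sql.tpl:/tmp/proxysql.sql:mysql --defaults-file=/etc/proxysql-admin.cnf < /tmp/proxysql.sql"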
Architecture
Questions?
Contact
Matthias Crauwels
crauwels@pythian.com
+1 (613) 565-8696 ext. 1215
Twitter @mcrauwel
We're hiring!!
https://pythian.com/careers
