My SQL Portal Database (Cluster)
Introduction:
A short presentation with a focus on architecture, maintenance and
support of MySQL Cluster database (Portal database).
Agenda:
 MySQL Network Database Architecture
Network Database Overview
Features
Structure
Fault Tolerance
Configuration File
Partitioning
 Maintenance
Basic Admin functions
Monitoring
Backup and restore
 Conclusion
MySQL Cluster: Core architecture
 What is it?
 MySQL Cluster: MySQL (SQL) nodes run mysqld, while data is stored on data nodes, which
run the network database daemon (ndbd). Clients connect to any of the MySQL
nodes to run a query; the MySQL nodes connect to the data nodes to store and
retrieve the necessary data.
MySQL Cluster: Features
 MySQL Cluster!
 A high-availability, high-redundancy version of MySQL Server for distributed
environments, and a tool to partition and synchronize data across multiple servers.
 This network of database resources provides redundancy and near-zero downtime,
and eliminates single points of failure.
 How is Clustering achieved?
 Data is split up among multiple servers so that a single server failure cannot
cause downtime. Data distribution is managed by the network database engine (NDB).
 Multiple MySQL servers connect to an NDB cluster to read and write. If one node
(server) goes down, the other nodes (servers) continue to service client requests.
MySQL Cluster: Features
 High Availability
 MySQL Cluster (NDB Cluster) is a storage engine designed for 99.999% ("five nines") availability
 Scalability
 Throughput up to 200M queries per second
 Designed for many parallel and short transactions
 High Performance
 Primary Key based queries, insertions and short index scans
 Low latency, even with data partitioned across nodes
 Delays between an input being processed and the corresponding output are small
enough to be unnoticeable to users.
 Self-healing (node recovery and cluster recovery)
 Failed nodes automatically restart and resynchronize with the other nodes before rejoining the
cluster, with complete application transparency
MySQL Cluster: Features
 Synchronous Replication –
 Data within each data node is synchronously replicated to another data node
 Failed nodes are automatically and transparently updated with current data
 Automatic Failover –
 MySQL Cluster's heartbeat mechanism instantly detects node failures and
automatically fails over to other nodes within the cluster, without interrupting
service to clients
 Shared Nothing, no single-point-of failure –
 Each node has its own disk and memory, so the risk of a failure caused by shared
components such as storage is eliminated.
 If all nodes in a node group fail, the cluster shuts down
 In NDB partitioning, data consistency is preferred over availability (CAP
theorem)
MySQL Cluster: Structure
 NDB is configured with a range of load-balancing and failover options.
 Each part of the cluster is considered a node (process)
 Cluster nodes include;
 Management Nodes (ndb_mgm)
 Manages the other nodes within the cluster (configuration data, starting and stopping nodes, running
backups, etc.). Manages the cluster configuration file.
 This node is started first, before the other nodes
 Data Nodes (ndbd); (two or more nodes)
 Stores and manages data, transaction management and query execution. (> ndbd)
 MySQL Server Nodes;
 These nodes access the cluster data. (> mysqld --ndbcluster)
 MySQL Client Node;
 Access for client login
 Management Client;
 Provides admin functions such as cluster status, starting backups, etc.
MySQL Cluster: Model
[Diagram: Server 1 and Server 2 each host an MGM node, a mysql node and a data node; the two data nodes form one node group; application nodes and LDAP connect through the mysql nodes.]
MySQL Cluster: Fault Tolerance
[Diagram: the same model with one MGM node lost; the cluster stays up.]
MySQL Cluster: Fault Tolerance
[Diagram: the same model with one data node lost; the remaining data node in the node group keeps the cluster up.]
MySQL Cluster: Fault Tolerance
[Diagram: both data nodes lost; the node group is gone and the cluster goes down.]
Portal db Server: Structure

Environment   Database    IP
PROD          Blackhole   192.168.82.83
PROD          Wormhole    192.168.82.82
PPRD          Cosmicray   192.168.82.80
PPRD          Dimension   192.168.82.81
TEST          Quantum     192.168.82.68
TEST          Supernova   192.168.82.69
Portal db Server: Structure - dimension/cosmicray

Node Type                     Node ID   IP              Server
Management - ndb_mgmd (MGM)   id=1      192.168.82.80   Cosmicray
Management - ndb_mgmd (MGM)   id=6      192.168.82.81   Dimension
Data/Storage - ndbd (NDB)     id=2      192.168.82.80   Cosmicray
Data/Storage - ndbd (NDB)     id=3      192.168.82.81   Dimension
SQL/API                       id=4      192.168.82.80   Cosmicray
SQL/API                       id=5      192.168.82.81   Dimension
SQL/API                       id=7      -               Not connected
Configuration Files
MySQL Cluster: Configuration files
 To operate the nodes in the cluster, information about the cluster
environment is stored in cluster configuration files.
 The cluster config files set all parameters needed
 There are two configuration files
 A local config file residing on each node (/etc/my.cnf)
 This file provides information on how the nodes connect to the cluster
 A global config file on the management node (/l01/mysql/config.ini)
 Provides information about the cluster as a whole
 Used by the management node to start the cluster and receive node connections
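A minimal sketch of the local /etc/my.cnf on a cluster host, assuming the management node address from the tables above (the exact file in this environment may differ):

[mysqld]
ndbcluster => enable the NDB storage engine in this mysqld
ndb-connectstring=192.168.82.80 => management node this SQL node registers with
[mysql_cluster]
ndb-connectstring=192.168.82.80 => connect string read by the ndbd/ndb_mgm processes on this host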
MySQL Cluster: Configuration files
 {Server}> /l01/mysql/config.ini
 [ndb_mgmd]
 NodeId=x => node ID, a number
 DataDir=/l01/mysql/logs => directory for this management node's log files
 HostName=192.168.92.80 => hostname or IP address
 [ndbd] => one [ndbd] section per data node
 NodeId=y => node ID, a number
 DataDir=/l01/mysql/data => directory for this data node's data files
 HostName=192.168.92.80 => hostname or IP address
 [mysqld]
 NodeId=z
 HostName=192.168.92.80
 [tcp default]
 PortNumber=2202 => 2202 is the default; any port that is free on all hosts in the cluster can be used
 [ndbd default] => defaults applied to every data node
 NoOfReplicas=2 => number of replicas
 DataMemory=500M => memory allocated for data storage
 IndexMemory=100M => memory allocated for index storage
 BackupDataDir=/l01/mysql => location of backups
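If config.ini changes, the management node has to reread it before new values take effect. A sketch using the path above and the standard ndb_mgmd options:

 Shell > sudo ndb_mgmd -f /l01/mysql/config.ini --reload (start the management daemon and reread config.ini)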
Cluster Partitioning
MySQL Cluster: Demo Partitioning
 MySQL Cluster data partitioning divides a table's data into fragments that are
distributed, and replicated, from one node to the other.
 This is horizontal partitioning: the rows of a table are divided across
partitions. Cluster uses an internal (hash-based) algorithm to implement this
partitioning so that each partition holds roughly the same number of table rows.
 This ensures evenly balanced memory requirements across all the
nodes.
 The number of partitions equals the number of data nodes in the
cluster. Each node group has the same number of nodes.
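One way to check how a table's rows actually land on the partitions (a sketch, assuming the ClusterDemo_db demo table created later in this deck and the management node address used elsewhere in it):

 Shell > ndb_desc -c 192.168.82.80 -d ClusterDemo_db demotbl -p
 (the -p/--extra-partition-info option of the standard ndb_desc utility lists each partition with its row count and hosting node)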
MySQL Cluster: Diag-1 Partitioning
[Diagram: table T1 is divided into partitions P1 and P2, to be stored across Data Node 1 and Data Node 2.]
MySQL Cluster: Diag-2 Partitioning
[Diagram: the primary fragment F1 of partition P1 is stored on Data Node 1.]
MySQL Cluster: Diag-3 Partitioning
[Diagram: a secondary fragment of P1 is stored on Data Node 2, mirroring F1.]
MySQL Cluster: Diag-4 Partitioning
[Diagram: the primary fragment F2 of partition P2 is stored on Data Node 2.]
MySQL Cluster: Diag-5 Partitioning
[Diagram: a secondary fragment of P2 is stored on Data Node 1; each partition is now mirrored (RAID-1 style) across the two nodes.]
MySQL Cluster: Diag-6 Partitioning
[Diagram: Data Node 1 (Server 1) and Data Node 2 (Server 2) together form one node group holding both partitions and their replicas.]
MySQL Cluster: Diag-7 Partitioning
[Diagram: Data Node 1 fails; the cluster stays up by failing over to Data Node 2, maintaining consistency, partitioning and availability.]
MySQL Cluster: Diag-8 Partitioning
[Diagram: Data Node 2 fails; the cluster stays up by failing over to Data Node 1.]
MySQL Cluster: Diag-9 Partitioning
[Diagram: both data nodes fail; the node group is lost and the cluster shuts down.]
Basic Admin Functions
MySQL Cluster: Basic Admin Functions
 Using the management client to maintain nodes
 Log in to the client;
 Shell > ndb_mgm
 ndb_mgm> HELP
 Maintain nodes;
 ndb_mgm> SHOW
 ndb_mgm> <id> STATUS/START/STOP
 ndb_mgm> <id> START
 Cluster status;
 Shell > ndb_mgm -e "SHOW"
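The same checks can be scripted non-interactively; a short sketch using standard ndb_mgm commands:

 Shell > ndb_mgm -e "ALL STATUS" (status of every data node)
 Shell > ndb_mgm -e "ALL REPORT MEMORY" (data and index memory usage per node, as used later in this deck)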
MySQL Cluster: Basic Admin Functions
 START & STOP Cluster order: 1. [Mg't Nodes], 2. [Data Nodes], 3. [MySQL Nodes]
 HELP/USAGE;
 Shell > sudo /l01/mysql/bin/ndb_startup.init.sh usage
 Shell > sudo /l01/mysql/bin/ndb_startup.init.sh help
 START Management Nodes;
 Shell > ndb_mgm -e "SHOW"
 Shell > sudo /l01/mysql/bin/ndb_startup.init.sh status
 Shell > sudo /l01/mysql/bin/ndb_startup.init.sh start_ndb_mgmd (start mg'mt node)
 Shell > sudo /l01/mysql/bin/ndb_startup.init.sh stop_ndb_mgmd (stop mg'mt node)
 Shell > sudo /l01/mysql/bin/ndb_startup.init.sh start (start mg'mt & data nodes)
 START Data Nodes;
 Shell > ndb_mgm -e "SHOW"
 Shell > sudo /l01/mysql/bin/ndb_startup.init.sh status
 Shell > sudo /l01/mysql/bin/ndb_startup.init.sh start_ndb (start data node)
 Shell > sudo /l01/mysql/bin/ndb_startup.init.sh stop_ndb (stop data node)
 Shell > ps -ef | grep ndb (verify data node)
 START MySQLd Nodes;
 Shell > sudo mysqld_safe &
 Shell > sudo /etc/init.d/mysql.server restart (start/restart/stop)
 Shell > ps -ef | grep mysqld (verify mysqld node)
 STOP Cluster; log in to the mg't node and shut down;
 ndb_mgm > SHUTDOWN (shutdown)
 Shell > ndb_mgm -e shutdown
MySQL Cluster: Shutdown/Re-Start Cluster
 STOP Cluster:
 shell> ndb_mgm -e shutdown (shut down the Cluster)
 START Cluster:
 shell> sudo /l01/mysql/bin/ndb_startup.init.sh start_ndb_mgmd (start mg'mt)
 shell> ndb_mgm -e "SHOW"
 shell> sudo /l01/mysql/bin/ndb_startup.init.sh start_ndb (start data node)
 shell> sudo /l01/mysql/bin/ndb_startup.init.sh start (start mg'mt & data)
 shell> sudo mysqld_safe & (start mysql nodes)
 shell> ndb_mgm -e "SHOW" (verify)
 shell> ps -ef | grep ndb (verify ndb)
 shell> ps -ef | grep mysqld (verify mysqld)
 START mysqld:
 shell> sudo /etc/init.d/mysql.server restart (stop/start/restart)
MySQLd: Automatic/Re-Start
 Upon server reboot, an automatic restart is initiated via:
 shell> /etc/rc.d/init.d/mysql.server
 shell> /etc/rc.d/init.d/mysql-cluster
 START MySQLd:
 shell> ps -ef | grep mysqld
 shell> sudo service mysql.server status
 shell> sudo service mysql.server start
 shell> sudo service mysql.server status
MySQL Cluster: Demo Cluster Replication
 Demo data replication, Dimension → Cosmicray (create the data on Dimension);
 server> mysql -u root -p
 mysql> select @@hostname;
 mysql> show databases;
 mysql> create database ClusterDemo_db;
 mysql> show databases;
 mysql> use ClusterDemo_db;
 mysql> create table demotbl (ID int(10), Name char(35), Zip int(10));
 mysql> desc demotbl;
 mysql> insert into demotbl values(001, 'Kabul', 23508);
 mysql> insert into demotbl values(002, 'Qandahar', 20365);
 mysql> insert into demotbl values(003, 'Herat', 54231);
 mysql> select * from demotbl;
MySQL Cluster: Demo Cluster Replication
 Demo data replication, Dimension → Cosmicray (verify on the second host);
 mysql> select @@hostname;
 mysql> show databases;
 mysql> use ClusterDemo_db;
 mysql> show tables from ClusterDemo_db;
 mysql> select * from demotbl;
 mysql> alter table demotbl ENGINE=NDBCLUSTER;
 mysql> create table demo (ID int(10), Name char(35), Zip int(10)) ENGINE=NDBCLUSTER;
 mysql> show tables from ClusterDemo_db;
 mysql> select * from demotbl;
 mysql> drop table demotbl;
 mysql> drop database ClusterDemo_db;
 Is data equally replicated/redistributed on all nodes?
 ndb_mgm> ALL REPORT MEMORY
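Only tables using the NDBCLUSTER engine are stored on the data nodes and replicated, which is why the ALTER/CREATE ... ENGINE=NDBCLUSTER step above matters. One way to confirm a table's engine, using the standard information_schema views:

 mysql> select table_name, engine from information_schema.tables where table_schema = 'ClusterDemo_db';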
MySQL Cluster: Portal database Logs
 Inspecting the Portal database logs on the data/management hosts;
 shell> cd /l01/mysql/logs
 [shell]> more ndb_3_trace.log.next
 [shell]> more ndb_3_error.log
 [shell]> more ndb_3_trace.log.5
 [shell]> more ndb_6_out.log
 [shell]> more ndb_6_cluster.log
 [shell]> more mysqld.log
 [shell]> more ndb_3_out.log
Backup & Restore
MySQL Cluster: Backup & Restore
 A backup creates a snapshot of the data on all NDB nodes of the cluster at a given
point in time. It consists of;
 Metadata => BACKUP-backup_id.node_id.ctl
 Table records => BACKUP-backup_id.node_id.Data
 Transaction logs => BACKUP-backup_id.node_id.log
 Backup is executed with the "START BACKUP" command and restore
with the "ndb_restore" utility, via the management client node.
 ndb_mgm > START BACKUP NOWAIT;
 ndb_mgm > ALL REPORT BACKUPSTATUS;
 ndb_mgm > ALL REPORT MEMORY;
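For scripted or cron-driven backups it is often better to block until the backup finishes; a sketch using the standard START BACKUP syntax (files land under the BackupDataDir set in config.ini, /l01/mysql above):

 Shell > ndb_mgm -e "START BACKUP WAIT COMPLETED" (returns only once all data nodes have completed the backup)
 Each data node writes its files to <BackupDataDir>/BACKUP/BACKUP-<backup_id>/.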
MySQL Cluster: Backup & Restore
 ndb_restore, the cluster restore program, is implemented as a separate
command-line utility with the following options;
 Server> ndb_restore -c 192.168.82.80 -b 1 -n 3 -r /backup-cluster/BACKUP/BACKUP-1
 -c Connect string
 -b Backup id = 1
 -n Node id = 3
 -r Restore data
 -m Restore metadata
 Backup location (final argument) = /backup-cluster/BACKUP/BACKUP-1
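A sketch of a full restore for this two-data-node cluster (node ids 2 and 3 from the structure table), following the deck's own invocation style; metadata is restored once, data once per data node:

 Server> ndb_restore -c 192.168.82.80 -b 1 -n 2 -m /backup-cluster/BACKUP/BACKUP-1 (restore metadata, run once)
 Server> ndb_restore -c 192.168.82.80 -b 1 -n 2 -r /backup-cluster/BACKUP/BACKUP-1 (restore node 2's data)
 Server> ndb_restore -c 192.168.82.80 -b 1 -n 3 -r /backup-cluster/BACKUP/BACKUP-1 (restore node 3's data)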
Monitoring
MySQL Cluster: Monitoring
 Through continuous monitoring we ensure that the databases are up and that the
servers have enough resources (memory, disk and CPU) to perform their
functions efficiently.
 Ensuring that the databases are always up and all jobs are actively running.
 MySQL monitoring tools include;
a. Nagios
b. Crontab/scripts
c. OEM
d. Emails
MySQL Cluster: Monitoring-Nagios
 Nagios enables distributed monitoring of all cluster nodes;
 disk, load, ndbd, ndb_mgmd, telnet and other processes
MySQL Cluster: Monitoring-Scripts
Scheduled cron jobs enable us to monitor several processes;
 The check-process script "check_mysql.sh" runs every five (5) mins to ensure that processes
are running. If one is down, "start_mysql.sh" is invoked and an email notification is sent.
 Check the ndbd daemon every five mins
 Check the ndb_mgm daemon every five mins
 The diskmon process "diskmon.sh" runs every five (5) mins to check disk usage.
Notifications are generated when disk usage grows beyond the warning and alert
thresholds (a sketch of such a check follows this list).
 The backup script "mysql_backup.sh" runs on its own fixed schedule
 Scheduled checks include;
 Monitor whether the ndb and mgm daemons are up and active
 Monitor whether the database instance is up
 Monitor disk space: warning (85%) and alert (90%)
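A minimal sketch of what a diskmon.sh-style check could look like, assuming the /l01/mysql filesystem and the 85%/90% thresholds above; the mail recipient is hypothetical and the site's actual script may differ:

#!/bin/bash
# Hypothetical diskmon.sh-style cron job (e.g. scheduled as: */5 * * * *)
WARN=85   # warning threshold (%)
ALERT=90  # alert threshold (%)
# Take the Use% column for the filesystem holding the cluster files, minus the trailing %
USED=$(df -P /l01/mysql | awk 'NR==2 { gsub("%", ""); print $5 }')
if [ "$USED" -ge "$ALERT" ]; then
    echo "ALERT: /l01/mysql at ${USED}% used" | mail -s "Portal db disk alert" dba@example.com
elif [ "$USED" -ge "$WARN" ]; then
    echo "WARNING: /l01/mysql at ${USED}% used" | mail -s "Portal db disk warning" dba@example.com
fi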
Conclusion:
 MySQL NDB Cluster is designed to deliver a high-availability, fault-tolerant
database in which no single failure results in loss of service.
 The Cluster database provides automatic failover, self-healing, a shared-nothing
architecture, and no single point of failure.
 The fault tolerance of a Cluster database depends on the following factors;
 The architecture chosen for the configuration, the placement of nodes on hosts, and
the resources those hosts depend on.
 Eliminating single points of failure that could result in the loss of a host.
 Hosting the management nodes on separate hosts to improve the fault tolerance of the
solution.
QUESTIONS
References:
 https://www.digitalocean.com/community/tutorials/how-to-create-a-multi-node-mysql-cluster-on-ubuntu-16-04
 http://www.clusterdb.com/mysql-cluster/deploying-mysql-cluster-over-multiple-hosts
 https://research.euranova.eu/wp-content/uploads/MySQL-Cluster-Design-and-Architecture-Principles.compressed.pdf
 http://www.agileload.com/agileload/blog/2013/03/11/optimizing-the-performance-of-mysql-cluster
 http://messagepassing.blogspot.com/2012/03/cap-theorem-and-mysql-cluster.html
 http://www.clusterdb.com/mysql-cluster/mysql-cluster-fault-tolerance-impact-of-deployment-decisions
 https://dev.mysql.com/doc/mysql-cluster-excerpt/5.7/en/faqs-mysql-cluster.html#qandaitem-A-1-40
 https://dev.mysql.com/doc/mysql-cluster-excerpt/5.7/en/mysql-cluster-params-ndbd.html
Two Node-Group Partitioning
MySQL Cluster: Demo Partitioning
[Diagram sequence: table T1 is divided into four partitions, P1 through P4, across four data nodes. Primary fragment F1 is stored on Data Node 1 with its secondary on Data Node 2, and F3 is stored on Data Node 2 with its secondary on Data Node 1; Data Nodes 1 and 2 form Node Group 1 on Server 1. Likewise F2 is stored on Data Node 3 with its secondary on Data Node 4, and F4 on Data Node 4 with its secondary on Data Node 3; Data Nodes 3 and 4 form Node Group 2 on Server 2. Each partition is mirrored within its own node group.]


Editor's Notes

  1. The CAP Theorem states that, in a distributed system (a collection of interconnected nodes that share data.), you can only have two out of the following three guarantees across a write/read pair: Consistency, Availability, and Partition Tolerance - one of them must be sacrificed. Consistency means that data is the same across the cluster, so you can read or write to/from any node and get the same data. Availability means the ability to access the cluster even if a node in the cluster goes down. Partition Tolerance means that the cluster continues to function even if there is a "partition" (communications break) between two nodes (both nodes are up, but can't communicate). In order to get both availability and partition tolerance, you have to give up consistency. Consider if you have two nodes, X and Y, in a master-master setup. Now, there is a break between network comms in X and Y, so they can't synch updates. At this point you can either: A) Allow the nodes to get out of sync (giving up consistency), or B) Consider the cluster to be "down" (giving up availability) All the combinations available are: CA - data is consistent between all nodes - as long as all nodes are online - and you can read/write from any node and be sure that the data is the same, but if you ever develop a partition between nodes, the data will be out of sync (and won't re-sync once the partition is resolved). CP - data is consistent between all nodes, and maintains partition tolerance (preventing data desync) by becoming unavailable when a node goes down. AP - nodes remain online even if they can't communicate with each other and will resync data once the partition is resolved, but you aren't guaranteed that all nodes will have the same data (either during or after the partition)
2. In our configuration we host one management node, one data node and one mysql node on each server. In addition, both data nodes form one node group. Advantage: on the loss of one management node the cluster will still be up.
  3. When we lose one management node Cluster will still be up
4. When we lose a data node the cluster will still be up.
5. However, when we lose both data nodes the cluster will be down and we cannot provide service to our customers.
  6. An example of our in-house Network database; dimension and cosmicray
7. 1. The following figure shows how MySQL Cluster creates primary and secondary fragments of each partition. We have configured the cluster to use two physical data nodes with two replicas. 2. All of the data nodes responsible for the same fragments form a node group (NG). 3. The cluster automatically creates one "node group" from the number of replicas and data nodes specified. Updates are synchronously replicated between members of the node group to protect against data loss and enable failover in the event of a node failure. 4. Data held within the cluster is partitioned, with each node group being responsible for 2 or more fragments. 5. If any single data node is lost, the other data nodes within its node group will continue to provide service. 6. The management node (ndb_mgmd process) is required when adding nodes to the cluster. 7. A heartbeat protocol is used between the data nodes to identify when a node has been lost. The diagram illustrates a typical small configuration with one or more data nodes from different node groups stored on two different physical hosts and a management node on the same server. If any single node (process) or physical host is lost, service can continue.
8. Fragment P1 is stored on data node 1 and a secondary replica (fragment) is stored on data node 2; redundancy/mirrored (RAID-1). (The numbered points from note 7 apply here as well.)
9. Same as note 8: fragment P1 and its replica are mirrored across the node group.
10. Fragment P2 is stored on data node 2 and a secondary replica (fragment) is stored on data node 1; redundancy/mirrored (RAID-1).
11. Same as note 10: fragment P2 and its replica are mirrored across the node group.
12. Scenario 1: in the case where node 1 fails, the cluster will still be up by failing over to data node 2. We are still able to maintain data consistency, partitioning and availability. Data Consistency ✓ Data Partitioning ✓ Data Availability ✓
13. Scenario 2: in the case where node 2 fails, the cluster will still be up by failing over to data node 1. We are still able to maintain data consistency, partitioning and availability. Data Consistency ✓ Data Partitioning ✓ Data Availability ✓ (The numbered points from note 7 apply here as well.)
14. Scenario 3: in the case where both nodes 1 and 2 fail, the cluster will fail and shut down, because data will not be consistent and service will not be available to customers. Consistency ✗ Partitioning ✗ Availability ✗ (The numbered points from note 7 apply here as well.)
15. Using the management client to maintain nodes.
Log in to the client:
server> ndb_mgm
Maintain nodes:
ndb_mgm> SHOW
ndb_mgm> <id> STATUS
ndb_mgm> <id> START
ndb_mgm> <id> STOP
Find out the status of the cluster without entering the client:
server> ndb_mgm -e "SHOW"
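To illustrate, SHOW prints one block per node type; the output below is a sketch (node IDs, addresses and versions are made-up values):

  server> ndb_mgm -e "SHOW"
  Connected to Management Server at: localhost:1186
  Cluster Configuration
  ---------------------
  [ndbd(NDB)]     2 node(s)
  id=2    @192.168.0.10  (mysql-5.7.20 ndb-7.5.8, Nodegroup: 0, *)
  id=3    @192.168.0.11  (mysql-5.7.20 ndb-7.5.8, Nodegroup: 0)

  [ndb_mgmd(MGM)] 1 node(s)
  id=1    @192.168.0.1   (mysql-5.7.20 ndb-7.5.8)

  [mysqld(API)]   2 node(s)
  id=4    @192.168.0.20  (mysql-5.7.20 ndb-7.5.8)
  id=5    @192.168.0.21  (mysql-5.7.20 ndb-7.5.8)

The asterisk marks the data node currently acting as master, and the Nodegroup column confirms which data nodes hold replicas of the same fragments.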
16. Start nodes with sudo. Note that the mysql user does not have sudo rights; you therefore have to start from appadm.
Start the management node, then the data nodes, then the mysql nodes.
Two processes run when the cluster database is started: the primary process and the angel process. The angel process monitors the primary data node process and attempts to restart it; when the primary process is killed, the angel process is able to start the primary process back up.
[appadm]server> ps -ef | grep ndb
The ndbd process showing 0 memory and 0 CPU usage is the angel process. (It actually does use a very small amount of each, of course. It simply checks whether the main ndbd process, the primary data node process that actually handles the data, is running. If permitted to do so, for example if the StopOnError configuration parameter is set to false (see Section 5.2.1, "NDB Cluster Data Node Configuration Parameters"), the angel process tries to restart the primary data node process.)
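An illustrative ps listing (user, PIDs and times are made up): the parent ndbd with near-zero CPU and memory is the angel, and its child is the primary data node process.

  [appadm]server> ps -ef | grep ndbd
  mysql   2970     1  0 10:12 ?  00:00:00 ndbd -c mgmhost:1186    <- angel process
  mysql   2971  2970  6 10:12 ?  00:41:03 ndbd -c mgmhost:1186    <- primary ndbd process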
17. SHUTDOWN CLUSTER:
shell> ndb_mgm -e shutdown
RESTART CLUSTER:
Mgm't node: shell> sudo /l01/mysql/bin/ndb_startup.init.sh start_ndb_mgmd
Data node: shell> sudo /l01/mysql/bin/ndb_startup.init.sh start_ndb
SQL host: shell> sudo mysqld_safe &
shell> ps -ef | grep ndb
START (starts mg't and data nodes):
shell> sudo /l01/mysql/bin/ndb_startup.init.sh start
(or, from the script's directory: shell> sudo ./ndb_startup.init.sh start)
https://dev.mysql.com/doc/refman/5.7/en/mysql-cluster-install-shutdown-restart.html
shell> /etc/init.d/mysql.server start (start mysqld)
shell> /etc/init.d/mysql-cluster start (start cluster)
18. shell> ps -ef | grep mysqld
shell> sudo service mysql.server status
ERROR! MySQL is not running
shell> sudo service mysql.server start
Starting MySQL.. SUCCESS!
shell> sudo service mysql.server status
SUCCESS! MySQL running (9872)
19. Log into the MySQL database:
server> mysql -u root -p
Enter password: xxxxxxxxxxxxx
Demo data replication from Dimension to Cosmicray:
mysql> select @@hostname;
mysql> show databases;
mysql> create database ClusterDemo_db;
mysql> show databases;
mysql> use ClusterDemo_db;
mysql> create table demotbl (ID int(10), Name char(35), Zip int(10));
mysql> desc demotbl;
mysql> insert into demotbl values(001, 'Kabul', 23508);
mysql> insert into demotbl values(002, 'Qandahar', 20365);
mysql> insert into demotbl values(003, 'Herat', 54231);
mysql> select * from demotbl;
(Note: the table is created with the default storage engine here; it is only replicated across the cluster once it uses the NDBCLUSTER engine, as shown on the next slide.)
20. Demo data replication from Cosmicray to Dimension:
mysql> select @@hostname;
mysql> show databases;
mysql> use ClusterDemo_db;
mysql> show tables from ClusterDemo_db;
mysql> select * from demotbl;
mysql> alter table demotbl ENGINE=NDBCLUSTER;
mysql> create table demo (ID int(10), Name char(35), Zip int(10)) ENGINE=NDBCLUSTER;
mysql> show tables from ClusterDemo_db;
mysql> select * from demotbl;
mysql> drop table demotbl;
mysql> drop database ClusterDemo_db;
Is data equally replicated/redistributed on all nodes?
ndb_mgm> all report memory
CLUSTER SHUTDOWN:
shell> ndb_mgm -e shutdown
CLUSTER RESTART:
shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini
Use the ndb_mgm client to verify that both data nodes have started successfully.
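For reference, ALL REPORT MEMORY prints data and index usage per data node; roughly comparable figures on every node indicate the data is evenly replicated. The output below is an illustrative sketch (node IDs and figures are made up):

  ndb_mgm> all report memory
  Node 2: Data usage is 5%(177 32K pages of total 3200)
  Node 2: Index usage is 0%(108 8K pages of total 12832)
  Node 3: Data usage is 5%(177 32K pages of total 3200)
  Node 3: Index usage is 0%(108 8K pages of total 12832)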
21. shell> sudo reboot [restart the server]
shell> cd /l01/mysql/logs
shell> sudo service mysql.server restart|stop|status
/l01/mysql/logs> tail ndb_1_cluster.log
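The cluster log records node starts, failures and checkpoints. A sketch of what the tail might show (timestamps, node IDs and the exact message wording are approximate):

  /l01/mysql/logs> tail ndb_1_cluster.log
  2018-01-15 10:12:31 [MgmtSrvr] INFO  -- Node 2: Started (version 7.5.8)
  2018-01-15 10:12:33 [MgmtSrvr] INFO  -- Node 1: Node 4 Connected
  2018-01-15 10:20:45 [MgmtSrvr] INFO  -- Node 2: Local checkpoint 5 started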
22. This model shows how the data from a single object/table in the database is partitioned across the data nodes.
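By default, NDB decides which partition a row belongs to by hashing its primary key. One way to inspect how a table's rows are spread across partitions is the ndb_desc utility with its extra-partition-info option; the command and output below are an illustrative sketch (the connect string is an assumption, the database and table come from the earlier demo, and the row counts are made up):

  shell> ndb_desc -c mgmhost -d ClusterDemo_db demotbl -p
  ...
  -- Per partition info --
  Partition  Row count  Commit count  Frag fixed memory  ...
  0          2          2             32768              ...
  1          1          1             32768              ...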
23. The primary replica of fragment P1 is stored on data node 1 and a secondary (backup) replica is stored on data node 2; redundancy/mirrored (RAID-1).
25. The primary replica of fragment P3 is stored on data node 2 and a secondary (backup) replica is stored on data node 1; redundancy/mirrored (RAID-1).
27. The primary replica of fragment P2 is stored on data node 3 and a secondary (backup) replica is stored on data node 4; redundancy/mirrored (RAID-1).
29. The primary replica of fragment P4 is stored on data node 4 and a secondary (backup) replica is stored on data node 3; redundancy/mirrored (RAID-1).
31. Data nodes 1 and 2 are housed on one server.
32. Data nodes 3 and 4 are housed on a different server. This provides data redundancy. However, nodes 1 and 2 form one node group and nodes 3 and 4 form the other, so losing a single server takes down an entire node group. For better redundancy, node 2 should have been placed on server 2 and node 3 on server 1, so that each node group spans both physical hosts (see the sketch below).
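A sketch of that improved placement in config.ini (hostnames and node IDs are assumptions). With NoOfReplicas=2, NDB forms node groups from data nodes in node-ID order, so alternating the hosts places each node group on two different servers:

  [ndbd]
  NodeId=2
  HostName=server1.example.com   # data node 1 -> node group 0

  [ndbd]
  NodeId=3
  HostName=server2.example.com   # data node 2 -> node group 0

  [ndbd]
  NodeId=4
  HostName=server1.example.com   # data node 3 -> node group 1

  [ndbd]
  NodeId=5
  HostName=server2.example.com   # data node 4 -> node group 1

With this layout, the loss of either physical server leaves one live replica in every node group, so the cluster keeps running.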
33. Scenario 1: If data node 1 fails, the cluster stays up by failing over to data nodes 2, 3 and 4. We are still able to maintain data consistency, partition tolerance and availability. Data Consistency ✓ Data Partitioning ✓ Data Availability ✓
34. Scenario 2: When data nodes 1 and 4 fail, the cluster stays up by failing over to data nodes 2 and 3; each node group still has one surviving member. We are still able to maintain data consistency, partition tolerance and availability. Consistency ✓ Partitioning ✓ Availability ✓
35. Scenario 3: When data nodes 2 and 3 fail, the cluster stays up by failing over to data nodes 1 and 4. We are still able to maintain data consistency, partition tolerance and availability. Consistency ✓ Partitioning ✓ Availability ✓ WHEN FAILED NODES ARE BROUGHT BACK ONLINE: When failed nodes are brought back online, they automatically resynchronize with the surviving nodes; partition latency remains low during this self-healing period (a status-check sketch follows below).
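A hedged sketch of bringing a failed data node back and watching it rejoin (node IDs, versions and exact message wording are illustrative); ALL STATUS reports each node's start phase until it is fully started:

  shell> ndbd                      # on the recovered host (or via the site's ndb_startup.init.sh)
  shell> ndb_mgm -e "ALL STATUS"
  Node 2: started (mysql-5.7.20 ndb-7.5.8)
  Node 3: starting (Last completed phase 4) (mysql-5.7.20 ndb-7.5.8)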
36. Scenario 4: When data nodes 1 and 2 fail, an entire node group is lost, so the cluster shuts down: the remaining nodes cannot serve a consistent, complete copy of the data, and service is not available to customers. Consistency ✓ Partitioning ✓ Availability ✗
37. Scenario 5: When data nodes 1, 2 and 3 fail, the cluster likewise shuts down, because the node group formed by nodes 1 and 2 is completely lost: the data cannot be served consistently and is not available to customers. Consistency ✓ Partitioning ✓ Availability ✗