This document provides an overview of MySQL Cluster architecture, maintenance, and support. It discusses the core architecture including data distribution across nodes, features like high availability and scalability, and the cluster structure with management, data, and SQL nodes. It also covers configuration files, database partitioning, basic administration functions like starting and stopping nodes, replication demonstrations, logging, backup and restore processes, and monitoring tools.
4. MySQL Cluster: Core architecture
What is it?
MySQL Cluster: MySQL nodes run mysqld, while data is stored on data nodes,
which run the network database daemon (ndbd). Clients connect to any of the
MySQL nodes to run a query. The MySQL nodes connect to the data nodes to store
and retrieve the necessary data.
5. MySQL Cluster: Features
MySQL Cluster!
A high-availability, high-redundancy version of MySQL Server for distributed
environments, and a tool to partition and synchronize data across multiple
servers. This network of database resources provides advantages such as
redundancy, zero downtime and no single point of failure.
How is Clustering achieved?
Data is split up among multiple servers so that a single server failure cannot
cause downtime. Data distribution is managed by a network of database (NDB)
processes. Multiple MySQL servers connect to an NDB cluster to read and write.
If one node (server) goes down, the other nodes (servers) service client requests.
6. MySQL Cluster: Features
High Availability
MySQL Cluster (NDB Cluster) is a storage engine that offers 99.9% availability
Scalability
Throughput up to 200M queries per second
Designed for many parallel and short transactions
High Performance
Primary Key based queries, insertions and short index scans
Low latency
Delays between an input being processed and the corresponding output are
unnoticeable to users.
Self-healing (node recovery and cluster recovery)
Failed nodes automatically restart and resynchronize with the other nodes before
rejoining the cluster, with complete application transparency.
7. MySQL Cluster: Features
Synchronous Replication –
Data within each data node is synchronously replicated to another data node
Failed nodes are automatically and transparently updated with current data
Automatic Failover –
The MySQL Cluster heartbeat mechanism instantly detects node failures and
automatically fails over to other nodes within the cluster, without interrupting
service to clients
Shared nothing, no single point of failure –
Each node has its own disk and memory, so the risk of a failure caused by shared
components such as storage is eliminated.
If all nodes in a node group fail, the cluster shuts down.
In NDB partitioning, data consistency is preferred over availability (CAP
theorem)
8. MySQL Cluster: Structure
NDB is configured with a range of load-balancing and failover options.
Each part of the cluster is considered a node (process)
Cluster nodes include:
Management Nodes (ndb_mgmd)
Manage the other nodes within the cluster (configuration data, starting and stopping
nodes, running backups, etc.) and maintain the cluster configuration file.
This node is started first, before the other nodes
Data Nodes (ndbd); (two or more nodes)
Store and manage data, handle transaction management and query execution (> ndbd)
MySQL Server Nodes
These nodes access the cluster data (> mysqld --ndbcluster)
MySQL Client Node
Access for client login
Management Client (ndb_mgm)
Provides admin functions such as cluster status, starting backups, etc.
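The node types above imply a startup order. A minimal startup sequence might look like the following sketch (the config file path matches the example later in this deck; exact options depend on the installation):

```
shell> ndb_mgmd -f /l01/mysql/config.ini  => start the management node first
shell> ndbd                               => start each data node (on every data host)
shell> mysqld --ndbcluster &              => start each MySQL server node
ndb_mgm> SHOW                             => verify all nodes are connected
```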
9. MySQL Cluster: Model
[Diagram: two management (MGM) nodes and two MySQL nodes serving application
nodes (including LDAP), in front of one node group containing Data Node 1 and
Data Node 2, hosted on Server 1 and Server 2.]
10. MySQL Cluster: Fault Tolerance
[Diagram: the same topology as the model on slide 9, used to illustrate fault
tolerance.]
11. MySQL Cluster: Fault Tolerance
[Diagram: animation step showing failover within the same topology.]
12. MySQL Cluster: Fault Tolerance
[Diagram: animation step showing failover within the same topology.]
13. Portal db Server: Structure
Environment Database IP
PROD Blackhole 192.168.82.83
PROD Wormhole 192.168.82.82
PPRD Cosmicray 192.168.82.80
PPRD Dimension 192.168.82.81
TEST Quantum 192.168.82.68
TEST Supernova 192.168.82.69
14. Portal db Server: Struct-dimension/cosmicray
Node Type Machine Name IP Server
Management - ndb_mgmd (MGM) node id=1 192.168.82.80 Cosmicray
Management - ndb_mgmd (MGM) node id=6 192.168.82.81 Dimension
Data/Storage - ndbd (NDB) node id=2 192.168.82.80 Cosmicray
Data/Storage - ndbd (NDB) node id=3 192.168.82.81 Dimension
SQL/API - mysqld node id=4 192.168.82.80 Cosmicray
SQL/API - mysqld node id=5 192.168.82.81 Dimension
SQL/API node id=7 Not connected
16. MySQL Cluster: Configuration files
To operate the nodes in the cluster, information about the cluster
environment is stored in cluster configuration files.
The config files set all parameters the cluster needs.
There are two configuration files:
A local config file residing on each node (/etc/my.cnf)
This file tells the node how to connect to the cluster
A global config file on the management node (/l01/mysql/config.ini)
Provides information about the cluster as a whole
Used by the management node to start the cluster and receive node connections
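As a sketch, a minimal local config file for this setup might look like the following (the connect string values are illustrative, taken from the management hosts listed earlier; the real file may differ):

```
# /etc/my.cnf (local config on each node) - illustrative sketch
[mysqld]
ndbcluster                                     # enable the NDB storage engine
ndb-connectstring=192.168.82.80,192.168.82.81  # management node(s) to contact

[mysql_cluster]
ndb-connectstring=192.168.82.80,192.168.82.81  # used by the NDB processes on this host
```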
17. MySQL Cluster: Configuration files
{Server}> cat /l01/mysql/config.ini
[ndbd default] => settings applying to all data nodes
NoOfReplicas=2 => number of replicas of each fragment
DataMemory=500M => memory allocated for data storage
IndexMemory=100M => memory allocated for index storage
BackupDataDir=/l01/mysql => location of backups
[tcp default]
PortNumber=2202 => the default; any port that is free on all hosts in the cluster may be used
[ndb_mgmd]
NodeId=x => numeric node ID
datadir=/l01/mysql/logs => directory for this management node's log files
hostname=192.168.82.80 => hostname or IP address
[ndbd] => one [ndbd] section per data node
NodeId=y => numeric node ID
datadir=/l01/mysql/data => directory for this data node's data files
hostname=192.168.82.80 => hostname or IP address
[mysqld]
NodeId=z
hostname=192.168.82.80
19. MySQL Cluster: Demo Partitioning
MySQL Cluster data partitioning is a feature that allows data to be
divided into fragments and stored across the data nodes, with each fragment
replicated from one node to another.
This involves horizontal partitioning, where the rows of a table are divided
horizontally. Cluster uses an internal hashing algorithm to implement this
partitioning so that each partition holds about the same number of rows.
This ensures evenly balanced memory requirements across all the
nodes.
The number of partitions equals the number of data nodes in the
cluster. Each node group has the same number of nodes.
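One way to observe the partitioning (a sketch; the table name t1 is illustrative) is to ask the server how an NDB table's rows are spread across partitions:

```
mysql> create table t1 (id int primary key, val char(10)) ENGINE=NDBCLUSTER;
mysql> explain partitions select * from t1;  => lists partitions p0, p1, ...
mysql> select partition_name, table_rows
    -> from information_schema.partitions
    -> where table_name = 't1';              => approximate row count per partition
```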
20.–28. MySQL Cluster: Diag Partitioning (animation)
[Diagram, built up across slides 20–28: table T1 is split into two partitions,
P1 and P2. Data Node 1 stores the primary fragment F1 of P1 and a secondary
(backup) fragment of P2; Data Node 2 stores the primary fragment F2 of P2 and a
secondary fragment of P1. Both data nodes form one node group, hosted on
Server 1 and Server 2.]
33. MySQLd: Automatic/Re-Start
Upon server reboot, an automatic restart is initiated via the init scripts:
shell> /etc/rc.d/init.d/mysql.server
shell> /etc/rc.d/init.d/mysql-cluster
To start mysqld manually:
shell> ps -ef | grep mysqld
shell> sudo service mysql.server status
shell> sudo service mysql.server start
shell> sudo service mysql.server status
34. MySQL Cluster: Demo Cluster Replication
Demo data replication on Dimension and Cosmicray
server> mysql -u root -p
mysql> select @@hostname;
mysql> show databases;
mysql> create database ClusterDemo_db;
mysql> show databases;
mysql> use ClusterDemo_db;
mysql> create table demotbl (ID int(10), Name char(35), Zip int(10));
mysql> desc demotbl;
mysql> insert into demotbl values(001, 'Kabul', 23508);
mysql> insert into demotbl values(002, 'Qandahar', 20365);
mysql> insert into demotbl values(003, 'Herat', 54231);
mysql> select * from demotbl;
35. MySQL Cluster: Demo Cluster Replication
Demo data replication on Dimension and Cosmicray
mysql> select @@hostname;
mysql> show databases;
mysql> use ClusterDemo_db;
mysql> show tables from ClusterDemo_db;
mysql> select * from demotbl;
mysql> alter table demotbl ENGINE=NDBCLUSTER;
mysql> create table demo (ID int(10), Name char(35), Zip int(10)) ENGINE=NDBCLUSTER;
mysql> show tables from ClusterDemo_db;
mysql> select * from demotbl;
mysql> drop table demotbl;
mysql> drop database ClusterDemo_db;
Is data equally replicated/redistributed on all nodes?
ndb_mgm> all report memory
36. MySQL Cluster: Portal database Logs
Viewing the cluster logs on Dimension and Cosmicray
shell> cd /l01/mysql/logs
shell> more ndb_3_trace.log.next
shell> more ndb_3_error.log
shell> more ndb_3_trace.log.5
shell> more ndb_6_out.log
shell> more ndb_6_cluster.log
shell> more mysqld.log
shell> more ndb_3_out.log
38. MySQL Cluster: Backup & Restore
A backup creates a snapshot of all NDB data in the cluster at a given
point in time. It consists of:
Metadata => BACKUP-backup_id.node_id.ctl
Table records => BACKUP-backup_id-0.node_id.data
Transaction log => BACKUP-backup_id.node_id.log
Backup is executed with the “START BACKUP” command in the management
client; restore uses the separate “ndb_restore” utility.
ndb_mgm > START BACKUP NOWAIT;
ndb_mgm > ALL REPORT BACKUPSTATUS;
ndb_mgm > ALL REPORT MEMORY;
39. MySQL Cluster: Backup & Restore
ndb_restore, the cluster restore program, is implemented as a separate
command-line utility with the following options:
Server> ndb_restore -c 192.168.82.80 -b 1 -n 3 -r /backup-cluster/BACKUP/BACKUP-1
-c connection string (management server)
-b backup ID (= 1)
-n node ID (= 3)
-r restore table data
-m restore metadata
The final argument is the backup location (/backup-cluster/BACKUP/BACKUP-1)
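Putting the options together, a full restore typically runs in two passes: metadata once, then data for each data node. A sketch for our two data nodes (node IDs 2 and 3 from the structure table; the cluster should be restarted with clean data nodes first):

```
Server> ndb_restore -c 192.168.82.80 -b 1 -n 2 -m /backup-cluster/BACKUP/BACKUP-1  => restore metadata (run once)
Server> ndb_restore -c 192.168.82.80 -b 1 -n 2 -r /backup-cluster/BACKUP/BACKUP-1  => restore data for node 2
Server> ndb_restore -c 192.168.82.80 -b 1 -n 3 -r /backup-cluster/BACKUP/BACKUP-1  => restore data for node 3
```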
41. MySQL Cluster: Monitoring
Through continuous monitoring we ensure that the databases are up and
that the servers have enough resources (memory, disk and CPU) to perform
their functions efficiently.
We also ensure that the databases are always up and all jobs are actively running.
MySQL monitoring tools include:
a. Nagios
b. Crontab/scripts
c. OEM
d. Email alerts
42. MySQL Cluster: Monitoring-Nagios
Nagios enables distributed monitoring of all cluster nodes:
disk, load, ndbd, ndb_mgmd, telnet and other processes
43. MySQL Cluster: Monitoring-Scripts
Scheduled cron jobs enable us to monitor several processes:
The check-process script “check_mysql.sh” runs every five (5) mins to ensure that
processes are running. If one is down, “start_mysql.sh” is invoked and an email notification is sent.
The ndbd daemon is checked every five mins
The ndb_mgmd daemon is checked every five mins
The disk-monitor script “diskmon.sh” runs every five (5) mins to check disk usage.
Notifications are generated when disk usage grows beyond the warning and alert
thresholds.
The backup script “mysql_backup.sh” runs on a fixed schedule.
Scheduled checks include:
Monitoring that the ndbd and ndb_mgmd daemons are up and active
Monitoring that the database instance is up
Monitoring disk space: warning (85%) and alert (90%)
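The heart of such a check script can be sketched as a small shell function (a minimal sketch, assuming pgrep is available; the restart hook and mail address are illustrative placeholders, not the actual contents of check_mysql.sh):

```shell
#!/bin/sh
# Sketch of a cron-driven process check: print status of a named process;
# if it is down, a restart script would be invoked and an alert emailed.
check_process() {
    name="$1"
    if pgrep -x "$name" > /dev/null 2>&1; then
        echo "$name is running"
    else
        echo "$name is down"
        # /l01/scripts/start_mysql.sh                                # illustrative restart hook
        # echo "$name down" | mail -s "ALERT: $name" dba@example.com # illustrative alert
    fi
}

# Run from cron every five minutes, e.g.:
# */5 * * * * /l01/scripts/check_mysql.sh
check_process mysqld
check_process ndbd
```

The diskmon.sh check follows the same pattern, with df output compared against the 85%/90% thresholds.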
44. Conclusion:
MySQL NDB Cluster is designed to deliver a high-availability,
fault-tolerant database where no single failure results in loss of service.
The cluster database provides automatic failover, self-healing, a
shared-nothing architecture and no single point of failure.
Fault tolerance of a cluster database depends on the following factors:
The architectural choice of configuration, the placement of nodes on hosts, and
the resources those hosts depend on.
Eliminating single points of failure that could result in hosts being lost.
Hosting the management nodes on separate hosts to improve the fault-tolerance
of the solution.
49.–64. MySQL Cluster: Demo Partitioning (animation)
[Diagram, built up across slides 49–64: table T1 is split into four partitions,
P1–P4, across four data nodes. Node Group 1 (Server 1) contains Data Node 1,
which stores the primary fragment F1 of P1 and a secondary fragment of P3, and
Data Node 2, which stores the primary fragment F3 of P3 and a secondary
fragment of P1. Node Group 2 (Server 2) contains Data Node 3, which stores the
primary fragment F2 of P2 and a secondary fragment of P4, and Data Node 4,
which stores the primary fragment F4 of P4 and a secondary fragment of P2.]
Editor's Notes
The CAP Theorem states that, in a distributed system (a collection of interconnected nodes that share data.), you can only have two out of the following three guarantees across a write/read pair: Consistency, Availability, and Partition Tolerance - one of them must be sacrificed.
Consistency means that data is the same across the cluster, so you can read or write to/from any node and get the same data.
Availability means the ability to access the cluster even if a node in the cluster goes down.
Partition Tolerance means that the cluster continues to function even if there is a "partition" (communications break) between two nodes (both nodes are up, but can't communicate).
In order to get both availability and partition tolerance, you have to give up consistency. Consider two nodes, X and Y, in a master-master setup. Now there is a break in network communications between X and Y, so they can't sync updates. At this point you can either:
A) Allow the nodes to get out of sync (giving up consistency), or
B) Consider the cluster to be "down" (giving up availability)
All the combinations available are:
CA - data is consistent between all nodes - as long as all nodes are online - and you can read/write from any node and be sure that the data is the same, but if you ever develop a partition between nodes, the data will be out of sync (and won't re-sync once the partition is resolved).
CP - data is consistent between all nodes, and maintains partition tolerance (preventing data desync) by becoming unavailable when a node goes down.
AP - nodes remain online even if they can't communicate with each other and will resync data once the partition is resolved, but you aren't guaranteed that all nodes will have the same data (either during or after the partition)
In our configuration we host one management node, one data node and one mysql node on each server. In addition, both data nodes form one node group.
Advantages:
When we lose one management node the cluster will still be up.
When we lose one data node the cluster will still be up.
However, when we lose both data nodes the cluster will be down and we cannot provide service to our customers.
An example of our in-house Network database; dimension and cosmicray
1. The following figure shows how MySQL Cluster creates primary and secondary fragments of each partition. We have configured the cluster to use two physical data nodes with two replicas.
2. All of the data nodes responsible for the same fragments form a Node Group (NG).
3. The cluster automatically creates the node groups from the number of replicas and data nodes specified. Updates are synchronously replicated between members of the node group to protect against data loss and enable failover in the event of a node failure.
4. Data held within the cluster is partitioned, with each node group being responsible for two or more fragments.
5. If any single data node is lost, the other data nodes within its node group will continue to provide service.
6. The management node (ndb_mgmd process) is required when adding nodes to the cluster.
7. A heartbeat protocol is used between the data nodes in order to identify when a node has been lost.
The diagram illustrates a typical small configuration with data nodes from different node groups stored on two different physical hosts and a management node on the same server. If any single node (process) or physical host is lost, service can continue.
The primary fragment of P1 is stored on data node 1 and a secondary replica (backup fragment) is stored on data node 2;
Redundancy/mirrored (RAID-1)
The primary fragment of P2 is stored on data node 2 and a secondary replica (backup fragment) is stored on data node 1;
Redundancy/mirrored (RAID-1)
Scenario 1
If data node 1 fails, the cluster will still be up by failing over to data node 2.
We are still able to maintain data consistency, partition tolerance and availability.
Scenario 2
If data node 2 fails, the cluster will still be up by failing over to data node 1.
We are still able to maintain data consistency, partition tolerance and availability.
Scenario 3
If both data nodes 1 and 2 fail, the cluster fails and shuts down, because data will no longer be consistent and service will not be available to customers.
Consistency and availability can no longer be maintained.
Using the management client to maintain nodes
Log in to the client:
server> ndb_mgm
Maintain nodes:
ndb_mgm> SHOW
ndb_mgm> <id> STATUS
ndb_mgm> <id> STOP
ndb_mgm> <id> START
Find out the status of the cluster:
Server> ndb_mgm -e "SHOW"
Start nodes with sudo. Note that the mysql user does not have sudo rights; you therefore have to start them from appadm.
Start the management node first, then the data nodes, then the mysql nodes.
Two processes run when the cluster database is started: the primary (data node) process and the angel process.
The angel process monitors and attempts to restart the data node process.
When the primary process is killed, the angel process is able to start the primary process back up.
[appadm]Server> ps -ef | grep ndb
The ndbd process showing 0 memory and 0 CPU usage is the angel process. (It actually does use a very small amount of each, of course.) It simply checks whether the main ndbd process (the primary data node process that actually handles the data) is running. If permitted to do so (for example, if the StopOnError configuration parameter is set to false; see Section 5.2.1, "NDB Cluster Data Node Configuration Parameters"), the angel process tries to restart the primary data node process.
shell> ps -ef | grep mysqld
shell> sudo service mysql.server status
ERROR! MySQL is not running
shell> sudo service mysql.server start
Starting MySQL.. SUCCESS!
shell> sudo service mysql.server status
SUCCESS! MySQL running (9872)
Log in to the MySQL database:
Server> mysql -u root -p
Enter password: xxxxxxxxxxxxx
Demo data replication on Cosmicray Dimension
mysql> select @@hostname;
mysql> show databases;
mysql> use ClusterDemo_db;
mysql> show tables from ClusterDemo_db;
mysql> select * from demotbl;
mysql> alter table demotbl ENGINE=NDBCLUSTER;
mysql> create table demo (ID int(10), Name char(35), Zip int(10)) ENGINE=NDBCLUSTER;
mysql> show tables from ClusterDemo_db;
mysql> select * from demotbl;
mysql> drop table demotbl;
mysql> drop database ClusterDemo_db;
Is data equally replicated/redistributed on all nodes?
ndb_mgm> all report memory
CLUSTER SHUTDOWN:
shell> ndb_mgm -e shutdown
CLUSTER RESTART:
shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini
Use the ndb_mgm client to verify that both data nodes have started successfully.
shell> sudo reboot [restart server]
shell> sudo service mysql.server restart/stop/status
shell> cd /l01/mysql/logs
/l01/mysql/logs..> tail ndb_1_cluster.log
This model shows partitioning of data from one object (table) in the database.
The primary fragment of P1 is stored on data node 1 and a secondary replica (backup fragment) is stored on data node 2; redundancy/mirrored (RAID-1).
The primary fragment of P3 is stored on data node 2 and a secondary replica is stored on data node 1; redundancy/mirrored (RAID-1).
The primary fragment of P2 is stored on data node 3 and a secondary replica is stored on data node 4; redundancy/mirrored (RAID-1).
The primary fragment of P4 is stored on data node 4 and a secondary replica is stored on data node 3; redundancy/mirrored (RAID-1).
Data nodes 1 and 2 are housed in one server, and data nodes 3 and 4 are housed in a different server. This provides data redundancy. However, for better redundancy node 2 should have been in server 2 and node 3 in server 1.
Scenario 1
If data node 1 fails, the cluster will still be up by failing over to data nodes 2, 3 and 4.
We are still able to maintain data consistency, partition tolerance and availability.
Scenario 2
If data nodes 1 and 4 fail, the cluster will still be up by failing over to data nodes 2 and 3.
We are still able to maintain data consistency, partition tolerance and availability.
Scenario 3
If data nodes 2 and 3 fail, the cluster will still be up by failing over to data nodes 1 and 4.
We are still able to maintain data consistency, partition tolerance and availability.
WHEN FAILED NODES ARE BROUGHT BACK ONLINE:
When failed nodes are brought back online, there is low partition latency during the self-healing period.
Scenario 4
If data nodes 1 and 2 fail, the cluster will shut down, because data will not be consistent and will not be available to customers.
Consistency and availability can no longer be maintained.
Scenario 5
If data nodes 1, 2 and 3 fail, the cluster will shut down, because data will not be consistent and will not be available to customers.
Consistency and availability can no longer be maintained.