MySQL 6.0: Cluster + Replication
Yejr (imysql@gmail.com)
2008-02-20

1. Introduction

This article is a quick trial of cluster + replication on MySQL 6.0. No in-depth testing was done, so treat it only as a reference for installation and configuration.

2. Preparation

For a plain cluster you normally need 4 machines, or even just 3 by putting the management node and the SQL node on the same machine, though that is not recommended. For cluster + replication, double the number of machines.

2.1. Plain cluster

Node                       IP Address
Management (MGM) node      192.168.0.1
MySQL server (SQL) node    192.168.0.2
Data (NDBD) node "A"       192.168.0.3
Data (NDBD) node "B"       192.168.0.4

2.2. Cluster + replication

Master:
Node                       IP Address
Management (MGM) node      192.168.0.1
MySQL server (SQL) node    192.168.0.2
Data (NDBD) node "A"       192.168.0.3
Data (NDBD) node "B"       192.168.0.4

Slave:
Node                       IP Address
Management (MGM) node      192.168.1.1
MySQL server (SQL) node    192.168.1.2
Data (NDBD) node "A"       192.168.1.3
Data (NDBD) node "B"       192.168.1.4
3. Configuration

MySQL is assumed to be installed in its default path, /usr/local/mysql.

3.1. Management node

First create the data directory /usr/local/mysql/data, then edit the configuration file:

vi /usr/local/mysql/config.ini

# Default parameters shared by all data nodes
[ndbd default]
# Number of replicas of each table kept in the cluster.
# With 2 data nodes and NoOfReplicas=2, each node holds one replica.
NoOfReplicas=2
# Memory reserved for row data
DataMemory=1G
# Memory reserved for hash indexes
IndexMemory=1G

# Management node parameters
[ndb_mgmd]
# Node ID
id = 1
# Host of the management node
hostname=192.168.0.1
# Data directory of the management node
datadir=/usr/local/mysql/data

# Data node parameters
[ndbd]
id = 3
hostname=192.168.0.3
datadir=/usr/local/mysql/data

[ndbd]
id = 4
hostname=192.168.0.4
datadir=/usr/local/mysql/data

# SQL node parameters
[mysqld]
id = 2
hostname=192.168.0.2

# One extra API slot; add more API nodes as needed
[mysqld]
id = 5
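For the cluster + replication layout from section 2.2, the slave cluster needs a config.ini of its own with the same shape. A minimal sketch, assuming the slave side simply mirrors the master and uses the 192.168.1.x addresses from the table above:

# config.ini on the slave cluster's management node (192.168.1.1)
[ndbd default]
NoOfReplicas=2
DataMemory=1G
IndexMemory=1G

[ndb_mgmd]
id = 1
hostname=192.168.1.1
datadir=/usr/local/mysql/data

[ndbd]
id = 3
hostname=192.168.1.3
datadir=/usr/local/mysql/data

[ndbd]
id = 4
hostname=192.168.1.4
datadir=/usr/local/mysql/data

[mysqld]
id = 2
hostname=192.168.1.2

[mysqld]
id = 5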
3.2. SQL node

Apart from the usual parameters of a normal mysqld service, a SQL node only needs a few extra lines (in its [mysqld] section) to join the cluster:

vi /etc/my.cnf

# mysql cluster
# Tell mysqld to enable the ndbcluster engine
ndbcluster
# Point at the management node
ndb-connectstring=192.168.0.1

[mysql_cluster]
ndb-connectstring=192.168.0.1

3.3. Data node

The data node configuration is just as simple; it only has to name the management node's IP:

vi /etc/my.cnf

# mysql cluster
# Tell mysqld to enable the ndbcluster engine
ndbcluster
# Point at the management node
ndb-connectstring=192.168.0.1

[mysql_cluster]
ndb-connectstring=192.168.0.1

(On a host that runs only ndbd, the [mysql_cluster] group is the part that matters; the mysqld options above take effect only if a mysqld also runs there.)
4. Startup

4.1. Start the management node

Simply run:

/usr/local/mysql/bin/ndb_mgmd -f /usr/local/mysql/config.ini

4.2. Start the data nodes

The data nodes are a little special: on the very first start they need the extra option --initial; afterwards they are started without it.

First start:

/usr/local/mysql/bin/ndbd --initial

Subsequent starts:

/usr/local/mysql/bin/ndbd

If your my.cnf is not under /etc/, point ndbd at its actual location, for example:

/usr/local/mysql/bin/ndbd --defaults-file=/usr/local/mysql/my.cnf --initial

or

/usr/local/mysql/bin/ndbd --defaults-file=/usr/local/mysql/my.cnf
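While the data nodes come up you can watch them join from the management host. ndb_mgm accepts a one-shot command via -e, so a convenient check (a small sketch using the same paths as above) is:

/usr/local/mysql/bin/ndb_mgm -e show

Rerun it until both data nodes show up as connected rather than "not connected".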
4.3. Start the SQL node

Start mysqld the normal way; I am used to the init script:

service mysql start

or

/etc/init.d/mysql start
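Once the SQL node is up, a quick smoke test from the mysql client confirms that the ndbcluster engine is enabled and that the cluster accepts data (a minimal sketch; the table name t1 is just an example):

mysql> SHOW ENGINES;  -- ndbcluster should be listed with Support: YES
mysql> CREATE TABLE test.t1 (i INT) ENGINE=ndbcluster;
mysql> INSERT INTO test.t1 VALUES (1);
mysql> SELECT * FROM test.t1;  -- the row should come back, from any SQL node
mysql> DROP TABLE test.t1;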
5. Management

On the management node, run the client tool ndb_mgm:

/usr/local/mysql/bin/ndb_mgm
-- NDB Cluster -- Management Client --

# Show the cluster status
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3    @192.168.0.3  (mysql-6.0.3 ndb-6.2.6, Nodegroup: 0, Master)
id=4    @192.168.0.4  (mysql-6.0.3 ndb-6.2.6, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.1  (mysql-6.0.3 ndb-6.2.6)

[mysqld(API)]   2 node(s)
id=2    @192.168.0.2  (mysql-6.0.3 ndb-6.2.6)
id=5 (not connected, accepting connect from any host)

# Show the status of all data nodes
ndb_mgm> all status
Node 3: started (mysql-6.0.3 ndb-6.2.6)
Node 4: started (mysql-6.0.3 ndb-6.2.6)

# Report a data node's memory usage
ndb_mgm> 3 report memoryusage
Node 3: Data usage is 6%(2154 32K pages of total 32768)
Node 3: Index usage is 2%(3009 8K pages of total 131104)

# Restart node 3
ndb_mgm> 3 restart
Node 3: Node shutdown initiated
Node 3: Node shutdown completed, restarting, no start.
Node 3 is being restarted
Node 3: Started (version 6.2.6)

6. Backup and restore

6.1. Backup

Backing up the cluster is easy; just issue START BACKUP on the management node:

ndb_mgm> START BACKUP
Waiting for completed, this may take several minutes
Node 3: Backup 4 started from node 1
Node 3: Backup 4 started from node 1 completed
 StartGCP: 7847 StopGCP: 7850
 #Records: 1736024 #LogRecords: 0
 Data: 41649824 bytes Log: 0 bytes
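A running backup can also be cancelled from the same client with ABORT BACKUP followed by the backup id (id 4 here simply matches the output above):

ndb_mgm> ABORT BACKUP 4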
6.2. Restore

Let's run an actual test. First create some data through the SQL node:

mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| a              |
+----------------+
1 row in set (0.01 sec)

mysql> show create table a;
+-------+--------------------------------------------------------------------------------------------+
| Table | Create Table                                                                               |
+-------+--------------------------------------------------------------------------------------------+
| a     | CREATE TABLE `a` (
  `id` int(11) DEFAULT NULL
) ENGINE=ndbcluster DEFAULT CHARSET=latin1 |
+-------+--------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)

mysql> select count(*) from a;
+----------+
| count(*) |
+----------+
|  1733965 |
+----------+
1 row in set (0.03 sec)

mysql> checksum table a;
+--------+------------+
| Table  | Checksum   |
+--------+------------+
| test.a | 1641855839 |
+--------+------------+
1 row in set (11.61 sec)

Then take a backup with the method described above and wipe the table's data:
mysql> truncate table a;
Query OK, 0 rows affected (0.68 sec)

Next, run the following on each of the 2 data nodes to restore the data (the backup directory must match the backup id; backup 5 lives in BACKUP-5). On the first node:

/usr/local/mysql/bin/ndb_restore -n 3 -b 5 -r data/BACKUP/BACKUP-5
Nodeid = 3
Backup Id = 5
backup path = data/BACKUP/BACKUP-5
Opening file 'data/BACKUP/BACKUP-5/BACKUP-5.3.ctl'
File size 8120 bytes
Ndb version in backup files: ndb-6.2.6
Connected to ndb!!
Opening file 'data/BACKUP/BACKUP-5/BACKUP-5-0.3.Data'
File size 20809008 bytes
_____________________________________________________
Processing data in table: sys/def/NDB$EVENTS_0(1) fragment 0
_____________________________________________________
Processing data in table: test/def/a(5) fragment 0
_____________________________________________________
Processing data in table: mysql/def/ndb_apply_status(4) fragment 0
_____________________________________________________
Processing data in table: mysql/def/NDB$BLOB_2_3(3) fragment 0
_____________________________________________________
Processing data in table: sys/def/SYSTAB_0(0) fragment 0
_____________________________________________________
Processing data in table: mysql/def/ndb_schema(2) fragment 0
Opening file 'data/BACKUP/BACKUP-5/BACKUP-5.3.log'
File size 44 bytes
Restored 866287 tuples and 0 log entries

NDBT_ProgramExit: 0 - OK

Check on the SQL node; only node 3's fragments have been restored so far, so roughly half of the rows are visible:

mysql> select count(*) from a;
+----------+
| count(*) |
+----------+
|   866287 |
+----------+
1 row in set (0.00 sec)
Then run the restore on the other data node:

/usr/local/mysql/bin/ndb_restore -n 4 -b 5 -r data/BACKUP/BACKUP-5
Nodeid = 4
Backup Id = 5
backup path = data/BACKUP/BACKUP-5
Opening file 'data/BACKUP/BACKUP-5/BACKUP-5.4.ctl'
File size 8120 bytes
Ndb version in backup files: ndb-6.2.6
Connected to ndb!!
Opening file 'data/BACKUP/BACKUP-5/BACKUP-5-0.4.Data'
File size 20841232 bytes
_____________________________________________________
Processing data in table: sys/def/NDB$EVENTS_0(1) fragment 1
_____________________________________________________
Processing data in table: test/def/a(5) fragment 1
_____________________________________________________
Processing data in table: mysql/def/ndb_apply_status(4) fragment 1
_____________________________________________________
Processing data in table: mysql/def/NDB$BLOB_2_3(3) fragment 1
_____________________________________________________
Processing data in table: sys/def/SYSTAB_0(0) fragment 1
_____________________________________________________
Processing data in table: mysql/def/ndb_schema(2) fragment 1
Opening file 'data/BACKUP/BACKUP-5/BACKUP-5.4.log'
File size 44 bytes
Restored 867678 tuples and 0 log entries

NDBT_ProgramExit: 0 - OK

Check one more time:

mysql> checksum table a;
+--------+------------+
| Table  | Checksum   |
+--------+------------+
| test.a | 1641855839 |
+--------+------------+
1 row in set (11.62 sec)
The checksum matches the value recorded before the backup; the restore is complete.

7. Replication between clusters

7.1. Overview

Cluster + replication has been supported since MySQL 5.1, which is a pleasant surprise for anyone building a high-availability solution: you get a real-time master-slave copy of the data plus cluster-based load balancing. The weak point, judging from my test results, is that performance of this scheme is still not very high and leaves room for improvement.

Configuring cluster + replication is actually simple: set up 2 independent clusters, then make one cluster's SQL node a slave of the other cluster's SQL node. You can even build architectures such as:

- 3 clusters with 6 SQL nodes, forming a replication ring of 3 points.
- 3 clusters with 6 SQL nodes, forming a replication ring of 6 points, putting the second SQL node of each cluster to use as well.
7.2. Configuration

7.2.1. On the master

Replication under the cluster is row-based, so binlog_format must be set to ROW or MIXED.
Relative to an ordinary mysqld configuration, only 2 lines or so need to be added:

server-id = 1
binlog_format = "ROW" #or MIXED

7.2.2. On the slave

Newer MySQL versions no longer take the master's details from my.cnf; they are managed with the CHANGE MASTER statement instead. So the slave only needs a few extra lines like these:

server-id = 2
relay-log-purge=1
skip-slave-start
replicate-ignore-db=mysql

7.3. Starting the slave

Start mysqld on both master and slave the normal way. SHOW PROCESSLIST on either server shows just 2 threads:

mysql> SHOW PROCESSLIST;
+----+-------------+-----------+------+---------+------+-----------------------------------+------------------+
| Id | User        | Host      | db   | Command | Time | State                             | Info             |
+----+-------------+-----------+------+---------+------+-----------------------------------+------------------+
|  1 | system user |           |      | Daemon  |    0 | Waiting for event from ndbcluster | NULL             |
|  2 | root        | localhost | NULL | Query   |    0 | NULL                              | show processlist |
+----+-------------+-----------+------+---------+------+-----------------------------------+------------------+
2 rows in set (0.00 sec)
Now look at SHOW SLAVE STATUS on the slave:

mysql> show slave status\G
Empty set (0.00 sec)

No replication has been configured yet, so first create an account on the master for replication:

mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO rep@'192.168.1.2' IDENTIFIED BY 'rep';
Query OK, 0 rows affected (0.00 sec)

Then point the slave at the master with CHANGE MASTER:

mysql> CHANGE MASTER TO MASTER_HOST='192.168.0.2', MASTER_USER='rep', MASTER_PASSWORD='rep';
Query OK, 0 rows affected (0.01 sec)

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State:
                Master_Host: 192.168.0.2
                Master_User: rep
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File:
        Read_Master_Log_Pos: 4
             Relay_Log_File: slave-relay-bin.000001
              Relay_Log_Pos: 4
      Relay_Master_Log_File:
           Slave_IO_Running: No
          Slave_SQL_Running: No
            Replicate_Do_DB:
        Replicate_Ignore_DB: mysql
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 0
                 Last_Error:
               Skip_Counter: 0
        Exec_Master_Log_Pos: 0
            Relay_Log_Space: 106
            Until_Condition: None
             Until_Log_File:
              Until_Log_Pos: 0
         Master_SSL_Allowed: No
         Master_SSL_CA_File:
         Master_SSL_CA_Path:
            Master_SSL_Cert:
          Master_SSL_Cipher:
             Master_SSL_Key:
      Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
              Last_IO_Errno: 0
              Last_IO_Error:
             Last_SQL_Errno: 0
             Last_SQL_Error:
1 row in set (0.00 sec)

Run START SLAVE to bring replication up:

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

And check once more:

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.2
                Master_User: rep
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: binlog.000001
        Read_Master_Log_Pos: 256
             Relay_Log_File: slave-relay-bin.000002
              Relay_Log_Pos: 398
      Relay_Master_Log_File: binlog.000001
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
            Replicate_Do_DB:
        Replicate_Ignore_DB: mysql
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 0
                 Last_Error:
               Skip_Counter: 0
        Exec_Master_Log_Pos: 256
            Relay_Log_Space: 557
            Until_Condition: None
             Until_Log_File:
              Until_Log_Pos: 0
         Master_SSL_Allowed: No
         Master_SSL_CA_File:
         Master_SSL_CA_Path:
            Master_SSL_Cert:
          Master_SSL_Cipher:
             Master_SSL_Key:
      Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
              Last_IO_Errno: 0
              Last_IO_Error:
             Last_SQL_Errno: 0
             Last_SQL_Error:
1 row in set (0.00 sec)

Both the IO and SQL threads are running, so replication is up.

7.4. A simple test

This is left for the reader to carry out in the usual way; a minimal sketch follows.
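A small end-to-end check (the table name r1 is just an example): create an NDB table and insert a row through the master cluster's SQL node, then read it back through the slave cluster's SQL node:

-- on the master SQL node (192.168.0.2)
mysql> CREATE TABLE test.r1 (id INT) ENGINE=ndbcluster;
mysql> INSERT INTO test.r1 VALUES (1);

-- on the slave SQL node (192.168.1.2)
mysql> SELECT * FROM test.r1;  -- the row should arrive via replication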
