Category: MySQL/PostgreSQL
2012-04-17 15:26:44
Reference:
HA MySQL, write scaling using Cluster to non-cluster replication
1. Prepare for setting up replication
show master status;
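SHOW MASTER STATUS confirms that the binary log is enabled on the master SQL node. A minimal sketch of the remaining preparation (the host pattern below is an assumption; the repl account and password match the CHANGE MASTER statement used later):
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.1.%' IDENTIFIED BY 'password';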
2. Back up the MySQL Cluster database
ndb_mgm> START BACKUP WAIT COMPLETED;
Waiting for completed, this may take several minutes
Node 1: Backup 3 started from node 50
Node 1: Backup 3 started from node 50 completed
StartGCP: 345698 StopGCP: 345701
#Records: 2113 #LogRecords: 0
Data: 58044 bytes Log: 0 bytes
Note the StopGCP value (345701 here); it is needed later to find the binlog position at which to start replication.
cd /data/mcm_data/clusters/c01_altium
find . -name BACKUP
./1/data/BACKUP
ndb_mgm -e "all status"
Connected to Management Server at: localhost:1186
Node 1: started (mysql-5.5.19 ndb-7.2.4)
Node 2: started (mysql-5.5.19 ndb-7.2.4)
Data node IDs: 1, 2
3. Convert the backup files
mkdir /data/dump
Export the metadata. Note that this is NDB metadata and cannot be used on a SQL node or any other standalone MySQL server.
ndb_restore --backupid=1 --nodeid=1 --backup-path=./1/data/BACKUP/BACKUP-1 \
--print-meta \
--fields-enclosed-by=\" \
--include-databases=test
Example output:
Successfully printed table: -- test/def/b --
Version: 1
Fragment type: 9
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 2
Number of primary keys: 1
Length of frm data: 207
Row Checksum: 1
Row GCI: 1
SingleUserMode: 0
ForceVarPart: 1
FragmentCount: 2
ExtraRowGciBits: 0
ExtraRowAuthorBits: 0
TableStatus: Retrieved
-- Attributes --
name Char(100;latin1_swedish_ci) NULL AT=FIXED ST=MEMORY
$PK Bigunsigned PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
Export the data on all data nodes:
ndb_restore --backupid=3 --nodeid=1 --backup-path=./1/data/BACKUP/BACKUP-3 \
--print-data --print-log --append \
--fields-enclosed-by=\" \
--include-databases=test \
--tab=/data/dump
wc -l /data/dump/*
0 /data/dump/a.txt
20 /data/dump/b.txt
0 /data/dump/c.txt
0 /data/dump/d.txt
0 /data/dump/e.txt
ndb_restore --backupid=3 --nodeid=2 --backup-path=./2/data/BACKUP/BACKUP-3 \
--print-data --print-log --append \
--fields-enclosed-by=\" \
--include-databases=test \
--tab=/data/dump
wc -l /data/dump/*
1 /data/dump/a.txt
20 /data/dump/b.txt
1 /data/dump/c.txt
1 /data/dump/d.txt
1 /data/dump/e.txt
You can see that each data node exported half of the data.
cat /data/dump/b.txt
"c" "296" "1"
"d" "755" "4"
"e" "626" "7"
"b" "238" "8"
"a" "965" "11"
"e" "620" "12"
"d" "748" "13"
"b" "231" "15"
"a" "961" "16"
"c" "296" "24"
"d" "755" "25"
"c" "296" "27"
"c" "296" "28"
"b" "231" "29"
"a" "961" "30"
"e" "626" "31"
"e" "620" "34"
"e" "626" "36"
"b" "231" "39"
"d" "748" "40"
The last column is the hidden PK; its values do not repeat across data nodes.
Specifying these options explicitly does not behave correctly, but they are the default values anyway, so do not set them by hand:
--fields-terminated-by="\t" --lines-terminated-by="\n"
4. Export and import the metadata
mysqldump --no-data --port=3306 --protocol=tcp --user=root test > meta.sql
Change the storage engine:
sed -i -e 's#ENGINE=ndbcluster#ENGINE=innodb#' meta.sql
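For illustration only (hypothetical table definition, not taken from this dump), the sed pass rewrites a line such as
) ENGINE=ndbcluster DEFAULT CHARSET=latin1;
into
) ENGINE=innodb DEFAULT CHARSET=latin1;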
mysql --port=13306 --protocol=tcp --user=em_admin --password=admindemo test < meta.sql
5. Import the backup data
On node 1, add the --delete option to clear any existing data first:
for filename in `ls -1 /data/dump/*.txt`;
do mysqlimport --local --delete --port=13306 --protocol=tcp --user=em_admin --password=admindemo --fields-enclosed-by=\" test $filename;
done
On the other nodes:
for filename in `ls -1 /data/dump/*.txt`;
do mysqlimport --local --host=192.168.1.12 --port=13306 --protocol=tcp --user=em_admin --password=admindemo --fields-enclosed-by=\" test $filename;
done
Note that --local must be given in every case so the data files are read from the client host; by default the file name refers to a file on the server host.
For a table without an explicit PK, the hidden PK ends up in the last column; when importing into InnoDB that extra last column is ignored.
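As a quick sanity check (a sketch reusing the test.b table and dump file from above), the row count loaded into InnoDB can be compared with the dump file:
mysql --port=13306 --protocol=tcp --user=em_admin --password=admindemo -e 'SELECT COUNT(*) FROM test.b;'
wc -l /data/dump/b.txt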
7. Start replication
Find the binlog file and position corresponding to the StopGCP:
show master status;
select file,position,gci from mysql.ndb_binlog_index
WHERE gci>=345701 ORDER BY gci ASC LIMIT 5;
CHANGE MASTER TO MASTER_HOST='masterA',
MASTER_PORT=3306,
MASTER_USER='repl',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='@file',
MASTER_LOG_POS=@pos;
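CHANGE MASTER TO does not accept user variables, so @file and @pos above are placeholders: substitute the actual File and Position values returned by the query. With hypothetical values it would read, for example:
CHANGE MASTER TO MASTER_HOST='masterA',
MASTER_PORT=3306,
MASTER_USER='repl',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='masterA-bin.000012',
MASTER_LOG_POS=4256;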
START SLAVE ;
SHOW SLAVE STATUS\G
99. Notes
ndb_apply_status, epochs and ndb_binlog_index
The mysql.ndb_apply_status table is very important, as it records which epochs from the Cluster the slave has applied.
Epochs are consistent units, each holding 10ms (controlled by TimeBetweenEpochs in config.ini) of committed transactions.
On the master side, epochs are mapped to a binary log file name and position in the mysql.ndb_binlog_index table.
The mysql.ndb_binlog_index table is a MyISAM table local to each master; it is the glue between the epoch the slave has applied and the binary log file and position that epoch corresponds to on the master.
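For example, when re-pointing or failing over the slave, the two tables are combined roughly like this (a sketch following the standard NDB replication procedure; the epoch value must be carried over by hand since the two queries run on different servers):
-- on the slave: highest epoch applied so far
SELECT MAX(epoch) FROM mysql.ndb_apply_status;
-- on the master: binlog file and position of the first epoch after that value
SELECT SUBSTRING_INDEX(File, '/', -1) AS binlog_file, Position
FROM mysql.ndb_binlog_index
WHERE epoch > <max_epoch_from_slave>
ORDER BY epoch ASC LIMIT 1;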