
Category: MySQL/PostgreSQL

2014-03-06 15:57:02

# -------------------------------------------------------------------------------------------------------------------------- #
# -                                  Percona XtraDB Cluster 5.6                                                             - #
# -------------------------------------------------------------------------------------------------------------------------- #

######################################################    CentOS 6.x    ######################################################
yum install nc rsync libaio perl perl-CPAN perl-Time-HiRes perl-DBD-MySQL
# The download URL was stripped by the blog platform; the .rf suffix marks a RepoForge
# (RPMforge) build, so fetch socat-1.7.2.1-1.el6.rf.x86_64.rpm from a RepoForge mirror, e.g.:
wget http://pkgs.repoforge.org/socat/socat-1.7.2.1-1.el6.rf.x86_64.rpm
rpm -ivh socat-1.7.2.1-1.el6.rf.x86_64.rpm
rpm -e mysql-libs --nodeps

# The repository RPM URL was also stripped; Percona published it as percona-release at the time, e.g.:
rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
yum install Percona-XtraDB-Cluster-server-56 Percona-XtraDB-Cluster-client-56 Percona-XtraDB-Cluster-galera-3
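A quick check that the packages installed cleanly (a minimal sanity check, assuming nothing beyond the package names above):

rpm -qa | grep -i percona
mysqld --version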

###################################################### Ubuntu_server_13.10 #######################################################
apt-get install gcc make wget socat

apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
mv /etc/apt/sources.list /etc/apt/sources.list.ubuntu   # keep a backup of the stock Ubuntu list (note: the path is sources.list)

lsb_release -a
#----------------------
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 13.10
Release:        13.10
Codename:       saucy
#----------------------

vim /etc/apt/sources.list
#================================
# Repository URL stripped by the blog platform; Percona's apt repository (matching the key added above) restored:
deb http://repo.percona.com/apt saucy main        # "saucy" is the Ubuntu 13.10 codename
deb-src http://repo.percona.com/apt saucy main
#================================

apt-get update
apt-get install percona-xtradb-cluster-galera-3.x percona-xtradb-cluster-server-5.6 percona-xtradb-cluster-client-5.6
#================================================================================
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 percona-xtradb-cluster-client-5.6 : Depends: libdbi-perl but it is not installable
 percona-xtradb-cluster-server-5.6 : Depends: libdbi-perl but it is not installable
                                     Depends: libdbd-mysql-perl but it is not installable
                                     Depends: libaio1 (>= 0.3.93) but it is not installable
                                     Depends: percona-xtrabackup (>= 2.1.4) but it is not going to be installed
                                     Depends: iproute but it is not installable
E: Unable to correct problems, you have held broken packages.
#================================================================================

# Ubuntu 13.10 sources.list using the mirrors.163.com mirror
# (base URL stripped by the blog platform; http://mirrors.163.com/ubuntu/ restored below)
vim /etc/apt/sources.list
#===============================
deb http://mirrors.163.com/ubuntu/ saucy main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ saucy-security main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ saucy-updates main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ saucy-proposed main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ saucy-backports main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ saucy main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ saucy-security main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ saucy-updates main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ saucy-proposed main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ saucy-backports main restricted universe multiverse
#===============================
#===============================

# Switch to the sources.list with the 163 mirror, refresh the package index, then install the missing dependencies
apt-get update
apt-get install libdbi-perl libdbd-mysql-perl libaio1 iproute

# Switch sources.list back to the Percona repository, refresh again, and install Percona XtraDB Cluster
apt-get update
apt-get install percona-xtradb-cluster-galera-3.x percona-xtradb-cluster-server-5.6 percona-xtradb-cluster-client-5.6
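A quick sanity check that the cluster packages landed (nothing assumed here beyond the package names installed above):

dpkg -l | grep percona-xtradb-cluster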


##############################################           Configuring the nodes          ###############################################
Individual nodes should be configured to be able to bootstrap the cluster. More details about bootstrapping the cluster can be found in the Bootstrapping the cluster guide.
The configuration file /etc/mysql/my.cnf for the first node should look like this:
#=====================================================================================
[mysqld]
datadir=/var/lib/mysql
user=mysql

# Path to Galera library
wsrep_provider=/usr/lib/libgalera_smm.so

# Empty gcomm address is being used when cluster is getting bootstrapped
wsrep_cluster_address=gcomm://

# Cluster connection URL contains the IPs of node#1, node#2 and node#3
#wsrep_cluster_address=gcomm://10.0.0.21,10.0.0.22,10.0.0.23

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# This is a recommended tuning variable for performance
innodb_locks_unsafe_for_binlog=1

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
# (Galera requires interleaved lock mode, i.e. value 2)
innodb_autoinc_lock_mode=2

# Node #1 address
wsrep_node_address=10.0.0.21

# SST method
wsrep_sst_method=xtrabackup

# Cluster name
wsrep_cluster_name=my_ubuntu_cluster

# Authentication for SST method
wsrep_sst_auth="sstuser:s3cretPass"
#=====================================================================================

Note: For the first member of the cluster, wsrep_cluster_address should contain an empty gcomm:// while the cluster is being bootstrapped. As soon as the cluster has been bootstrapped and at least one more node has joined, that line should be removed from the my.cnf configuration file and the one where wsrep_cluster_address contains all three node addresses should be uncommented. If the node is restarted without making this change, it will bootstrap a new cluster instead of joining the existing one.
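Concretely, once the other nodes are up, the two address lines in node #1's my.cnf should be swapped so the full list is active (the same lines as in the template above, with the comment markers flipped):

#wsrep_cluster_address=gcomm://
wsrep_cluster_address=gcomm://10.0.0.21,10.0.0.22,10.0.0.23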
After this, the first node can be started with the following command:

[root@pxc1 ~]# /etc/init.d/mysql start

This command will start the first node and bootstrap the cluster (more information about bootstrapping can be found in the Bootstrapping the Cluster manual).
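Depending on the init-script version shipped with the packages, there may also be a dedicated bootstrap target that starts the node in bootstrap mode without editing wsrep_cluster_address (worth checking on your installation; not used in the walkthrough above):

[root@pxc1 ~]# /etc/init.d/mysql bootstrap-pxc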
After the first node has been started, cluster status can be checked by:

mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |
...
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
...
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
...
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
40 rows in set (0.01 sec)
This output shows that the cluster has been successfully bootstrapped.

In order to perform a successful State Snapshot Transfer using XtraBackup, a new user needs to be set up with the proper privileges:

mysql@pxc1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cretPass';
mysql@pxc1> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql@pxc1> FLUSH PRIVILEGES;
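These credentials must match the wsrep_sst_auth="sstuser:s3cretPass" line in my.cnf above, since the donor node uses them to run XtraBackup during the transfer.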

Note: The MySQL root account can also be used for setting up the State Snapshot Transfer with Percona XtraBackup, but it's recommended to use a different (non-root) user for this.

Configuration file /etc/mysql/my.cnf on the second node (pxc2) should look like this:
#=====================================================================================
[mysqld]
datadir=/var/lib/mysql
user=mysql

# Path to Galera library
wsrep_provider=/usr/lib/libgalera_smm.so

# Cluster connection URL contains IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://10.0.0.21,10.0.0.22,10.0.0.23

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# This is a recommended tuning variable for performance
innodb_locks_unsafe_for_binlog=1

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
# (Galera requires interleaved lock mode, i.e. value 2)
innodb_autoinc_lock_mode=2

# Node #2 address
wsrep_node_address=10.0.0.22

# Cluster name
wsrep_cluster_name=my_ubuntu_cluster

# SST method
wsrep_sst_method=xtrabackup

#Authentication for SST method
wsrep_sst_auth="sstuser:s3cretPass"
#=====================================================================================

The second node can be started with the following command:

[root@pxc2 ~]# /etc/init.d/mysql start

After the server has been started, it should receive the state snapshot transfer automatically.
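While the transfer runs, progress can be followed on the joiner (a sketch: the error-log path is the Ubuntu package default, and innobackup.prepare.log is where the xtrabackup SST method configured above logs on the joiner side):

tail -f /var/log/mysql/error.log /var/lib/mysql/innobackup.prepare.log

Once the transfer completes, cluster status can be checked on both nodes. This is example output from the second node (pxc2):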

mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | b598af3e-ace3-11e2-0800-3e90eb9cd5d3 |
...
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
...
| wsrep_cluster_size         | 2                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
...
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
40 rows in set (0.01 sec)
This output shows that the new node has been successfully added to the cluster.
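As a final check, a row written on one node should be immediately readable on the other. A minimal smoke test (the percona_test database and t1 table are made-up names for illustration):

pxc1 mysql> CREATE DATABASE percona_test;
pxc1 mysql> CREATE TABLE percona_test.t1 (id INT PRIMARY KEY);
pxc1 mysql> INSERT INTO percona_test.t1 VALUES (1);

pxc2 mysql> SELECT * FROM percona_test.t1;  -- should return the row inserted on pxc1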


####################################################################################




