Category: LINUX

2011-03-25 09:08:01

Sequence 1 Solutions

1>Recreate node1 and node2, if you have not already done so, by logging in to node5 and executing the command:

node5# /root/nodes -l2
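Once the rebuild finishes, a quick reachability check from node5 can confirm both nodes came back up. This is only a sanity check, not part of the original sequence, and it assumes node1 and node2 resolve in the classroom environment:

node5# ping -c 3 node1
node5# ping -c 3 node2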
 
 
2>From any node in the cluster, delete any pre-existing partitions on our shared storage (the /root/RH436/HelpfulFiles/wipe_sda script makes this easy), then make sure the OS on each node has its partition table updated using the partprobe command.

node1# /root/RH436/HelpfulFiles/wipe_sda
node1,2# partprobe /dev/sda
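As an optional sanity check (not required by the lab), you can confirm on each node that the partition table is now empty:

node1,2# fdisk -l /dev/sda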
 
 
3>Configure node1 and node2 for more verbose logging. This is optional, but often helpful for troubleshooting.
Modify the default /etc/syslog.conf (on your freshly rebuilt cluster node) to send a copy of log messages to both the console and /var/log/messages.
Uncomment the kernel line (delete the '#' character at the beginning of the line) near the top of the file, and add the following line anywhere you'd like:
 
*.info;mail,authpriv,cron,kern.none  /dev/console
After editing syslog.conf, restart the service:
node1# service syslog restart
Repeat for node2.
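Assuming the stock RHEL syslog.conf layout, a quick grep should now show both the uncommented kernel line and the line you just added, something like:

node1# grep console /etc/syslog.conf
kern.*                                  /dev/console
*.info;mail,authpriv,cron,kern.none     /dev/console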
 
 
4>Install the luci RPM on node5, and the ricci and httpd RPMs on node1 and node2 of your assigned cluster.
 
node5# yum -y install luci
node1,2# yum -y install ricci httpd
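If you want to confirm the packages landed before moving on, rpm can verify the installs (purely optional):

node5# rpm -q luci
node1,2# rpm -q ricci httpd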
 
 
5>Start the ricci service on node1 and node2, and configure it to start on boot.
 
node1,2# service ricci start
node1,2# chkconfig ricci on
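A brief verification that ricci is running and enabled for its default runlevels (optional):

node1,2# service ricci status
node1,2# chkconfig --list ricci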
 
 
6>Initialize the luci service on node5 and create an administrative user named admin with a password of redhat.
 
node5# luci_admin init
 
 
7>Restart luci (and configure it to persist across a reboot) and open the web page the command output suggests. Use the web browser on your local classroom machine to access the web page.
 
node5# chkconfig luci on; service luci restart
Open a web browser to the address shown in the command output, where X is your assigned cluster number.
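To double-check that luci is listening and will come back after a reboot, the usual service and chkconfig queries work here as well (optional):

node5# service luci status
node5# chkconfig --list luci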
 
 
8>Log in to luci using admin as the Login Name and redhat as the Password.
 
 
9>From the "Luci Homebase" page, select the cluster tab near the top and then select "Create a New Cluster" from the left sidebar. Enter a cluster name of clusterX, where X is your assigned cluster number. Enter the fully-qualified name for your two cluster nodes and the password for the root user on each. Make sure that "Download packages" is pre-selected, then select the "Check if node passwords are identical" option. All other options can be left as-is. Do not click the Submit button yet!
 
cXn1.example.com redhat
cXn2.example.com redhat
 
 
10>To complete the fencing setup, we need to configure node5 as a simple single-node cluster with the same fence_xvm.key as the cluster nodes. Complete the following three steps:
First, install the cman packages on node5, but do not start the cman service yet.
 
node5# yum -y install cman
Second, on node5, copy the file /root/node5_cluster.conf to /etc/cluster/cluster.conf.

node5# cp /root/node5_cluster.conf /etc/cluster/cluster.conf

Third, copy the fence_xvm.key from one of the existing cluster nodes into /etc/cluster on node5.

node5# scp node1:/etc/cluster/fence_xvm.key /etc/cluster
 
Once the previous three steps are completed, start the cman service on node5 and make sure it persists across a reboot.

node5# chkconfig cman on; service cman start
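Once cman is up on node5 you can confirm that the single-node cluster formed; cman_tool reports membership and quorum status (optional check, run on node5):

node5# cman_tool status
node5# cman_tool nodes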
 
 
11>Verify the web server is working properly by pointing a web browser on your local workstation at the service's virtual IP address, or by running the command:
local# elinks -dump
Verify the virtual IP address and cluster status with the following commands:
 
node1# ip addr list
node1,2# clustat
 
 
12>If the previous step was successful, try to relocate the service to the other node in the cluster using the luci interface, and verify that it worked. (You may need to refresh the luci status screen to see the service name change from red to green; alternatively, you can continuously monitor the service status with the clustat -i 1 command from one of the node terminal windows.)
Cluster List --> clusterX --> Services --> Choose a Task... --> Relocate this service to cXn2.example.com --> Go
Note: the service can also be manually relocated using the command:
node1# clusvcadm -r webby -m cXn2.example.com
from any active node in the cluster.
 
 
13>While continuously monitoring the cluster service status from node1, reboot node2 and watch the state of webby.
From one terminal window on node1:
node1# clustat -i 1
From another terminal window on node1:
node1# tail -f /var/log/messages
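The reboot itself can be issued from the node2 console (or a third terminal); with clustat -i 1 running on node1 you should see webby fail over to node1 while node2 is down:

node2# reboot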
 
GFS: to create a GFS file system on the shared storage with the lock_dlm protocol, five journals, and a 4096-byte block size (substitute your cluster name and a file system name of your choice), run:

gfs_mkfs -p lock_dlm -t clustername:fsname -j 5 -b 4096 /dev/sdb1
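A minimal follow-up sketch, assuming /dev/sdb1 is the shared device and /mnt/gfs is a mount point you create on every node (neither is specified above): once the file system exists, it can be mounted cluster-wide with the gfs mount type:

node1,2# mkdir -p /mnt/gfs
node1,2# mount -t gfs /dev/sdb1 /mnt/gfs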
 