Category: System Operations & Maintenance

2014-12-31 15:02:20

 
System Configuration
 
1. Sync the clock, then disable the firewall and remove it from boot startup
/usr/sbin/ntpdate ntp.api.bz
/etc/init.d/iptables stop ; chkconfig iptables off
2. Configure the network and add the hosts entries (all nodes)
[root@NameNode01 ~]# vim /etc/hosts
10.0.2.75       NameNode01
10.0.2.216      DataNode01
10.0.2.217      DataNode02
10.0.2.218      DataNode03
10.0.2.219      NameNode-bak
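A minimal sketch for pushing the same hosts file to the rest of the cluster (assumes root SSH is still password-based at this stage; the node list mirrors the table above):

for h in 10.0.2.216 10.0.2.217 10.0.2.218 10.0.2.219; do scp /etc/hosts root@${h}:/etc/hosts; done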
 
echo '*       soft    nofile    32768' >> /etc/security/limits.conf ; echo '*       hard    nofile    32768' >> /etc/security/limits.conf
echo 'ulimit -u 5120' >> /etc/profile ; source /etc/profile
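The limits take effect on the next login and can then be verified:

ulimit -n    # expect 32768 (open files)
ulimit -u    # expect 5120 (max user processes)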
 
Dual-NIC bonding
 
/etc/init.d/NetworkManager  stop ; chkconfig NetworkManager off ;chkconfig --list NetworkManager
chkconfig network on ; chkconfig --list network
 
 
[root@DC01DR02R10C03NNJTDN01 script]# cat bonding.sh
#!/bin/bash
# arry: bond device, slave NIC 1, slave NIC 2, IP, netmask, gateway
arry=(bond0 em1 em2 10.101.1.231 255.255.255.0 10.101.1.1)

# write the ifcfg-bond* file
echo "DEVICE=${arry[0]}
IPADDR=${arry[3]}
NETMASK=${arry[4]}
GATEWAY=${arry[5]}
ONBOOT=yes
BOOTPROTO=static
USERCTL=no" >/tmp/ifcfg-${arry[0]}
/bin/cat /tmp/ifcfg-${arry[0]} > /etc/sysconfig/network-scripts/ifcfg-${arry[0]}

# write the ifcfg file for the first slave NIC
echo "DEVICE=${arry[1]}
USERCTL=no
ONBOOT=yes
MASTER=${arry[0]}
SLAVE=yes
BOOTPROTO=none" >/tmp/ifcfg-${arry[1]}
/bin/cat /tmp/ifcfg-${arry[1]} > /etc/sysconfig/network-scripts/ifcfg-${arry[1]}

# write the ifcfg file for the second slave NIC
echo "DEVICE=${arry[2]}
USERCTL=no
ONBOOT=yes
MASTER=${arry[0]}
SLAVE=yes
BOOTPROTO=none" >/tmp/ifcfg-${arry[2]}
/bin/cat /tmp/ifcfg-${arry[2]} > /etc/sysconfig/network-scripts/ifcfg-${arry[2]}

# update modprobe.conf / modprobe.d
BAKFILE=/etc/.modconf

echo "Please Select Your Bond Mode:(balance-rr/active-backup)or(0/1)?"
read MODE

if [ -f /etc/modprobe.d/dist.conf ]; then
        MODCONF=/etc/modprobe.d/dist.conf    # RHEL/CentOS 6
else
        MODCONF=/etc/modprobe.conf           # RHEL/CentOS 5
fi

cp $MODCONF $BAKFILE    # keep a backup before appending

echo "alias ${arry[0]} bonding" >> $MODCONF
echo "options ${arry[0]} miimon=100 mode=$MODE" >> $MODCONF
# enslave the configured NICs at boot (was hardcoded to eth0/eth1)
echo "ifenslave ${arry[0]} ${arry[1]} ${arry[2]}" >> /etc/rc.d/rc.local

# restart the network
echo "System will restart network continue(y/n)?"
read bb
if [ "$bb" = "y" ] || [ "$bb" = "yes" ] || [ "$bb" = "Y" ]; then
        /etc/init.d/network restart
fi
echo "bonding OK!"

exit 0
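To use the script, adjust the arry line (bond device, the two slave NICs, IP, netmask, gateway) for the target host and run it as root; it prompts for the bond mode and for the network restart:

[root@DC01DR02R10C03NNJTDN01 script]# sh bonding.sh
Please Select Your Bond Mode:(balance-rr/active-backup)or(0/1)?
0
System will restart network continue(y/n)?
y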
 
[root@DC01DR02R10C03DN11 script]# cat clbond.sh
#!/bin/bash
# dump bond0 status into /tmp/<bond0-ip>bondinfo for collection
MORE=/bin/more
# capture the bond0 address first, then use it to name the output file
# (the original assigned IP from a command whose output was redirected, leaving IP empty)
IP=$(/sbin/ifconfig bond0 | grep "inet addr" | awk -F: '{print $2}' | awk '{print $1}')
echo "${IP}" >> /tmp/${IP}bondinfo
${MORE} /proc/net/bonding/bond0 >> /tmp/${IP}bondinfo
echo "#################################" >> /tmp/${IP}bondinfo
 
==============================================================================================
cd /etc/sysconfig/network-scripts/
 
vim ifcfg-bond0
# Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet
DEVICE=bond0
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.101.0.25
NETMASK=255.255.255.0
GATEWAY=10.101.0.1
 
 
cat  ifcfg-em1   
DEVICE=em1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=yes
 
 cat   ifcfg-em2     
DEVICE=em2
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=yes
 
 
[root@localhost modprobe.d]# tail -3 /etc/modprobe.d/dist.conf
#bonding
alias bond0 bonding
options bond0 miimon=100 mode=0
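The module options only take effect when the bonding driver is (re)loaded. A minimal sketch to activate the bond without a reboot, once the ifcfg files and modprobe entries above are in place:

modprobe bonding
service network restart
cat /proc/net/bonding/bond0    # both slaves should show "MII Status: up"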
 
 
Create the hadoop users and set up SSH mutual trust
(1) Create the accounts (hadoop/123456, mapreduce/123)
[root@DataNode03 ~]#  useradd hadoop
[root@DataNode03 ~]#  useradd mapreduce
[root@DataNode03 ~]# passwd  hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: it is too simplistic/systematic
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@DataNode03 ~]# passwd  mapreduce
Changing password for user mapreduce.
New password:
BAD PASSWORD: it is WAY too short
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
(2) Set up mutual trust
Generate the hadoop key pair on the HDFS master
[root@NameNode01 ~]# su - hadoop
[hadoop@NameNode01 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
94:18:82:86:7e:63:2e:c1:de:03:0e:36:38:58:fb:62 hadoop@NameNode01
The key's randomart image is:
+--[ RSA 2048]----+
| . .. .          |
|. +  . o .       |
|=o .  . o        |
|=B.+   .         |
|=.O..   S        |
| +E+.            |
| ....            |
|                 |
|                 |
+-----------------+
[hadoop@NameNode01 ~]$  cd .ssh/ ; cp id_rsa.pub authorized_keys
[hadoop@NameNode01 .ssh]$
1) Create the .ssh directory on the slave machines
[root@DataNode01 ~]# su - hadoop
mkdir /home/hadoop/.ssh ; chmod 700 /home/hadoop/.ssh ; chown  hadoop.hadoop -R /home/hadoop/.ssh
 
2) Copy the public key from Master.Hadoop to Slave.Hadoop, e.g.:
[hadoop@NameNode01 .ssh]$  scp authorized_keys  10.0.2.219:/home/hadoop/.ssh/
3) Verify passwordless login with the public key
[hadoop@NameNode01 .ssh]$ ssh 10.0.2.216
[hadoop@DataNode01 ~]$ logout
Connection to 10.0.2.216 closed.
[hadoop@NameNode01 .ssh]$ ssh 10.0.2.217
[hadoop@DataNode02 ~]$ logout
Connection to 10.0.2.217 closed.
[hadoop@NameNode01 .ssh]$ ssh 10.0.2.218
[hadoop@DataNode03 ~]$ logout
Connection to 10.0.2.218 closed.
[hadoop@NameNode01 .ssh]$ ssh 10.0.2.219
[hadoop@NameNode-bak ~]$ logout
Connection to 10.0.2.219 closed.
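The same check can be run in one pass over the node list from /etc/hosts; every line should print the remote hostname without asking for a password:

for h in 10.0.2.216 10.0.2.217 10.0.2.218 10.0.2.219; do ssh ${h} hostname; done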
 
Generate the mapreduce key pair on the MapReduce machine
[root@NameNode-bak ~]# su - mapreduce
[mapreduce@NameNode-bak ~]$ pwd
/home/mapreduce
[mapreduce@NameNode-bak ~]$ ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/home/mapreduce/.ssh/id_rsa):
Created directory '/home/mapreduce/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/mapreduce/.ssh/id_rsa.
Your public key has been saved in /home/mapreduce/.ssh/id_rsa.pub.
The key fingerprint is:
0c:1a:9a:e8:9a:0b:22:d6:67:20:42:42:ca:5c:42:bc mapreduce@NameNode-bak
The key's randomart image is:
+--[ RSA 2048]----+
|o+ .             |
|=.o              |
|o+. . .          |
|oE o o o         |
|o.o..   S        |
|o o .            |
|+o . o           |
|*.  o            |
|+.               |
+-----------------+
[mapreduce@NameNode-bak ~]$  cd .ssh/ ; cp id_rsa.pub authorized_keys
1) Create the .ssh directory on the slave machines, e.g.:
[mapreduce@DataNode03 ~]$
mkdir /home/mapreduce/.ssh ; chmod 700 /home/mapreduce/.ssh ; chown  mapreduce.mapreduce -R /home/mapreduce/.ssh
2) Copy the mapreduce public key to the slave machines:
[mapreduce@NameNode-bak .ssh]$  scp authorized_keys  10.0.2.216:/home/mapreduce/.ssh/
mapreduce@10.0.2.216's password:
authorized_keys                                                                                                                       100%  404     0.4KB/s   00:00   
3) Verify passwordless login:
[mapreduce@NameNode-bak .ssh]$ ssh 10.0.2.216
[mapreduce@DataNode01 ~]$ logout
Connection to 10.0.2.216 closed.
[mapreduce@NameNode-bak .ssh]$ ssh 10.0.2.217
[mapreduce@DataNode02 ~]$ logout
Connection to 10.0.2.217 closed.
[mapreduce@NameNode-bak .ssh]$ ssh 10.0.2.218
[mapreduce@DataNode03 ~]$ logout
Connection to 10.0.2.218 closed.
 
4. HDFS cluster deployment
Add the environment variables:
[hadoop@NameNode01 ~]$ vim /etc/profile
export PATH
export JAVA_HOME=/home/hadoop/jdk1.6.0_24
export JRE_HOME=/home/hadoop/jdk1.6.0_24/jre
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
export HADOOP_HOME=/home/hadoop/hadoop-0.20.2-cdh3u5
export PATH=$HADOOP_HOME/bin:$PATH
[hadoop@NameNode01 ~]$ source /etc/profile
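A quick sanity check that the new variables resolve (a sketch; `hadoop version` ships with the CDH3 tarball):

java -version      # expect 1.6.0_24
hadoop version     # expect 0.20.2-cdh3u5
echo $HADOOP_HOME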
 
#[hadoop@NameNode01 ~]$ scp .bash_profile 10.0.2.216:/home/hadoop/
Sync the HDFS software:
[hadoop@NameNode01 ~]$ scp hadoop-0.20.2-cdh3u5+jdk-nn.tar.gz 10.0.2.216:/home/hadoop/
 
Partition and format the disks
 
[root@DC01DR02R10C03DN11 script]# cat mkdir.sh
#!/bin/bash
# create the data directories; dfs/ belongs to hadoop, mapred/ to mapreduce
mkdir -vp /data{0,1,2}
Directory=(/data0 /data1 /data2)
for DR in "${Directory[@]}"
do
   mkdir ${DR}/dfs
   mkdir ${DR}/mapred
   chown hadoop.hadoop ${DR}/dfs
   chown mapreduce.mapreduce ${DR}/mapred
done
 
[root@DC01DR02R10C03DN11 script]# cat parted.sh
#!/bin/bash
Parted=/sbin/parted
Mount=/bin/mount
#### Partition The Disk ##############
${Parted} -s /dev/sdb mklabel gpt
${Parted} -s /dev/sdb mkpart primary 0G 11000G
${Parted} -s /dev/sdb mkpart primary 11001G 21000G
${Parted} -s /dev/sdb mkpart primary 21001G 33000G
${Parted} -s /dev/sdb print

####### Format And Mount The Disk ############
Directory=(/data0 /data1 /data2)
Partition=(/dev/sdb1 /dev/sdb2 /dev/sdb3)
/bin/mkdir "${Directory[@]}"
# format each partition, then register it in fstab with noatime,nodiratime
for i in 0 1 2
  do
   /sbin/mkfs.ext4 -F ${Partition[$i]}
   /bin/echo "${Partition[$i]}   ${Directory[$i]}   ext4  defaults,noatime,nodiratime  0 2" >> /etc/fstab
  done
${Mount} -a
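A minimal check after the script finishes, confirming the three filesystems mounted with the intended options:

df -h /data0 /data1 /data2
mount | grep /data    # each line should include noatime,nodiratime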
 
 
To add noatime,nodiratime to a filesystem that is already mounted, remount it in place, e.g.:
mount -o remount,noatime,nodiratime /data5
 
 
Create the directories that hold the block files (datanode)
[root@DataNode01 ~]#
mkdir /data{0,1,2} ; chown hadoop:hadoop -R /data{0,1,2} ; ll -d /data*
drwxr-xr-x 2 hadoop hadoop 4096 Nov 18 16:57 /data0
drwxr-xr-x 2 hadoop hadoop 4096 Nov 18 16:57 /data1
drwxr-xr-x 2 hadoop hadoop 4096 Nov 18 16:57 /data2
 
Start the HDFS service
Initialize (format the namenode):
[hadoop@NameNode01 ~]$ hadoop namenode -format
[hadoop@NameNode01 bin]$ ./start-dfs.sh
starting namenode, logging to /home/hadoop/hadoop-0.20.2-cdh3u5/logs/hadoop-hadoop-namenode-NameNode01.out
10.0.2.218: starting datanode, logging to /home/hadoop/hadoop-0.20.2-cdh3u5/logs/hadoop-hadoop-datanode-DataNode03.out
10.0.2.219: starting datanode, logging to /home/hadoop/hadoop-0.20.2-cdh3u5/logs/hadoop-hadoop-datanode-NameNode-bak.out
10.0.2.217: starting datanode, logging to /home/hadoop/hadoop-0.20.2-cdh3u5/logs/hadoop-hadoop-datanode-DataNode02.out
10.0.2.216: starting datanode, logging to /home/hadoop/hadoop-0.20.2-cdh3u5/logs/hadoop-hadoop-datanode-DataNode01.out
10.0.2.75: starting secondarynamenode, logging to /home/hadoop/hadoop-0.20.2-cdh3u5/logs/hadoop-hadoop-secondarynamenode-NameNode01.out
 
Note: this JVM error appears when the initial heap (HADOOP_HEAPSIZE / -Xms) is set smaller than the new-generation size (-Xmn) in hadoop-env.sh; raise the heap or lower the new size:
Error occurred during initialization of VM
Too small initial heap for new size specified
 
 
ERROR:
2013-11-18 17:18:03,484 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /data0: namenode namespaceID = 885256863; datanode namespaceID = 1617026162
Problem: the namespaceID on the namenode does not match the namespaceID on the datanodes.
Cause: every namenode format generates a new namespaceID, while tmp/dfs/data still holds the ID from the previous format. Formatting clears the namenode's data but not the datanodes', so the two namespaceIDs diverge and startup fails.
Fix: http://blog.csdn.net/wh62592855/archive/2010/07/21/5752199.aspx gives two solutions; we used the first one:

  (1) Stop the cluster services.

  (2) On the affected datanodes, delete the data directory, i.e. the dfs.data.dir configured in hdfs-site.xml; on this machine it is /var/lib/hadoop-0.20/cache/hdfs/dfs/data/. (Note: we ran this step on all datanode and namenode nodes. In case it does not work out, keep a copy of the data directory first.)

  (3) Format the namenode.

  (4) Restart the cluster.

  That solved the problem.

  The side effect of this method is that all data on HDFS is lost. If HDFS holds important data, this method is not recommended; try the second solution from the URL above instead.
 
[hadoop@DataNode01 /]$ rm -rf /data{0,1,2}/*
[hadoop@NameNode01 ~]$ hadoop namenode -format
[hadoop@NameNode01 bin]$ ./start-dfs.sh
Startup succeeded, OK!
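To confirm the daemons actually came up, a sketch (jps ships with the JDK):

jps                        # NameNode/SecondaryNameNode on the master, DataNode on the slaves
hadoop dfsadmin -report    # all four datanodes should be listed as live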
 
MapReduce cluster deployment
[mapreduce@NameNode-bak ~]$ vim .bash_profile
export JAVA_HOME=/home/mapreduce/jdk1.6.0_24
export JRE_HOME=/home/mapreduce/jdk1.6.0_24/jre
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
export HADOOP_HOME=/home/mapreduce/hadoop-0.20.2-cdh3u5
export PATH=$HADOOP_HOME/bin:$PATH
[mapreduce@NameNode-bak ~]$ source .bash_profile
 
Sync the environment variables to the other nodes, e.g.:
[mapreduce@NameNode-bak ~]$ scp .bash_profile 10.0.2.216:/home/mapreduce/
Sync the MapReduce software:
[mapreduce@NameNode-bak ~]$ scp hadoop-0.20.2-cdh3u5+jdk-JT.tar.gz  10.0.2.216:/home/mapreduce/
[mapreduce@NameNode-bak ~]$ tar zxvf hadoop-0.20.2-cdh3u5+jdk-JT.tar.gz
 
Start the MapReduce service
[mapreduce@NameNode-bak bin]$ ./start-mapred.sh
 
ERROR:
Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapreduce, access=WRITE, inode="/":hadoop:supergroup:drwxr-xr-x
 
The mapreduce user cannot write to the HDFS root, which is owned by hadoop:supergroup; change the owner (or loosen the permissions) and restart the service:
hadoop fs -chown mapreduce:supergroup  /
 
[mapreduce@NameNode-bak bin]$ ./stop-mapred.sh
[hadoop@DataNode02 ~]$ hadoop fs -chmod 777 /
[mapreduce@NameNode-bak bin]$ ./start-mapred.sh
 
 
MapReduce service OK!
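A final check on the MapReduce side, again a sketch:

jps                 # JobTracker on the master, TaskTracker on the slaves
hadoop job -list    # a fresh cluster reports 0 jobs currently running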
 