
Category: HADOOP

2014-04-17 13:21:59

I. Amazon EC2 Environment


 Four VPC (Virtual Private Cloud) instances were launched in the same security group, all running Ubuntu 12.04.4:
 1 m3.large: 2.5 GHz dual-core CPU, 7 GB RAM, 30 GB partition mounted at /mnt; used as the NameNode.
 3 m1.large: 1.8 GHz dual-core CPU, 7 GB RAM, 400 GB partition mounted at /mnt; used as the DataNodes.

II. Compiling and Packaging hadoop-2.4.0 and hbase-0.98.1


 The native libraries shipped in the Apache Hadoop release are 32-bit, so I built Hadoop myself. The build ran into many problems, all solved with some googling; rather than pasting the error messages, only the correct steps are given below.
 The machines had no Java environment, so I downloaded jdk-7u51-linux-x64.tar.gz and wrote a script, debian_set_java.sh, to install it:
#!/bin/bash

if [ "$(whoami)" != "root" ]; then
 echo "Use root to set java environment."
 exit 1
fi
if [ "$(uname)" = "Darwin" ]; then
 echo "Darwin system."
 exit 1
fi

function unpack_java()
{
 mkdir /usr/local/java
 tar -pzxf $1 -C /usr/local/java/
 if [ $? -ne 0 ]; then
   echo "untar file failed."
   exit 1
 fi  
 echo "unpack $1 done"
}

function set_j8_env()
{
 cp -p /etc/profile /tmp/profile.bak_`date "+%Y%m%d%H%M%S"`
 sed -i '$a \\nJAVA_HOME=/usr/local/java/jdk1.8.0\nPATH=$PATH:$JAVA_HOME/bin\nexport JAVA_HOME\nexport PATH' /etc/profile
 source /etc/profile
 update-alternatives --install /usr/bin/javac javac /usr/local/java/jdk1.8.0/bin/javac  95  
 update-alternatives --install /usr/bin/java  java  /usr/local/java/jdk1.8.0/bin/java  95  
 update-alternatives --config java
 echo "set java 8 environment done"
}

function set_j7_env()
{
 cp -p /etc/profile /tmp/profile.bak_`date "+%Y%m%d%H%M%S"`
 sed -i '$a \\nJAVA_HOME=/usr/local/java/jdk1.7.0_51\nPATH=$PATH:$JAVA_HOME/bin\nexport JAVA_HOME\nexport PATH' /etc/profile
 source /etc/profile
 update-alternatives --install /usr/bin/javac javac /usr/local/java/jdk1.7.0_51/bin/javac  95
 update-alternatives --install /usr/bin/java  java  /usr/local/java/jdk1.7.0_51/bin/java  95
 update-alternatives --config java
 echo "set java 7 environment done"
}

unpack_java $1
set_j7_env

bash debian_set_java.sh jdk-7u51-linux-x64.tar.gz
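The root-check guard at the top of such a script is easy to get wrong (a missing `;` before `then` is a syntax error). A standalone sketch of the pattern, parameterized so it can be exercised without actually being root (`check_user` is a name of my own, not from the script):

```shell
# Sketch of the script's root guard; takes the user as a parameter so it
# can be tested as any user (the real script would use "$(whoami)").
check_user() {
  if [ "$1" != "root" ]; then
    echo "Use root to set java environment."
    return 1
  fi
  return 0
}

check_user root   && echo "root: ok"
check_user ubuntu || echo "ubuntu: rejected"
```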
With the Java 7 environment in place, we can start compiling and packaging.

1. Hadoop Compile
apt-get install protobuf-compiler
protoc --version   # libprotoc 2.4.1
Hadoop needs protobuf >= 2.5.0, so download it from the official site and build it from source; install the compilers first.
sudo apt-get install g++ make
wget
tar jxvf protobuf-2.5.0.tar.bz2
cd protobuf-2.5.0; ./configure --prefix=/usr/local; make; sudo make install
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
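Since the build checks the protobuf version, it is worth verifying programmatically before starting. A minimal dotted-version comparison in pure shell (`version_ge` is a helper name of my own; it relies on GNU `sort -V`):

```shell
# version_ge A B: succeed iff dotted version A >= B.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required=2.5.0
have=2.4.1   # Ubuntu 12.04's packaged protoc; in practice: protoc --version | awk '{print $2}'
if version_ge "$have" "$required"; then
  echo "protoc $have is new enough"
else
  echo "protoc $have is too old, need >= $required"
fi
```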

The build also references the ${FINDBUGS_HOME} variable, so download FindBugs and set it up:
wget
tar zxvf findbugs-2.0.3.tar.gz
export FINDBUGS_HOME=/home/ubuntu/findbugs-2.0.3
export PATH=$PATH:${FINDBUGS_HOME}/bin

Both Hadoop and HBase are built with Maven; install the build and packaging dependencies first:
sudo apt-get install maven cmake zlib1g-dev libssl-dev
mvn package -Pdist,native,docs,src -DskipTests -Dtar
This run fails with `hadoop-hdfs/target/findbugsXml.xml does not exist`; drop the docs profile and build again:
mvn package -Pdist,native,src -DskipTests -Dtar

After roughly half an hour, the build tree is about 2.6 GB and the packaged tarball is:
hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0.tar.gz


2. HBase Compile

In hbase-0.98.1/pom.xml, change the Hadoop version from 2.2.0 to 2.4.0, then:
mvn clean package -DskipTests
This produced no tarball, however. After some digging, the packaging procedure turned out to be: HBase can be packaged against either hadoop1 or hadoop2. We need hadoop2, so first generate the pom.xml.hadoop2 file, then package:


bash ./dev-support/generate-hadoopX-poms.sh 0.98.1 0.98.1-hadoop2
MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 clean install -DskipTests -Prelease
MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 install -DskipTests site assembly:single -Prelease


After ten-odd minutes, the build tree is a bit over 100 MB and the tarball is:
hbase-0.98.1/hbase-assembly/target/hbase-0.98.1-hadoop2-bin.tar.gz
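The pom.xml version bump at the start of this step can also be scripted. A hedged sketch against a scratch fragment (I recall the property being `hadoop-two.version` in the 0.98 pom, but verify against your checkout before running this on the real file):

```shell
# Dry-run of the 2.2.0 -> 2.4.0 bump on a pom-like scratch fragment.
pom=$(mktemp)
echo '<hadoop-two.version>2.2.0</hadoop-two.version>' > "$pom"
sed -i 's/<hadoop-two.version>2.2.0</<hadoop-two.version>2.4.0</' "$pom"
cat "$pom"
rm -f "$pom"
```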


A few mvn commands for reference:

  1. mvn clean
  2. mvn compile -DskipTests
  3. mvn package -DskipTests
  4. mvn package -Pdist -DskipTests -Dtar
  5. mvn package -Pdist,native -DskipTests -Dtar
  6. mvn install -DskipTests

III. Installing Hadoop and HBase on Each Machine


 From the NameNode, pssh (a parallel-ssh batch tool) is used to set up the three DataNodes; the NameNode itself is configured locally at the same time.

1. Prepare the following files in the NameNode's /tmp directory:
  jdk-7u51-linux-x64.tar.gz hadoop-2.4.0.tar.gz hbase-0.98.1-hadoop2-bin.tar.gz key.tar.gz debian_set_java.sh 


 hadoop-2.4.0-config.tar.gz and hbase-0.98.1-config.tar.gz hold the config files core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, and hbase-site.xml. Space does not permit listing them here; see the separate configuration article.

  key.tar.gz contains two files, id_rsa and id_rsa.pub, generated under ~/.ssh/ by running ssh-keygen -t rsa -P "" on the NameNode.
Amazon issues a .pem file holding the private key used to log in to EC2 hosts; we don't use it here and instead generate our own key pair.
In /etc/ssh/ssh_config, change `# StrictHostKeyChecking ask` to `StrictHostKeyChecking no`.
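The packing and unpacking of key.tar.gz can be sketched as a round-trip (dummy file contents stand in for the real key pair, so this runs safely anywhere):

```shell
# Round-trip of key.tar.gz: pack id_rsa/id_rsa.pub on the NameNode side,
# then unpack the way the later pssh step does. Dummy contents only.
workdir=$(mktemp -d)
mkdir -p "$workdir/.ssh"
printf 'dummy private key\n' > "$workdir/.ssh/id_rsa"
printf 'dummy public key\n'  > "$workdir/.ssh/id_rsa.pub"

tar czf "$workdir/key.tar.gz" -C "$workdir/.ssh" id_rsa id_rsa.pub

mkdir "$workdir/unpacked"
tar zxf "$workdir/key.tar.gz" -C "$workdir/unpacked"
ls "$workdir/unpacked"
rm -rf "$workdir"
```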

2. Run my install script; the comments explain each step.

  The hosts file it uses contains lines of the form ubuntu@127.0.0.1
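For completeness, the hosts file can be generated from the DataNode addresses (the IPs below are the ones added to /etc/hosts later in the script):

```shell
# Build the pssh hosts file: one user@ip line per DataNode.
hosts_file=hosts
: > "$hosts_file"
for ip in 172.31.10.229 172.31.10.231 172.31.10.230; do
  echo "ubuntu@$ip" >> "$hosts_file"
done
cat "$hosts_file"
```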
# Install jdk7u51, already installed it on NameNode before
# sudo bash debian_set_java.sh jdk-7u51-linux-x64.tar.gz

# Add pub key
cat /home/ubuntu/.ssh/id_rsa.pub >> /home/ubuntu/.ssh/authorized_keys

# Disable ipv6
sudo sed -e '$a net.ipv6.conf.all.disable_ipv6=1\nnet.ipv6.conf.default.disable_ipv6=1\nnet.ipv6.conf.lo.disable_ipv6=1' -i /etc/sysctl.conf
sudo sysctl -p

# change /etc/hosts
sudo sed -e '$a 172.31.0.241    cloud1\n172.31.10.229   cloud2\n172.31.10.231   cloud3\n172.31.10.230   cloud4' -i /etc/hosts
# need to change the hostname in each node's /etc/hostname
# need to delete the `127.0.0.1 localhost` line from /etc/hosts, otherwise the Hadoop DataNode will not start

# change owner of data directory
sudo chown ubuntu:ubuntu /mnt

# uncompress hadoop hbase
tar zxf /tmp/hadoop-2.4.0.tar.gz -C /mnt
tar zxf /tmp/hbase-0.98.1-hadoop2-bin.tar.gz -C /mnt
ln -s /mnt/hadoop-2.4.0 /mnt/hadoop
ln -s /mnt/hbase-0.98.1-hadoop2 /mnt/hbase

# modify hadoop-env.sh, yarn-env.sh
sed -e 's/${JAVA_HOME}/\/usr\/local\/java\/jdk1.7.0_51/' -i /mnt/hadoop/etc/hadoop/hadoop-env.sh
sed -e 's/#export HADOOP_HEAPSIZE=/export HADOOP_HEAPSIZE=1024/' -i /mnt/hadoop/etc/hadoop/hadoop-env.sh
sed -e '/# some Java parameters/a\export JAVA_HOME=\/usr\/local\/java\/jdk1.7.0_51' -i /mnt/hadoop/etc/hadoop/yarn-env.sh

# make dir for hadoop
mkdir /mnt/hadoop/tmp
mkdir /mnt/hadoop/name

# add slaves
echo -e 'cloud2\ncloud3\ncloud4' > /mnt/hadoop/etc/hadoop/slaves

# add xml configuration files
tar zxf /tmp/hadoop-2.4.0-config.tar.gz -C /mnt/hadoop/etc/hadoop/

# Change system limits (replace `username` with the actual login user, e.g. ubuntu)
sudo sed -e '$a username    -    nofile    65536\nusername    -    nproc    32000' -i /etc/security/limits.conf
sudo sed -i '$a \\nsession required        pam_limits.so' /etc/pam.d/common-session

# modify hbase hbase-env.sh
sed -e 's/# export JAVA_HOME=\/usr\/java\/jdk1.6.0/export JAVA_HOME=\/usr\/local\/java\/jdk1.7.0_51/' -i /mnt/hbase/conf/hbase-env.sh
sed -e 's/# export HBASE_HEAPSIZE=1000/export HBASE_HEAPSIZE=4096/' -i /mnt/hbase/conf/hbase-env.sh
sed -e 's/# export HBASE_MANAGES_ZK=true/export HBASE_MANAGES_ZK=true/' -i /mnt/hbase/conf/hbase-env.sh

# add regionservers
echo -e 'cloud2\ncloud3\ncloud4' > /mnt/hbase/conf/regionservers

# add xml configuration files
tar zxf /tmp/hbase-0.98.1-config.tar.gz -C /mnt/hbase/conf/

# copy hadoop 64bit library to hbase
cp -r /mnt/hadoop/lib/native /mnt/hbase/lib

# soft link hadoop file to hbase
ln -s /mnt/hadoop/etc/hadoop/hdfs-site.xml /mnt/hbase/conf/
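The sed substitutions applied to hadoop-env.sh above can be dry-run against a scratch copy first; the two input lines below mimic the relevant lines of the stock file:

```shell
# Dry-run of the hadoop-env.sh edits on a scratch file.
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
export JAVA_HOME=${JAVA_HOME}
#export HADOOP_HEAPSIZE=
EOF

# Same expressions as used on /mnt/hadoop/etc/hadoop/hadoop-env.sh:
sed -e 's/${JAVA_HOME}/\/usr\/local\/java\/jdk1.7.0_51/' -i "$env_file"
sed -e 's/#export HADOOP_HEAPSIZE=/export HADOOP_HEAPSIZE=1024/' -i "$env_file"

cat "$env_file"
rm -f "$env_file"
```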



# pssh to execute the above commands in datanode
pssh -h hosts -i 'mkdir /home/ubuntu/.ssh ; tar zxvf /tmp/key.tar.gz -C /home/ubuntu/.ssh/ ; rm /tmp/key.tar.gz'
pssh -h hosts -i 'cd /tmp; sudo bash debian_set_java.sh jdk-7u51-linux-x64.tar.gz'
pssh -h hosts -i 'cat /home/ubuntu/.ssh/id_rsa.pub >> /home/ubuntu/.ssh/authorized_keys'

pscp -h hosts jdk-7u51-linux-x64.tar.gz debian_set_java.sh key.tar.gz /tmp
pscp -h hosts hadoop-2.4.0.tar.gz hbase-0.98.1-hadoop2-bin.tar.gz /tmp
pssh -h hosts -i "sudo sed -e '\$a net.ipv6.conf.all.disable_ipv6=1\nnet.ipv6.conf.default.disable_ipv6=1\nnet.ipv6.conf.lo.disable_ipv6=1' -i /etc/sysctl.conf"
pssh -h hosts -i 'sudo sysctl -p'
# pssh -h hosts -i "sudo sed -e '1i 172.31.0.241    cloud1\n172.31.10.229   cloud2\n172.31.10.231   cloud3\n172.31.10.230   cloud4' -i /etc/hosts"
# need to change the hostname in each /etc/hostname
pssh -h hosts -i 'sudo chown ubuntu:ubuntu /mnt'
pssh -h hosts -i 'tar zxf /tmp/hadoop-2.4.0.tar.gz -C /mnt'
pssh -h hosts -i 'tar zxf /tmp/hbase-0.98.1-hadoop2-bin.tar.gz -C /mnt'
pssh -h hosts -i 'ln -s /mnt/hadoop-2.4.0 /mnt/hadoop'
pssh -h hosts -i 'ln -s /mnt/hbase-0.98.1-hadoop2 /mnt/hbase'
pssh -h hosts -i "sed -e 's/\${JAVA_HOME}/\/usr\/local\/java\/jdk1.7.0_51/' -i /mnt/hadoop/etc/hadoop/hadoop-env.sh"
pssh -h hosts -i "sed -e 's/#export HADOOP_HEAPSIZE=/export HADOOP_HEAPSIZE=1024/' -i /mnt/hadoop/etc/hadoop/hadoop-env.sh"
pssh -h hosts -i "sed -e '/# some Java parameters/a\export JAVA_HOME=\/usr\/local\/java\/jdk1.7.0_51' -i /mnt/hadoop/etc/hadoop/yarn-env.sh"
pssh -h hosts -i 'mkdir /mnt/hadoop/tmp'
pssh -h hosts -i 'mkdir /mnt/hadoop/data'
pssh -h hosts -i "echo -e 'cloud2\ncloud3\ncloud4' > /mnt/hadoop/etc/hadoop/slaves"
pscp -h hosts hadoop-2.4.0-config.tar.gz /tmp
pssh -h hosts -i 'tar zxf /tmp/hadoop-2.4.0-config.tar.gz -C /mnt/hadoop/etc/hadoop/'

pssh -h hosts -i "sudo sed -e '\$a username    -    nofile    65536\nusername    -    nproc    32000' -i /etc/security/limits.conf"
pssh -h hosts -i "sudo sed -i '\$a \\\nsession required        pam_limits.so' /etc/pam.d/common-session"
pssh -h hosts -i "sed -e 's/# export JAVA_HOME=\/usr\/java\/jdk1.6.0/export JAVA_HOME=\/usr\/local\/java\/jdk1.7.0_51/' -i /mnt/hbase/conf/hbase-env.sh"
pssh -h hosts -i "sed -e 's/# export HBASE_HEAPSIZE=1000/export HBASE_HEAPSIZE=4096/' -i /mnt/hbase/conf/hbase-env.sh"
pssh -h hosts -i "sed -e 's/# export HBASE_MANAGES_ZK=true/export HBASE_MANAGES_ZK=true/' -i /mnt/hbase/conf/hbase-env.sh"
pssh -h hosts -i "echo -e 'cloud2\ncloud3\ncloud4' > /mnt/hbase/conf/regionservers"
pscp -h hosts hbase-0.98.1-config.tar.gz /tmp
pssh -h hosts -i 'tar zxf /tmp/hbase-0.98.1-config.tar.gz -C /mnt/hbase/conf/'
pssh -h hosts -i 'cp -r /mnt/hadoop/lib/native /mnt/hbase/lib'
pssh -h hosts -i 'ln -s /mnt/hadoop/etc/hadoop/hdfs-site.xml /mnt/hbase/conf/'

# If you restart hadoop (e.g. after reformatting the NameNode), clear the data directories first.
pssh -h hosts -i 'rm -r /mnt/hadoop/data/*'


IV. Starting Hadoop and HBase


 One problem comes up at startup: an AWS EC2 security group opens only port 22 by default, so the Security Group's inbound port rules must be configured.
 Ports 8088 and 50070 are opened to all networks, since they serve the Hadoop web management UIs.
 Ports 60020, 8030-8033, 3888, 2888, 60010, 2181, 30000-65535, 9000, and 50010 are opened only to machines inside the same Security Group, so the Hadoop and HBase daemons can reach each other.
Some of these, such as 8030-8033, come from the Hadoop config files; others, such as the 30000-65535 range, are dynamic ports managed by HBase's ZooKeeper. All of them were found one by one from errors in the logs, a long and painful process.

cd /mnt/hadoop/
bin/hdfs namenode -format
sbin/start-dfs.sh # cloud1 NameNode SecondaryNameNode, cloud2 3 4 DataNode
sbin/start-yarn.sh # cloud1 ResourceManager, cloud2 3 4 NodeManager
jps
Check cluster status: bin/hdfs dfsadmin -report
Check file-block layout: bin/hdfs fsck / -files -blocks
HDFS web UI on the NameNode: port 50070
ResourceManager web UI: port 8088

cd /mnt/hbase
bin/start-hbase.sh

Some HBase shell commands:

  1. bin/hbase shell
  2. t = create 'test', 'cf'
  3. list 'test'
  4. put 'test', 'row1', 'cf:a', 'value1'
  5. put 'test', 'row2', 'cf:b', 'value2'
  6. scan 'test'
  7. describe 'test'
  8. get 'test', 'row1'
  9. disable 'test'
  10. drop 'test'
  11. count 'test', CACHE => 1000


