Building a Highly Available (HA) Hadoop Cluster

一、Prerequisites

  1. Seven virtual machines (at least three are required; this walkthrough uses seven). Assign IP addresses, configure the hostname-to-IP mappings in /etc/hosts, and disable the firewall on every node.
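
  For example, the /etc/hosts file on every node would contain mappings like the following (a sketch based on the seven-node plan in the next section; replace * with the actual octet of your subnet):

192.168.*.121 hadoop01
192.168.*.122 hadoop02
192.168.*.123 hadoop03
192.168.*.124 hadoop04
192.168.*.125 hadoop05
192.168.*.126 hadoop06
192.168.*.127 hadoop07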

  2. Install the JDK and configure the environment variables.

二、Cluster Plan

Seven-node plan:

Hostname    IP               Software installed        Processes running
hadoop01    192.168.*.121    jdk, hadoop               NameNode, DFSZKFailoverController (zkfc)
hadoop02    192.168.*.122    jdk, hadoop               NameNode, DFSZKFailoverController (zkfc)
hadoop03    192.168.*.123    jdk, hadoop               ResourceManager
hadoop04    192.168.*.124    jdk, hadoop               ResourceManager
hadoop05    192.168.*.125    jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain
hadoop06    192.168.*.126    jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain
hadoop07    192.168.*.127    jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain

Three-node plan:

Hostname    IP               Software installed        Processes running
hadoop01    192.168.*.201    jdk, hadoop, zookeeper    NameNode, DFSZKFailoverController (zkfc), JournalNode, QuorumPeerMain (zookeeper)
hadoop02    192.168.*.202    jdk, hadoop, zookeeper    NameNode, DFSZKFailoverController (zkfc), JournalNode, QuorumPeerMain (zookeeper)
hadoop03    192.168.*.203    jdk, hadoop, zookeeper    DataNode, JournalNode, QuorumPeerMain (zookeeper)

三、Installation Steps

  1. Configure the ZooKeeper cluster (on hadoop05)

   1.1 Extract the archive

tar -zxvf zookeeper.tar.gz -C /hadoop/

  1.2 Edit the configuration

cd /hadoop/zookeeper/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg

Change dataDir so it points at the directory that will hold the myid file created below: dataDir=/hadoop/zookeeper/tmp

Append the following at the end of the file:

server.1=hadoop05:2888:3888
server.2=hadoop06:2888:3888
server.3=hadoop07:2888:3888

Save and exit.
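
After these edits, the whole zoo.cfg should look roughly like this (tickTime/initLimit/syncLimit/clientPort are the zoo_sample.cfg defaults and are only shown as a reference; your copy may differ):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/hadoop/zookeeper/tmp
clientPort=2181
server.1=hadoop05:2888:3888
server.2=hadoop06:2888:3888
server.3=hadoop07:2888:3888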

Then create the data directory referenced by dataDir:

mkdir /hadoop/zookeeper/tmp

Then create an empty myid file:

touch /hadoop/zookeeper/tmp/myid

Finally, write this node's ID into the file:

echo 1 > /hadoop/zookeeper/tmp/myid

   1.3 Copy the configured ZooKeeper to the other nodes (first create a /hadoop directory on hadoop06 and hadoop07: mkdir /hadoop)

scp -r /hadoop/zookeeper/ hadoop06:/hadoop/
scp -r /hadoop/zookeeper/ hadoop07:/hadoop/

Note: update the contents of /hadoop/zookeeper/tmp/myid on hadoop06 and hadoop07 accordingly.

hadoop06:

echo 2 > /hadoop/zookeeper/tmp/myid

hadoop07:

echo 3 > /hadoop/zookeeper/tmp/myid
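
As a quick sanity check on each of the three nodes, the printed ID must match that node's server.N entry in zoo.cfg, otherwise the quorum will not form:

cat /hadoop/zookeeper/tmp/myid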

2. Install and configure the Hadoop cluster (performed on hadoop01; Hadoop 3.2.1 is used here)

  2.1 Extract the archive

tar -zxvf hadoop-3.2.1.tar.gz -C /hadoop/

  2.2 Configure HDFS (all Hadoop configuration files are under $HADOOP_HOME/etc/hadoop)

#Add Hadoop to the environment variables
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.8
export HADOOP_HOME=/hadoop/hadoop-3.2.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

#All Hadoop configuration files live under $HADOOP_HOME/etc/hadoop
cd /hadoop/hadoop-3.2.1/etc/hadoop
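
After editing /etc/profile, reload it and confirm that the hadoop command is found (a quick sanity check, not part of the original steps):

source /etc/profile
hadoop version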

  2.2.1 Edit hadoop-env.sh

#Point JAVA_HOME at the same JDK configured in /etc/profile (Hadoop 3.x requires Java 8)
export JAVA_HOME=/usr/java/jdk1.8

  2.2.2 Edit core-site.xml



<configuration>
    <!-- Set the default file system to the HDFS nameservice ns1 -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    <!-- Hadoop temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/hadoop/hadoop-3.2.1/tmp</value>
    </property>
    <!-- ZooKeeper quorum used for automatic failover -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
    </property>
</configuration>

2.2.3 Edit hdfs-site.xml

<configuration>
    <!-- Logical name of the HDFS nameservice -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <!-- The two NameNodes under ns1 -->
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC and HTTP addresses of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>hadoop01:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>hadoop01:9870</value>
    </property>
    <!-- RPC and HTTP addresses of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>hadoop02:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>hadoop02:9870</value>
    </property>
    <!-- JournalNodes that store the shared edit log -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop05:8485;hadoop06:8485;hadoop07:8485/ns1</value>
    </property>
    <!-- Local directory where the JournalNodes keep their data -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/hadoop/hadoop-3.2.1/journaldata</value>
    </property>
    <!-- Enable automatic failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Proxy provider clients use to locate the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing methods; shell(/bin/true) is the fallback -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <!-- SSH private key used by sshfence -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- SSH connect timeout for fencing (ms) -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>

2.2.4 Edit mapred-site.xml

<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

2.2.5 Edit yarn-site.xml

<configuration>
    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- Cluster id for the RM pair -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <!-- Logical ids of the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop03</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop04</value>
    </property>
    <!-- ZooKeeper quorum used by the ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

2.2.6 Edit workers (workers lists the worker nodes. HDFS is started from hadoop01 and YARN from hadoop03, so the workers file on hadoop01 determines where the DataNodes start and the workers file on hadoop03 determines where the NodeManagers start.)

In Hadoop 2.x this file is named slaves. Add the following hosts:

hadoop05
hadoop06
hadoop07

2.2.7 Configure passwordless SSH login

#First configure passwordless login from hadoop01 to hadoop02, hadoop05, hadoop06 and hadoop07
#Generate a key pair on hadoop01
ssh-keygen -t rsa
#Copy the public key to the other nodes, including hadoop01 itself
ssh-copy-id hadoop01
ssh-copy-id hadoop02
ssh-copy-id hadoop05
ssh-copy-id hadoop06
ssh-copy-id hadoop07
#Configure passwordless login from hadoop03 to hadoop04, hadoop05, hadoop06 and hadoop07
#Generate a key pair on hadoop03
ssh-keygen -t rsa
#Copy the public key to the other nodes
ssh-copy-id hadoop04
ssh-copy-id hadoop05
ssh-copy-id hadoop06
ssh-copy-id hadoop07
#Note: the two NameNodes must be able to SSH to each other, so don't forget passwordless login from hadoop02 to hadoop01
#Generate a key pair on hadoop02
ssh-keygen -t rsa
ssh-copy-id -i hadoop01
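
A simple way to confirm the keys were copied correctly is to run a remote command from hadoop01 (and likewise from hadoop03 and hadoop02) and make sure no password prompt appears, for example:

ssh hadoop02 hostname
ssh hadoop05 hostname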

2.4 Copy the configured Hadoop to the other nodes

scp -r /hadoop/hadoop-3.2.1/ hadoop02:/hadoop/
scp -r /hadoop/hadoop-3.2.1/ hadoop03:/hadoop/
scp -r /hadoop/hadoop-3.2.1/ hadoop04:/hadoop/
scp -r /hadoop/hadoop-3.2.1/ hadoop05:/hadoop/
scp -r /hadoop/hadoop-3.2.1/ hadoop06:/hadoop/
scp -r /hadoop/hadoop-3.2.1/ hadoop07:/hadoop/

### Note: the following steps must be performed strictly in this order

2.5 Start the ZooKeeper cluster (on hadoop05, hadoop06 and hadoop07)

cd /hadoop/zookeeper/bin/
./zkServer.sh start
#Check the status: there should be one leader and two followers
./zkServer.sh status
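
If the quorum formed correctly, the last line of the status output reports each node's role, roughly as follows:

#On one of the three nodes
Mode: leader
#On the other two
Mode: follower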

2.6 Start the JournalNodes (run on hadoop05, hadoop06 and hadoop07)

cd /hadoop/hadoop-3.2.1
bin/hdfs --daemon start journalnode
#Run jps to verify: hadoop05, hadoop06 and hadoop07 should each now show a JournalNode process
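
On hadoop05, hadoop06 and hadoop07 the jps output should now contain roughly these processes (PIDs omitted; the Jps command itself also appears):

QuorumPeerMain
JournalNode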

2.7 Format HDFS

#Run on hadoop01:
hdfs namenode -format
#Formatting creates files under the hadoop.tmp.dir configured in core-site.xml, which is /hadoop/hadoop-3.2.1/tmp here.
#Copy /hadoop/hadoop-3.2.1/tmp to /hadoop/hadoop-3.2.1/ on hadoop02:
scp -r tmp/ hadoop02:/hadoop/hadoop-3.2.1/
##Alternatively (recommended), run hdfs namenode -bootstrapStandby on hadoop02 instead of copying the directory
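
If you go the bootstrapStandby route instead of copying the tmp directory, the usual sequence is as follows (a sketch; the freshly formatted NameNode must be running before the standby can bootstrap from it):

#On hadoop01: start the newly formatted NameNode
bin/hdfs --daemon start namenode
#On hadoop02: pull the initial metadata from the running NameNode
hdfs namenode -bootstrapStandby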

2.8 Format ZKFC (run on hadoop01 only)

hdfs zkfc -formatZK

2.9 Start HDFS (run on hadoop01)

sbin/start-dfs.sh
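
As a quick check against the cluster plan, jps should now show roughly the following processes (not part of the original steps):

#hadoop01 and hadoop02
NameNode
DFSZKFailoverController
#hadoop05, hadoop06 and hadoop07
DataNode
JournalNode
QuorumPeerMain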

2.10 Start YARN (##### Note ##### run start-yarn.sh on hadoop03. The NameNode and ResourceManager roles are placed on separate machines for performance reasons, since both consume significant resources; because they live on different machines, they must be started on each machine separately.)

sbin/start-yarn.sh

2.11 Manually start the ResourceManager on hadoop04

sbin/yarn --daemon start resourcemanager

At this point the hadoop-3.2.1 HA setup is complete. Open the NameNode web UIs in a browser (IPs per the seven-node plan above):
http://192.168.*.121:9870
NameNode 'hadoop01:9000' (active)
http://192.168.*.122:9870
NameNode 'hadoop02:9000' (standby)

Verify HDFS HA
    First upload a file to HDFS:
    hadoop fs -put /etc/profile /profile
    hadoop fs -ls /
    Then kill the active NameNode:
    kill -9 <pid of NN>
    Open http://192.168.*.122:9870 in a browser:
    NameNode 'hadoop02:9000' (active)
    The NameNode on hadoop02 has now become active.
    Run the listing again:
    hadoop fs -ls /
    -rw-r--r--   3 root supergroup       1926 2014-02-06 15:36 /profile
    The file uploaded earlier is still there!
    Manually restart the NameNode that was killed:
    bin/hdfs --daemon start namenode
    Open http://192.168.*.121:9870 in a browser:
    NameNode 'hadoop01:9000' (standby)

Verify YARN:
    Run the WordCount example that ships with Hadoop:
    hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar wordcount /profile /out
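
    When the job finishes you can list and read the result, for example (part-r-00000 is the standard reducer output file name and may differ):
    hadoop fs -ls /out
    hadoop fs -cat /out/part-r-00000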

OK, all done!