Apache Hadoop 2.2.0 HDFS HA + YARN Multi-Node Deployment

Deployment logical architecture:

HDFS HA deployment physical architecture

Note

JournalNodes use very few resources; even in real production environments they are normally deployed on the same machines as the DataNodes.

In production, it is recommended to give the active and standby NameNodes their own dedicated machines.




YARN deployment architecture:




Personal test environment deployment diagram

Ubuntu 12 (32-bit)

Apache Hadoop 2.2.0

JDK 1.7




Preparation

1. Configure /etc/hosts on all 4 machines (steps 1, 2 and 4 are sketched in the example after this list);

2. Configure passwordless SSH from the NameNode host to all other nodes; one-way passwordless login is sufficient, two-way is not required;

Passwordless login is only used when starting and stopping the cluster.

3. Install the JDK.

4. Create a dedicated account; do not deploy or manage Hadoop with the root account.
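A minimal sketch of steps 1, 2 and 4, assuming the four hosts are SY-0217, SY-0355, SY-0225 and SY-0226 (the names used in the configuration below), the dedicated account is called yarn, and the IP addresses are placeholders:

# On every node, as root: map hostnames to IPs in /etc/hosts (IPs below are made up).
cat >> /etc/hosts <<'EOF'
192.168.1.217  SY-0217
192.168.1.235  SY-0355
192.168.1.225  SY-0225
192.168.1.226  SY-0226
EOF

# On every node: create the dedicated account used to run Hadoop.
useradd -m -s /bin/bash yarn

# On the NameNode host only, as the yarn user: one-way passwordless SSH to every node.
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
for host in SY-0217 SY-0355 SY-0225 SY-0226; do
  ssh-copy-id yarn@$host
done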




Deploying Hadoop

Step 1: Extract the Hadoop tarball on every node (or extract it on one node, finish the Step 2 configuration there, then scp it to the remaining nodes) into a fixed directory that is the same on every node, e.g. /home/yarn/Hadoop/hadoop-2.2.0.
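A minimal sketch, assuming the release tarball hadoop-2.2.0.tar.gz sits in the yarn user's home directory:

# Extract into the agreed directory (same path on every node).
mkdir -p /home/yarn/Hadoop
tar -zxf ~/hadoop-2.2.0.tar.gz -C /home/yarn/Hadoop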

Step 2: Edit the configuration files (configure them on one node only, then distribute them to the other nodes with scp).

Configuration file location: etc/hadoop/

hadoop-env.sh

Change the JDK path: search the file for the following lines and set JAVA_HOME to your JDK installation path:

# The java implementation to use.

export JAVA_HOME=/usr/lib/jvm/java-6-sun
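For the JDK 1.7 / Ubuntu environment used here the line would become something like the following; the exact path is an assumption and depends on where your JDK is installed:

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386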


core-site.xml

Specify the host name/IP and port of the active NameNode; the port can be changed as needed:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://SY-0217:8020</value>
</property>

Note: SY-0217 above is a fixed host, which only works when the active/standby NameNodes are switched manually. To switch automatically via ZooKeeper you must configure a logical nameservice name instead, as described later.
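For reference, with a logical nameservice the value points at the nameservice ID rather than a fixed host; a sketch, assuming the hadoop-test nameservice defined in hdfs-site.xml below:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop-test</value>
</property>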


mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
  <description>The runtime framework for executing MapReduce jobs.
  Can be one of local, classic or yarn.</description>
</property>

<property>
  <name>mapreduce.jobhistory.address</name>
  <value>SY-0355:10020</value>
  <description>MapReduce JobHistory Server IPC host:port</description>
</property>

<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>SY-0355:19888</value>
  <description>MapReduce JobHistory Server Web UI host:port</description>
</property>


hdfs-site.xml

This is a critical configuration file!

<!-- Nameservice ID; the name is arbitrary. -->
<property>
  <name>dfs.nameservices</name>
  <value>hadoop-test</value>
  <description>Comma-separated list of nameservices.</description>
</property>

<!-- Logical NameNode names within the nameservice. -->
<property>
  <name>dfs.ha.namenodes.hadoop-test</name>
  <value>nn1,nn2</value>
  <description>The prefix for a given nameservice, contains a comma-separated
  list of namenodes for a given nameservice (eg EXAMPLENAMESERVICE).</description>
</property>

<!-- RPC address for each "nameservice.logical NameNode name". -->
<property>
  <name>dfs.namenode.rpc-address.hadoop-test.nn1</name>
  <value>SY-0217:8020</value>
  <description>RPC address for namenode1 of hadoop-test</description>
</property>

<property>
  <name>dfs.namenode.rpc-address.hadoop-test.nn2</name>
  <value>SY-0355:8020</value>
  <description>RPC address for namenode2 of hadoop-test</description>
</property>

<!-- HTTP address for each "nameservice.logical NameNode name". -->
<property>
  <name>dfs.namenode.http-address.hadoop-test.nn1</name>
  <value>SY-0217:50070</value>
  <description>The address and the base port where the dfs namenode1 web ui will listen on.</description>
</property>

<property>
  <name>dfs.namenode.http-address.hadoop-test.nn2</name>
  <value>SY-0355:50070</value>
  <description>The address and the base port where the dfs namenode2 web ui will listen on.</description>
</property>

<!-- Where the NameNode stores its metadata; if the machine has several disks,
     configure multiple comma-separated paths. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/dongxicheng/hadoop/hdfs/name</value>
  <description>Determines where on the local filesystem the DFS name node
  should store the name table(fsimage). If this is a comma-delimited list
  of directories then the name table is replicated in all of the
  directories, for redundancy.</description>
</property>

<!-- Where the DataNode stores block data; if the machine has several disks,
     configure multiple comma-separated paths. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/dongxicheng/hadoop/hdfs/data</value>
  <description>Determines where on the local filesystem a DFS data node
  should store its blocks. If this is a comma-delimited
  list of directories, then data will be stored in all named
  directories, typically on different devices.
  Directories that do not exist are ignored.</description>
</property>

<!-- JournalNode configuration, in three parts:
     (1) qjournal is the protocol and does not change;
     (2) the three JournalNode host:port entries, separated by semicolons;
     (3) the trailing hadoop-journal is the journal namespace; the name is arbitrary. -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://SY-0355:8485;SY-0225:8485;SY-0226:8485/hadoop-journal</value>
  <description>A directory on shared storage between the multiple namenodes
  in an HA cluster. This directory will be written by the active and read
  by the standby in order to keep the namespaces synchronized. This directory
  does not need to be listed in dfs.namenode.edits.dir above. It should be
  left empty in a non-HA cluster.</description>
</property>

<!-- Local directory where the JournalNode stores its data; a single path is enough. -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/home/dongxicheng/hadoop/hdfs/journal/</value>
</property>

<!-- Whether to fail over automatically. ZooKeeper is not configured here, so
     automatic failover is not possible and this is set to false. -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>false</value>
  <description>Whether automatic failover is enabled. See the HDFS High
  Availability documentation for details on automatic HA
  configuration.</description>
</property>


yarn-site.xml

<!-- Specify the ResourceManager. -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
  <description>The hostname of the RM.</description>
</property>

<property>
  <name>yarn.resourcemanager.address</name>
  <value>${yarn.resourcemanager.hostname}:8032</value>
  <description>The address of the applications manager interface in the RM.</description>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>${yarn.resourcemanager.hostname}:8030</value>
  <description>The address of the scheduler interface.</description>
</property>

<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>${yarn.resourcemanager.hostname}:8088</value>
  <description>The http address of the RM web application.</description>
</property>

<property>
  <name>yarn.resourcemanager.webapp.https.address</name>
  <value>${yarn.resourcemanager.hostname}:8090</value>
  <description>The https address of the RM web application.</description>
</property>

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>${yarn.resourcemanager.hostname}:8031</value>
</property>

<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>${yarn.resourcemanager.hostname}:8033</value>
  <description>The address of the RM admin interface.</description>
</property>

<!-- Use the fair scheduler. -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  <description>The class to use as the resource scheduler.</description>
</property>

<!-- Path to the fair scheduler configuration file. -->
<property>
  <name>yarn.scheduler.fair.allocation.file</name>
  <value>${yarn.home.dir}/etc/hadoop/fairscheduler.xml</value>
  <description>fair-scheduler conf location</description>
</property>

<!-- NodeManager local working directories; multiple comma-separated paths are recommended. -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/home/yarn/Hadoop/yarn/local</value>
  <description>List of directories to store localized files in. An
  application's localized file directory will be found in:
  ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
  Individual containers' work directories, called container_${contid}, will
  be subdirectories of this.</description>
</property>

<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
  <description>Whether to enable log aggregation</description>
</property>

<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/home/yarn/Hadoop/yarn/tmp/logs</value>
  <description>Where to aggregate logs to.</description>
</property>

<!-- Amount of memory usable on each NodeManager.
     Note: my NM VMs have 1 GB of RAM and 1 CPU core. If this value is set below 1024,
     the NM cannot start and the RM reports:
     NodeManager from slavenode2 doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>1024</value>
  <description>Amount of physical memory, in MB, that can be allocated for containers.</description>
</property>

<!-- Number of CPU cores usable on each NodeManager. -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>1</value>
  <description>Number of CPU cores that can be allocated for containers.</description>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
  <description>the valid service name should only contain a-zA-Z0-9_ and can not start with numbers</description>
</property>


slaves

Just list the hostnames of the slave machines, one per line.
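A sketch, assuming the slave (DataNode/NodeManager) hosts are SY-0355, SY-0225 and SY-0226; adjust to your own hostnames:

SY-0355
SY-0225
SY-0226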


fairscheduler.xml

The example below splits the cluster resources into 3 queues and configures memory, CPU cores, the maximum number of running applications, weight, and so on for each queue (the queue names below are placeholders; name them as you like):

<?xml version="1.0"?>
<allocations>

  <queue name="queue1">
    <minResources>5 mb, 1 vcores</minResources>
    <maxResources>60 mb, 1 vcores</maxResources>
    <maxRunningApps>10</maxRunningApps>
    <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
    <weight>1.0</weight>
    <aclSubmitApps>root,yarn</aclSubmitApps>
  </queue>

  <queue name="queue2">
    <minResources>5 mb, 1 vcores</minResources>
    <maxResources>10 mb, 1 vcores</maxResources>
  </queue>

  <queue name="queue3">
    <minResources>5 mb, 1 vcores</minResources>
    <maxResources>15 mb, 1 vcores</maxResources>
  </queue>

</allocations>

Step 3: scp the configuration files prepared on one machine to all the other nodes.
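A minimal sketch, assuming the directory layout from Step 1 and the hostnames used above:

# Run on the node where you edited the configuration.
for host in SY-0355 SY-0225 SY-0226; do
  scp -r /home/yarn/Hadoop/hadoop-2.2.0/etc/hadoop yarn@$host:/home/yarn/Hadoop/hadoop-2.2.0/etc/
done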

Step 4: Start the HDFS HA + YARN cluster

Note: all commands below are run from the Hadoop installation directory.

Start the Hadoop cluster:

Step 1:

On each JournalNode host, start the journalnode service with:

sbin/hadoop-daemon.sh start journalnode
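You can verify with jps on each of these hosts; a JournalNode process should be listed, for example (PIDs will differ):

$ jps
2083 JournalNode
2137 Jps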

Step 2:

On [nn1], format it and start the NameNode:

bin/hdfs namenode -format

sbin/hadoop-daemon.sh start namenode

Step 3:

On [nn2], copy over nn1's metadata:

bin/hdfs namenode -bootstrapStandby

Step 4:

Start the NameNode on [nn2]:

sbin/hadoop-daemon.sh start namenode

After these four steps, nn1 and nn2 are both in standby state.

Step 5:

Switch [nn1] to active:

bin/hdfs haadmin -transitionToActive nn1
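You can confirm the switch with haadmin's -getServiceState option, which reports active or standby for each logical NameNode:

bin/hdfs haadmin -getServiceState nn1
bin/hdfs haadmin -getServiceState nn2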

Step 6:

On [nn1], start all the DataNodes:

sbin/hadoop-daemons.sh start datanode


Next, start YARN on the master node where the RM runs:

sbin/start-yarn.sh

On slave1, which runs the MR JobHistory Server, start it with:

sbin/mr-jobhistory-daemon.sh start historyserver


At this point HDFS HA + YARN are fully up; run jps on each node to check the processes.
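Roughly what to expect; which daemons appear on a given host depends on the roles assigned to it, and the PIDs will differ:

$ jps    # on the active NameNode / ResourceManager node
3012 NameNode
3155 ResourceManager
3371 Jps

$ jps    # on a slave running DataNode, JournalNode and NodeManager
2534 JournalNode
2661 DataNode
2790 NodeManager
2931 Jps

The JobHistoryServer process appears only on slave1, and the standby NameNode appears on the second NameNode host.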

You can also check through the web UIs:

HDFS HA web UI:

http://master:50070/dfshealth.jsp

http://slave1:50070/dfshealth.jsp

YARN web UI:

http://master:8088


Step 5: Stop the cluster

On master, where the RM and NameNode run, execute:

First stop YARN:

sbin/stop-yarn.sh

Then stop HDFS:

sbin/stop-dfs.sh

On slave1, which runs the JobHistoryServer:

Stop the JobHistoryServer:

sbin/mr-jobhistory-daemon.sh stop historyserver

Step 6: Starting the cluster again

Note: when starting the cluster again, none of the format commands need to be executed!
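A sketch of a later restart, run from the Hadoop installation directory; with qjournal shared edits configured, start-dfs.sh should bring up the JournalNodes and both NameNodes along with the DataNodes, and because automatic failover is disabled you still promote one NameNode by hand:

sbin/start-dfs.sh                                    # JournalNodes, NameNodes, DataNodes
bin/hdfs haadmin -transitionToActive nn1             # both NameNodes come up standby; promote nn1
sbin/start-yarn.sh                                   # on the master node where the RM runs
sbin/mr-jobhistory-daemon.sh start historyserver     # on slave1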