Personal study notes; all material comes from Atguigu (尚硅谷).
Hadoop HA (High Availability)
1. HA Overview
2. HDFS-HA Cluster Setup
   2.1 Core Problems of HDFS-HA
3. HDFS-HA Manual Mode
   3.1 Environment Preparation
   3.2 Configure the HDFS-HA Cluster
   3.3 Start the HDFS-HA Cluster
4. HDFS-HA Automatic Mode
   4.1 How HDFS-HA Automatic Failover Works
   4.2 Cluster Plan for HDFS-HA Automatic Failover
   4.3 Configure HDFS-HA Automatic Failover
      4.3.1 Detailed Configuration
      4.3.2 Startup
5. YARN-HA Configuration
   5.1 How YARN-HA Works
   5.2 Configure the YARN-HA Cluster
6. Final Plan for Hadoop HA
1. HA Overview

HA (High Availability) means uninterrupted 24/7 service. The key strategy for achieving high availability is to eliminate single points of failure. Strictly speaking, HA is a per-component mechanism: HDFS HA and YARN HA. The NameNode affects the availability of an HDFS cluster in two main ways:
- If the NameNode machine fails unexpectedly (e.g., it crashes), the cluster is unusable until an administrator restarts it.
- If the NameNode machine needs a software or hardware upgrade, the cluster is likewise unusable for the duration.

HDFS HA addresses this by configuring multiple NameNodes (Active/Standby) as hot standbys within the cluster. When a machine crashes, or when a machine needs upgrade maintenance, the NameNode role can quickly be switched to another machine.
2. HDFS-HA Cluster Setup

The current plan of the HDFS cluster:
| hadoop102 | hadoop103 | hadoop104 |
|---|---|---|
| NameNode | SecondaryNameNode | |
| DataNode | DataNode | DataNode |
The main goal of HA is to eliminate the NameNode single point of failure, so the HDFS cluster must be re-planned as follows:
| hadoop102 | hadoop103 | hadoop104 |
|---|---|---|
| NameNode | NameNode | NameNode |
| DataNode | DataNode | DataNode |
2.1 Core Problems of HDFS-HA

How do we keep the data of the three NameNodes consistent?
a. Fsimage: let one NameNode generate it, and have the other NameNodes sync from it.
b. Edits: introduce a new component, the JournalNode, to guarantee the consistency of the edits files.

How do we ensure that only one NameNode is Active at a time, while all the others are Standby?
a. Manual assignment
b. Automatic assignment

The 2NN does not exist in the HA architecture, so who periodically merges the fsimage and edits?
The Standby NameNodes do.

If the Active NameNode runs into a problem, how does another NameNode step up and take over?
a. Manual failover
b. Automatic failover
3. HDFS-HA Manual Mode

3.1 Environment Preparation

- Set static IPs; set hostnames and the hostname-to-IP mappings; disable the firewall; set up passwordless SSH; install the JDK and configure environment variables; and so on (a minimal sketch of the passwordless-SSH step follows).
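A sketch of the passwordless-SSH setup, assuming the atguigu user and the hadoop102-104 hostnames used throughout this guide (the same key is later reused by the sshfence fencing method):

[atguigu@hadoop102 ~]$ ssh-keygen -t rsa
[atguigu@hadoop102 ~]$ ssh-copy-id hadoop102
[atguigu@hadoop102 ~]$ ssh-copy-id hadoop103
[atguigu@hadoop102 ~]$ ssh-copy-id hadoop104

Repeat on hadoop103 and hadoop104 so that every node can reach every other node without a password.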
3.2 Configure the HDFS-HA Cluster

- Create an ha directory under /opt:
[atguigu@hadoop102 ~]$ cd /opt
[atguigu@hadoop102 opt]$ sudo mkdir ha
[atguigu@hadoop102 opt]$ sudo chown atguigu:atguigu /opt/ha
- Copy hadoop-3.1.3 from /opt/module into /opt/ha (then delete its data and log directories; see the sketch after the copy command):
[atguigu@hadoop102 opt]$ cp -r /opt/module/hadoop-3.1.3 /opt/ha/
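Sketch of the cleanup step; the data/ and logs/ directory names assume the defaults left behind by the earlier non-HA cluster:

[atguigu@hadoop102 opt]$ rm -rf /opt/ha/hadoop-3.1.3/data /opt/ha/hadoop-3.1.3/logs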
- Configure core-site.xml:

<configuration>
  <!-- Point the default file system at the logical nameservice mycluster -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <!-- Base directory for Hadoop data -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/ha/hadoop-3.1.3/data</value>
  </property>
</configuration>
- Configure hdfs-site.xml:

<configuration>
  <!-- NameNode metadata storage directory -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file://${hadoop.tmp.dir}/name</value>
  </property>
  <!-- DataNode block storage directory -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file://${hadoop.tmp.dir}/data</value>
  </property>
  <!-- JournalNode edits storage directory -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>${hadoop.tmp.dir}/jn</value>
  </property>
  <!-- The logical nameservice -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <!-- The NameNodes that make up the nameservice -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2,nn3</value>
  </property>
  <!-- RPC address of each NameNode -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>hadoop102:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>hadoop103:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn3</name>
    <value>hadoop104:8020</value>
  </property>
  <!-- Web (HTTP) address of each NameNode -->
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>hadoop102:9870</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>hadoop103:9870</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn3</name>
    <value>hadoop104:9870</value>
  </property>
  <!-- Shared edits directory on the JournalNode quorum -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop102:8485;hadoop103:8485;hadoop104:8485/mycluster</value>
  </property>
  <!-- Proxy provider HDFS clients use to locate the Active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing method used during failover -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <!-- SSH private key required by sshfence -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/atguigu/.ssh/id_rsa</value>
  </property>
</configuration>
- Distribute the configured Hadoop environment to the other nodes (sketched below).
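For example, with the course's xsync distribution script (the same script used later for the configuration files; a plain scp or rsync loop works just as well):

[atguigu@hadoop102 opt]$ xsync /opt/ha/hadoop-3.1.3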
- Point the HADOOP_HOME environment variable at the HA directory (on all three machines):
[atguigu@hadoop102 ~]$ sudo vim /etc/profile.d/my_env.sh

#HADOOP_HOME
export HADOOP_HOME=/opt/ha/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
Source the environment file on all three machines:

[atguigu@hadoop102 ~]$ source /etc/profile
3.3 Start the HDFS-HA Cluster

- On each JournalNode node, run the following command to start the journalnode service:
[atguigu@hadoop102 ~]$ hdfs --daemon start journalnode
[atguigu@hadoop103 ~]$ hdfs --daemon start journalnode
[atguigu@hadoop104 ~]$ hdfs --daemon start journalnode
- On [nn1], format the NameNode and start it:
[atguigu@hadoop102 ~]$ hdfs namenode -format
[atguigu@hadoop102 ~]$ hdfs --daemon start namenode
- On [nn2] and [nn3], sync the metadata from [nn1]:
[atguigu@hadoop103 ~]$ hdfs namenode -bootstrapStandby
[atguigu@hadoop104 ~]$ hdfs namenode -bootstrapStandby
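An optional sanity check: after bootstrapping, the copied fsimage should be visible under the name directory configured in hdfs-site.xml:

[atguigu@hadoop103 ~]$ ls /opt/ha/hadoop-3.1.3/data/name/current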
- Start [nn2] and [nn3]:
[atguigu@hadoop103 ~]$ hdfs --daemon start namenode
[atguigu@hadoop104 ~]$ hdfs --daemon start namenode
- Check what the web pages show.
All three NameNodes are currently Standby. Since this is a manually configured HA cluster, one of them has to be switched to Active (a quick state check is sketched below).
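A small loop over the NameNode IDs defined in hdfs-site.xml; at this point every line of output should read standby:

[atguigu@hadoop102 ~]$ for nn in nn1 nn2 nn3; do hdfs haadmin -getServiceState $nn; done
standby
standby
standby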
- Start the DataNodes on all nodes:
[atguigu@hadoop102 ~]$ hdfs --daemon start datanode
[atguigu@hadoop103 ~]$ hdfs --daemon start datanode
[atguigu@hadoop104 ~]$ hdfs --daemon start datanode
- Switch [nn1] to Active:
[atguigu@hadoop102 ~]$ hdfs haadmin -transitionToActive nn1
- Check whether it is Active:
[atguigu@hadoop102 ~]$ hdfs haadmin -getServiceState nn1
- Kill the NameNode process on hadoop102 (a sketch of the kill follows).
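Sketch of locating and killing the process; the PID is whatever jps reports on your machine:

[atguigu@hadoop102 ~]$ jps | grep NameNode
[atguigu@hadoop102 ~]$ kill -9 <NameNode PID>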
Now try to switch [nn2] to Active on hadoop103: the transition is rejected, because the dead nn1 cannot be reached (error output omitted; see the analysis below).
Start [nn1] again:
[atguigu@hadoop102 ~]$ hdfs --daemon start namenode
hadoop102 now comes back in the Standby state. If [nn2] on hadoop103 is then switched to Active:
[atguigu@hadoop103 ~]$ hdfs haadmin -transitionToActive nn2
[atguigu@hadoop103 ~]$ hdfs haadmin -getServiceState nn2
active
Analysis: why does a manually configured HA cluster require all NameNodes to be running before one of them can be switched to Active?

Reason: the cluster enforces an isolation rule: at most one Active NameNode may serve clients at any given time. With three NameNodes configured, switching hadoop102's NameNode to Active requires that hadoop102 can reach both hadoop103 and hadoop104 to confirm their states. If hadoop102 cannot connect to hadoop104, that only proves that hadoop102 and hadoop104 cannot talk to each other; hadoop104 may still be able to communicate with other machines. Suppose hadoop104's NameNode were in the Active state and hadoop102's NameNode were nevertheless switched to Active: there would then be two Actives at once, a split-brain situation. That is why every NameNode must be up before a manual transition.
This is why manual-mode HA is not high availability in the true sense.
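(For reference: hdfs haadmin does accept flags such as --forceactive that bypass this safety check, but using them invites exactly the split-brain scenario described above, so this guide never does.)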
4. HDFS-HA Automatic Mode

4.1 How HDFS-HA Automatic Failover Works

Automatic failover adds two new components to the HDFS deployment: ZooKeeper and the ZKFailoverController (ZKFC) process. ZooKeeper is a highly available service that maintains a small amount of coordination data, notifies clients when that data changes, and monitors clients for failures.

4.2 Cluster Plan for HDFS-HA Automatic Failover

| hadoop102 | hadoop103 | hadoop104 |
|---|---|---|
| NameNode | NameNode | NameNode |
| JournalNode | JournalNode | JournalNode |
| DataNode | DataNode | DataNode |
| Zookeeper | Zookeeper | Zookeeper |
| ZKFC | ZKFC | ZKFC |
4.3 Configure HDFS-HA Automatic Failover

4.3.1 Detailed Configuration

- Add to hdfs-site.xml:
<!-- Enable automatic failover -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
- Add to core-site.xml:
<!-- ZooKeeper quorum used for the ZKFC election -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>hadoop102:2181,hadoop103:2181,hadoop104:2181</value>
</property>
- Distribute the two files:

[atguigu@hadoop102 hadoop]$ xsync hdfs-site.xml core-site.xml

4.3.2 Startup
- Stop all HDFS services:
[atguigu@hadoop102 hadoop]$ stop-dfs.sh
- Start the ZooKeeper cluster:
[atguigu@hadoop102 hadoop]$ zk.sh start
The zk.sh script:
[atguigu@hadoop102 bin]$ cat zk.sh
#!/bin/bash
case $1 in
"start"){
for i in hadoop102 hadoop103 hadoop104
do
echo ----------zookeeper $i start----------
ssh $i "/opt/module/zookeeper-3.5.7/bin/zkServer.sh start"
done
};;
"stop"){
for i in hadoop102 hadoop103 hadoop104
do
echo ----------zookeeper $i stop----------
ssh $i "/opt/module/zookeeper-3.5.7/bin/zkServer.sh stop"
done
};;
"status"){
for i in hadoop102 hadoop103 hadoop104
do
echo ----------zookeeper $i status----------
ssh $i "/opt/module/zookeeper-3.5.7/bin/zkServer.sh status"
done
};;
esac
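zk.sh is assumed to live in a directory on the PATH (e.g. ~/bin, as in the course setup) and must be executable:

[atguigu@hadoop102 bin]$ chmod +x zk.sh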
- With ZooKeeper running, initialize the HA state in ZooKeeper:
[atguigu@hadoop102 hadoop-3.1.3]$ hdfs zkfc -formatZK
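Optionally verify in the zkCli.sh client that the znode was created; the path follows from the nameservice ID mycluster:

[atguigu@hadoop102 zookeeper-3.5.7]$ bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /hadoop-ha
[mycluster]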
- Start the HDFS services:
[atguigu@hadoop102 hadoop-3.1.3]$ start-dfs.sh
Check the processes: besides the NameNode, DataNode, JournalNode, and QuorumPeerMain processes, each node should now also show a DFSZKFailoverController (jps screenshot omitted).
- You can inspect the contents of the NameNode election lock node in the zkCli.sh client:
[atguigu@hadoop102 zookeeper-3.5.7]$ bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 10] get -s /hadoop-ha/mycluster/ActiveStandbyElectorLock
The NameNode on hadoop103 is Active; the web pages confirm it (screenshots of hadoop102, hadoop103, and hadoop104 omitted).
- Kill the Active NameNode's process (note: this time on hadoop103):
[atguigu@hadoop103 ~]$ jps
41681 DFSZKFailoverController
42789 Jps
41334 NameNode
41560 JournalNode
41145 QuorumPeerMain
41420 DataNode
[atguigu@hadoop103 ~]$ kill -9 41334
Check in the zkCli.sh client again:
[zk: localhost:2181(CONNECTED) 0] get -s /hadoop-ha/mycluster/Active
ActiveBreadCrumb   ActiveStandbyElectorLock
[zk: localhost:2181(CONNECTED) 0] get -s /hadoop-ha/mycluster/ActiveStandbyElectorLock
myclusternn3 hadoop104 �>(�>
cZxid = 0x1100000016
ctime = Tue Feb 01 16:42:21 CST 2022
mZxid = 0x1100000016
mtime = Tue Feb 01 16:42:21 CST 2022
pZxid = 0x1100000016
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x300137a86600001
dataLength = 33
numChildren = 0
The Active NameNode has automatically moved to hadoop104; the web pages confirm it (screenshots of hadoop104 and hadoop102 omitted).
Kill the Active NameNode's process again (note: this time on hadoop104):
[atguigu@hadoop104 ~]$ jps
40529 JournalNode
40115 QuorumPeerMain
40391 DataNode
42136 Jps
40652 DFSZKFailoverController
40303 NameNode
[atguigu@hadoop104 ~]$ kill -9 40303
[atguigu@hadoop104 ~]$ jps
40529 JournalNode
40115 QuorumPeerMain
40391 DataNode
40652 DFSZKFailoverController
42175 Jps
Check in the zkCli.sh client once more:
[zk: localhost:2181(CONNECTED) 0] get -s /hadoop-ha/mycluster/ActiveStandbyElectorLock
myclusternn1 hadoop102 �>(�>
cZxid = 0x110000001b
ctime = Tue Feb 01 16:48:07 CST 2022
mZxid = 0x110000001b
mtime = Tue Feb 01 16:48:07 CST 2022
pZxid = 0x110000001b
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x400137b09dd0000
dataLength = 33
numChildren = 0
The Active NameNode has moved to hadoop102; the web page confirms it (screenshot omitted).
Finally, restart the NameNode processes on hadoop103 and hadoop104:
[atguigu@hadoop103 zookeeper-3.5.7]$ hdfs --daemon start namenode
[atguigu@hadoop104 zookeeper-3.5.7]$ hdfs --daemon start namenode
Both rejoin the cluster in the Standby state (web page screenshots of hadoop103 and hadoop104 omitted).
5. YARN-HA Configuration

5.1 How YARN-HA Works

Multiple ResourceManagers can now be started. Whichever one starts first registers an ephemeral node in ZooKeeper and becomes the Active ResourceManager. The ones that start later also try to register, find that the ephemeral node already exists, and become Standby ResourceManagers. Each Standby ResourceManager keeps a long-polling watch on whether that node still exists; once the ephemeral node is gone (i.e., the Active ResourceManager died, so the node was deleted automatically), a Standby ResourceManager automatically switches itself to Active. The election can be imitated by hand, as sketched below.
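An illustrative zkCli.sh walkthrough of the ephemeral-node election; /demo-election is a made-up path used only for this demonstration:

Session A (becomes "Active"):
[zk: localhost:2181(CONNECTED) 0] create /demo-election ""
[zk: localhost:2181(CONNECTED) 1] create -e /demo-election/lock "rm1"

Session B (stays "Standby" because the lock node already exists):
[zk: localhost:2181(CONNECTED) 0] create -e /demo-election/lock "rm2"
Node already exists: /demo-election/lock

When session A closes (the "Active RM" dies), ZooKeeper deletes the ephemeral node automatically, and session B's retry of the create succeeds, i.e. it takes over as Active.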
5.2 Configure the YARN-HA Cluster

- Cluster plan:
| hadoop102 | hadoop103 | hadoop104 |
|---|---|---|
| ResourceManager | ResourceManager | ResourceManager |
| NodeManager | NodeManager | NodeManager |
| Zookeeper | Zookeeper | Zookeeper |
- Core questions:

If the current Active RM dies, how does one of the Standby RMs take over?
The core principle is the same as in HDFS: use a ZooKeeper ephemeral node.

The failed RM still had many applications waiting to run; how do the other RMs take those applications over and keep them running?
The RM stores the state of all running applications in ZooKeeper; after a new RM takes over, it reads that state and resumes from it.
- Detailed configuration

yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <!-- Enable ResourceManager HA -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- Logical cluster ID -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>cluster-yarn1</value>
  </property>
  <!-- ResourceManager IDs -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2,rm3</value>
  </property>
  <!-- rm1 (hadoop102) -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>hadoop102</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>hadoop102:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address.rm1</name>
    <value>hadoop102:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm1</name>
    <value>hadoop102:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
    <value>hadoop102:8031</value>
  </property>
  <!-- rm2 (hadoop103) -->
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>hadoop103</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>hadoop103:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address.rm2</name>
    <value>hadoop103:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm2</name>
    <value>hadoop103:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
    <value>hadoop103:8031</value>
  </property>
  <!-- rm3 (hadoop104) -->
  <property>
    <name>yarn.resourcemanager.hostname.rm3</name>
    <value>hadoop104</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm3</name>
    <value>hadoop104:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address.rm3</name>
    <value>hadoop104:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm3</name>
    <value>hadoop104:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm3</name>
    <value>hadoop104:8031</value>
  </property>
  <!-- ZooKeeper quorum -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>hadoop102:2181,hadoop103:2181,hadoop104:2181</value>
  </property>
  <!-- Enable RM state recovery -->
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <!-- Store RM state in ZooKeeper -->
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <!-- Environment variables inherited by containers -->
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>
Distribute yarn-site.xml.

- Start YARN

(1) Start YARN:
[atguigu@hadoop102 hadoop]$ start-yarn.sh
Starting resourcemanagers on [ hadoop102 hadoop103 hadoop104]
Starting nodemanagers
[atguigu@hadoop102 hadoop]$ jpsall
=============== hadoop102 ===============
46101 DataNode
46566 DFSZKFailoverController
48871 ResourceManager
46360 JournalNode
49194 Jps
48989 NodeManager
45646 QuorumPeerMain
47631 NameNode
=============== hadoop103 ===============
41681 DFSZKFailoverController
44545 NodeManager
44897 Jps
44466 ResourceManager
43221 NameNode
41560 JournalNode
41145 QuorumPeerMain
41420 DataNode
=============== hadoop104 ===============
43744 NodeManager
40529 JournalNode
42434 NameNode
40115 QuorumPeerMain
43923 Jps
40391 DataNode
40652 DFSZKFailoverController
43663 ResourceManager
(2) Check the service states:
[atguigu@hadoop102 hadoop]$ yarn rmadmin -getServiceState rm1
standby
[atguigu@hadoop102 hadoop]$ yarn rmadmin -getServiceState rm2
active
[atguigu@hadoop102 hadoop]$ yarn rmadmin -getServiceState rm3
standby
(3) You can inspect the contents of the ResourceManager election lock node in the zkCli.sh client:
[zk: localhost:2181(CONNECTED) 0] get -s /yarn-leader-election/cluster-yarn1/ActiveStandbyElectorLock
cluster-yarn1rm2
cZxid = 0x1100000030
ctime = Tue Feb 01 17:19:03 CST 2022
mZxid = 0x1100000030
mtime = Tue Feb 01 17:19:03 CST 2022
pZxid = 0x1100000030
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x300137a86600005
dataLength = 20
numChildren = 0
(4) Check the YARN state in the browser at hadoop102:8088:
It redirects automatically to hadoop103:8088/cluster (rm2 is the Active RM).
(5) Now kill the ResourceManager process on hadoop103:
[atguigu@hadoop103 zookeeper-3.5.7]$ jps
41681 DFSZKFailoverController
44545 NodeManager
44466 ResourceManager
43221 NameNode
41560 JournalNode
41145 QuorumPeerMain
45195 Jps
41420 DataNode
[atguigu@hadoop103 zookeeper-3.5.7]$ kill -9 44466
Check the service states:
[atguigu@hadoop103 zookeeper-3.5.7]$ yarn rmadmin -getServiceState rm1
active
[atguigu@hadoop103 zookeeper-3.5.7]$ yarn rmadmin -getServiceState rm2
2022-02-01 17:46:50,053 INFO ipc.Client: Retrying connect to server: hadoop103/192.168.10.103:8033. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
Operation failed: Call From hadoop103/192.168.10.103 to hadoop103:8033 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
[atguigu@hadoop103 zookeeper-3.5.7]$ yarn rmadmin -getServiceState rm3
standby
Check the YARN state in the browser at hadoop102:8088 and hadoop104:8088:
Both redirect automatically to hadoop102:8088/cluster (rm1 is now Active). A sketch of restarting the killed RM follows.
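To bring the killed ResourceManager back as a Standby (a closing step the original notes stop short of; the daemon command mirrors the hdfs one used earlier):

[atguigu@hadoop103 ~]$ yarn --daemon start resourcemanager
[atguigu@hadoop103 ~]$ yarn rmadmin -getServiceState rm2
standby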
6. Final Plan for Hadoop HA

After the whole HA setup is complete, the final cluster plan:
| hadoop102 | hadoop103 | hadoop104 |
|---|---|---|
| NameNode | NameNode | NameNode |
| JournalNode | JournalNode | JournalNode |
| DataNode | DataNode | DataNode |
| Zookeeper | Zookeeper | Zookeeper |
| ZKFC | ZKFC | ZKFC |
| ResourceManager | ResourceManager | ResourceManager |
| NodeManager | NodeManager | NodeManager |



