Once you have completed the installation by following the earlier guide linked below, continue here to set up high availability (HA).
https://blog.csdn.net/weixin_45955039/article/details/118784204?spm=1001.2014.3001.5502

1. Enter the Hadoop configuration directory
cd /usr/local/hadoop/etc/hadoop

2. Configure core-site.xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>file:/usr/local/hadoop/tmp</value>
</property>
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://ns</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>master:2181,slave1:2181,slave2:2181</value>
</property>

3. Configure hdfs-site.xml
<property>
  <name>dfs.nameservices</name>
  <value>ns</value>
</property>
<property>
  <name>dfs.ha.namenodes.ns</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns.nn1</name>
  <value>master:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns.nn1</name>
  <value>master:50070</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns.nn2</name>
  <value>slave1:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns.nn2</name>
  <value>slave1:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://master:8485;slave1:8485;slave2:8485/ns</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/usr/local/hadoop/tmp/journal</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/hadoop/tmp/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop/tmp/dfs/data</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.ns</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/hadoop/.ssh/id_rsa</value>
</property>
<property>
  <name>dfs.qjournal.write-txns.timeout.ms</name>
  <value>60000</value>
</property>

4. Copy the modified files to the other two virtual machines
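Before running the copy commands, it can be worth confirming that the edited files still parse as well-formed XML, since a stray tag will stop the daemons from starting. A minimal sketch using Python's standard-library parser; the inline sample and the /tmp path are illustrative only, and on a real cluster you would point the same parse at core-site.xml and hdfs-site.xml:

```shell
# Sketch: check that an edited *-site.xml file is well-formed XML before
# distributing it. The sample file below is illustrative; substitute
# /usr/local/hadoop/etc/hadoop/core-site.xml etc. on your cluster.
cat > /tmp/sample-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns</value>
  </property>
</configuration>
EOF

# ET.parse raises an error on malformed XML, so reaching the print
# means the file is well-formed.
python3 -c "import xml.etree.ElementTree as ET; ET.parse('/tmp/sample-site.xml'); print('well-formed')"
```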
sudo scp -r /usr/local/hadoop/etc/hadoop slave1:`pwd`
sudo scp -r /usr/local/hadoop/etc/hadoop slave2:`pwd`

5. Start ZooKeeper
zkServer.sh start

6. Check the ZooKeeper status
zkServer.sh status

7. Start the JournalNode on all three nodes with the following command:
hadoop-daemon.sh start journalnode

8. Format the NameNode on master; run the following on master:
hdfs namenode -format

9. Start the NameNode on master; run the following on master:
hadoop-daemon.sh start namenode

10. Sync the NameNode data from master to slave1; run the following on slave1:
hdfs namenode -bootstrapStandby

11. Stop the NameNode on master; run the following on master:
hadoop-daemon.sh stop namenode

12. Initialize the ZooKeeper failover monitor (ZKFC) on master; run the following on master:
hdfs zkfc -formatZK

13. Start HDFS and YARN on master
# Stop HDFS and YARN
sbin/stop-dfs.sh
sbin/stop-yarn.sh
# Start HDFS and YARN
sbin/start-dfs.sh
sbin/start-yarn.sh
# Start everything
sbin/start-all.sh
# Stop everything
sbin/stop-all.sh
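Once everything is up, a quick way to confirm the HA pair is healthy is to list the running daemons and ask HDFS which NameNode is active. These commands must be run on the live cluster; `nn1` and `nn2` are the NameNode IDs defined in `dfs.ha.namenodes.ns` above:

```shell
# List the running Java daemons on each node. Depending on the node's role
# you should see NameNode, DataNode, JournalNode, DFSZKFailoverController,
# and QuorumPeerMain (ZooKeeper).
jps

# Query the state of each NameNode; one should report "active" and the
# other "standby" if automatic failover is set up correctly.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```

If one NameNode reports active and the other standby, the ZKFC-based automatic failover configured in hdfs-site.xml is working.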



