- I. Start the HDFS cluster
- 1. Start ZooKeeper
- 2. Start the JournalNodes
- 3. Format the NameNode (on node hadoop60)
- 4. Format zkfc (on node hadoop60)
- 5. Sync the standby NameNodes' metadata from the active NameNode
- (1) Start the NameNode on the primary node hadoop60
- (2) Check jps and confirm the NameNode is running
- 6. Inspect the primary NameNode's metadata
- 7. hadoop61 and hadoop62 sync the NameNode metadata from hadoop60
- 8. Inspect the standby NameNodes' metadata
- 9. Stop the primary NameNode process
- 10. Check jps and confirm the NameNode has stopped
- 11. Stop the JournalNode processes
- 12. Start the whole HDFS cluster with one command
- 13. View HDFS in the web UI (open the NameNode IP:9870 in a browser)
- 14. HDFS shell
1. Start ZooKeeper

Command: runRemoteCmd.sh "/home/hadoop/app/zookeeper/bin/zkServer.sh start" all

[hadoop@hadoop60 app]$ runRemoteCmd.sh "/home/hadoop/app/zookeeper/bin/zkServer.sh start" all
*******************hadoop60***********************
ZooKeeper JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
*******************hadoop61***********************
ZooKeeper JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
*******************hadoop62***********************
ZooKeeper JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
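Whether every node actually came up can be checked mechanically by counting the STARTED lines in the output. A minimal sketch; the transcript above is inlined here as sample data, and on a live cluster you would capture the real output of runRemoteCmd.sh instead:

```shell
# Sample output of: runRemoteCmd.sh ".../zkServer.sh start" all
# (inlined for illustration; capture the live output on a real cluster).
zk_output='*******************hadoop60***********************
Starting zookeeper ... STARTED
*******************hadoop61***********************
Starting zookeeper ... STARTED
*******************hadoop62***********************
Starting zookeeper ... STARTED'

expected_nodes=3
# Count how many nodes reported STARTED.
started_count=$(printf '%s\n' "$zk_output" | grep -c 'STARTED')

if [ "$started_count" -eq "$expected_nodes" ]; then
  echo "ZooKeeper started on all $expected_nodes nodes"
else
  echo "only $started_count of $expected_nodes nodes reported STARTED" >&2
fi
```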
2. Start the JournalNodes

Command: runRemoteCmd.sh "/home/hadoop/app/hadoop/bin/hdfs --daemon start journalnode" all

[hadoop@hadoop60 app]$ runRemoteCmd.sh "/home/hadoop/app/hadoop/bin/hdfs --daemon start journalnode" all
*******************hadoop60***********************
journalnode is running as process 2303.  Stop it first.
*******************hadoop61***********************
journalnode is running as process 2781.  Stop it first.
*******************hadoop62***********************
journalnode is running as process 2763.  Stop it first.

Note: "journalnode is running as process ... Stop it first." simply means the JournalNode was already running on that node, so nothing more needs to be done.
3. Format the NameNode (on node hadoop60)

Command: bin/hdfs namenode -format

[hadoop@hadoop60 hadoop]$ bin/hdfs namenode -format

Note: if the log reports that the storage directory has been successfully formatted, the format succeeded.
4. Format zkfc (on node hadoop60)

Command: bin/hdfs zkfc -formatZK

[hadoop@hadoop60 hadoop]$ bin/hdfs zkfc -formatZK

Note: if the log reports that the HA znode was successfully created in ZooKeeper, the format succeeded.
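One way to double-check that -formatZK worked is to look for the /hadoop-ha znode in ZooKeeper, e.g. with zkCli.sh ls /hadoop-ha. A sketch that parses such a listing; the nameservice name "mycluster" is a stand-in for whatever dfs.nameservices is set to in your hdfs-site.xml, and the listing is inlined as sample data:

```shell
# Hypothetical output of: zkCli.sh -server hadoop60:2181 ls /hadoop-ha
# ("mycluster" stands in for your configured dfs.nameservices value).
ls_output='[mycluster]'
nameservice='mycluster'

# After a successful formatZK, the HA znode should list the nameservice.
case "$ls_output" in
  *"$nameservice"*) zkfc_ok=yes ;;
  *)                zkfc_ok=no  ;;
esac
echo "zkfc format recorded in ZooKeeper: $zkfc_ok"
```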
5. Sync the standby NameNodes' metadata from the active NameNode

(1) Start the NameNode on the primary node hadoop60

[hadoop@hadoop60 hadoop]$ bin/hdfs --daemon start namenode

(2) Check jps: the NameNode is running

[hadoop@hadoop60 hadoop]$ jps
2724 Jps
2681 NameNode
2303 JournalNode
2463 QuorumPeerMain
6. Inspect the primary NameNode's metadata

[hadoop@hadoop60 hadoop]$ cat /home/hadoop/data/hadoop3/tmp/dfs/name/current/VERSION
#Sat Oct 30 09:39:34 CST 2021
namespaceID=1264475431
clusterID=CID-631a92bd-c0c3-4cbd-9b0b-0af8d35f437e
cTime=1635557974151
storageType=NAME_NODE
blockpoolID=BP-1478213952-192.168.24.60-1635557974151
layoutVersion=-65
[hadoop@hadoop60 hadoop]$
7. hadoop61 and hadoop62 sync the NameNode metadata from hadoop60

[hadoop@hadoop61 hadoop]$ bin/hdfs namenode -bootstrapStandby
[hadoop@hadoop62 hadoop]$ bin/hdfs namenode -bootstrapStandby

Note: check the command output; if it completes without errors, the sync succeeded.

8. Inspect the standby NameNodes' metadata

Note: compare the VERSION file on each node; if the values match, the NameNode metadata is in sync.
[hadoop@hadoop62 hadoop]$ cat /home/hadoop/data/hadoop3/tmp/dfs/name/current/VERSION
#Sat Oct 30 10:08:55 CST 2021
namespaceID=1264475431
clusterID=CID-631a92bd-c0c3-4cbd-9b0b-0af8d35f437e
cTime=1635557974151
storageType=NAME_NODE
blockpoolID=BP-1478213952-192.168.24.60-1635557974151
layoutVersion=-65
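The manual VERSION comparison can be automated: extract namespaceID and clusterID from each node's copy and check that they agree. A sketch using two local sample files; on a real cluster you would fetch each node's .../dfs/name/current/VERSION, for example over ssh:

```shell
# Two sample VERSION files standing in for hadoop60's and hadoop62's copies;
# on a real cluster, fetch each node's .../dfs/name/current/VERSION instead.
dir=$(mktemp -d)
cat > "$dir/hadoop60.VERSION" <<'EOF'
namespaceID=1264475431
clusterID=CID-631a92bd-c0c3-4cbd-9b0b-0af8d35f437e
EOF
cat > "$dir/hadoop62.VERSION" <<'EOF'
namespaceID=1264475431
clusterID=CID-631a92bd-c0c3-4cbd-9b0b-0af8d35f437e
EOF

# Keep only the identity fields; timestamps may legitimately differ.
ids() { grep -E '^(namespaceID|clusterID)=' "$1" | sort; }

if [ "$(ids "$dir/hadoop60.VERSION")" = "$(ids "$dir/hadoop62.VERSION")" ]; then
  version_match=yes
else
  version_match=no
fi
echo "NameNode metadata in sync: $version_match"
```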
9. Stop the primary NameNode process

[hadoop@hadoop60 hadoop]$ bin/hdfs --daemon stop namenode
10. Check jps: the NameNode has stopped

[hadoop@hadoop60 hadoop]$ jps
3041 Jps
2303 JournalNode
2463 QuorumPeerMain
11. Stop the JournalNode processes

[hadoop@hadoop60 hadoop]$ runRemoteCmd.sh "/home/hadoop/app/hadoop/bin/hdfs --daemon stop journalnode" all
*******************hadoop60***********************
*******************hadoop61***********************
*******************hadoop62***********************
[hadoop@hadoop60 hadoop]$
12. Start the whole HDFS cluster with one command

(1) Note: ZooKeeper must still be running before the start. Check with jps; as long as QuorumPeerMain is still there, you are fine.

[hadoop@hadoop60 hadoop]$ jps
3355 Jps
2463 QuorumPeerMain
(2) Start the HDFS cluster with one command
[hadoop@hadoop60 hadoop]$ sbin/start-dfs.sh
Starting namenodes on [hadoop60 hadoop61 hadoop62]
Starting datanodes
Starting journal nodes [hadoop60 hadoop62 hadoop61]
Starting ZK Failover Controllers on NN hosts [hadoop60 hadoop61 hadoop62]
[hadoop@hadoop60 hadoop]$
(3) Check each NameNode's state (one active and two standby means the cluster started correctly)
[hadoop@hadoop62 hadoop]$ bin/hdfs haadmin -getServiceState nn1
active
[hadoop@hadoop62 hadoop]$ bin/hdfs haadmin -getServiceState nn2
standby
[hadoop@hadoop62 hadoop]$ bin/hdfs haadmin -getServiceState nn3
standby
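The expected pattern above, exactly one active NameNode, can be validated with a small sketch. The three states are hard-coded here from the transcript; on a live cluster you would substitute $(bin/hdfs haadmin -getServiceState nnX) for each value:

```shell
# States as reported in the transcript; on a live cluster replace each
# literal with "$(bin/hdfs haadmin -getServiceState nnX)".
nn1_state=active
nn2_state=standby
nn3_state=standby

active_count=0
for state in "$nn1_state" "$nn2_state" "$nn3_state"; do
  if [ "$state" = "active" ]; then
    active_count=$((active_count + 1))
  fi
done

# A healthy HA cluster has exactly one active NameNode.
if [ "$active_count" -eq 1 ]; then
  echo "HA state OK: exactly one active NameNode"
else
  echo "HA state BAD: $active_count active NameNodes" >&2
fi
```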
13. View HDFS in the web UI (open the NameNode IP:9870 in a browser)

14. HDFS shell

(1) List the HDFS root directory; the filesystem is still empty.
[hadoop@hadoop60 hadoop]$ bin/hdfs dfs -ls /
[hadoop@hadoop60 hadoop]$
(2) Create a local file wd.txt
[hadoop@hadoop60 hadoop]$ vi wd.txt
[hadoop@hadoop60 hadoop]$ cat wd.txt
hadoop hadoop hadoop
[hadoop@hadoop60 hadoop]$
(3) Create the directory qwq
[hadoop@hadoop60 hadoop]$ bin/hdfs dfs -mkdir /qwq
(4) List the HDFS root; the directory qwq now exists.
[hadoop@hadoop60 hadoop]$ bin/hdfs dfs -ls /
Found 1 items
-rw-r--r--   3 hadoop supergroup         21 2021-10-30 10:43 /qwq
(5) The directory qwq can also be seen in the web UI.
(6) Upload the local file into the HDFS directory qwq
[hadoop@hadoop60 hadoop]$ bin/hdfs dfs -put wd.txt /qwq
(7) List the qwq directory; wd.txt now exists.
[hadoop@hadoop60 hadoop]$ bin/hdfs dfs -ls /qwq
Found 1 items
-rw-r--r--   3 hadoop supergroup         21 2021-10-30 13:32 /qwq/wd.txt
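A quick integrity check after a put is to compare the size reported by hdfs dfs -ls (column 5, in bytes) with the local file's byte count. A sketch that parses the listing line above; on a live cluster you would capture the line with bin/hdfs dfs -ls /qwq/wd.txt instead:

```shell
# Listing line as printed above; on a live cluster capture it with:
#   bin/hdfs dfs -ls /qwq/wd.txt | tail -1
ls_line='-rw-r--r--   3 hadoop supergroup         21 2021-10-30 13:32 /qwq/wd.txt'

# Column 5 of an "hdfs dfs -ls" entry is the file size in bytes.
hdfs_size=$(printf '%s\n' "$ls_line" | awk '{print $5}')

# Recreate the local file: "hadoop hadoop hadoop" plus a newline is 21 bytes.
tmp_local=$(mktemp)
printf 'hadoop hadoop hadoop\n' > "$tmp_local"
local_size=$(wc -c < "$tmp_local" | tr -d ' ')

if [ "$hdfs_size" -eq "$local_size" ]; then
  echo "upload size matches: $hdfs_size bytes"
else
  echo "size mismatch: hdfs=$hdfs_size local=$local_size" >&2
fi
```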
(8) The file wd.txt can also be seen in the web UI.



