Check the DataNode log file recorded on the machine for the error message:
2021-11-17 13:50:03,722 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/usr/hadoop/hadoop/dfs/data/
java.io.IOException: Incompatible clusterIDs in /usr/hadoop/hadoop/dfs/data: namenode clusterID = CID-13b3cdc7-596a-4e7e-9419-e741957c1521; datanode clusterID = CID-467d96b8-1049-4894-a768-41e56b2d2033
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:768)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:293)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:409)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:388)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:564)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1659)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1620)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:282)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:826)
    at java.lang.Thread.run(Thread.java:748)
2021-11-17 13:50:03,725 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid c00528f4-b2ba-4f6e-9c19-a43ba3e7224e) service to localhost/127.0.0.1:9000. Exiting.
java.io.IOException: All specified directories have failed to load.
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:565)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1659)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1620)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:282)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:826)
    at java.lang.Thread.run(Thread.java:748)
2021-11-17 13:50:03,725 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid c00528f4-b2ba-4f6e-9c19-a43ba3e7224e) service to localhost/127.0.0.1:9000
2021-11-17 13:50:03,736 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool (Datanode Uuid c00528f4-b2ba-4f6e-9c19-a43ba3e7224e)
2021-11-17 13:50:05,739 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2021-11-17 13:50:05,794 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
After looking up some material online, I found that the problem is mainly caused by a mismatch between the NameNode's clusterID and the DataNode's clusterID.
The cause is running hdfs namenode -format more than once: each format generates a new clusterID for the NameNode, while the DataNode still keeps its original clusterID.
Solution
1. Go to the Hadoop installation directory, open etc/hadoop, and check hdfs-site.xml for the paths configured under dfs.namenode.name.dir and dfs.datanode.data.dir.
2. Go into the current folder under the path configured for dfs.namenode.name.dir and view the VERSION file:
cat VERSION
or view it directly:
cat /usr/hadoop/hadoop/dfs/name/current/VERSION
Then copy the clusterID.
3. Next, go to the path configured for dfs.datanode.data.dir, find the current folder there, edit its VERSION file, and replace its clusterID with the NameNode's clusterID.
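The steps above can be sketched as a shell session. The block below simulates the two storage directories with temporary files (in a real cluster the paths come from hdfs-site.xml, e.g. /usr/hadoop/hadoop/dfs/name and /usr/hadoop/hadoop/dfs/data, and the VERSION files contain more fields); the sed line is the actual fix:

```shell
# Simulated storage dirs; on a real cluster use the paths from
# dfs.namenode.name.dir and dfs.datanode.data.dir in hdfs-site.xml.
DFS=$(mktemp -d)
mkdir -p "$DFS/name/current" "$DFS/data/current"

# Minimal stand-ins for the two VERSION files (fields abbreviated).
cat > "$DFS/name/current/VERSION" <<'EOF'
clusterID=CID-13b3cdc7-596a-4e7e-9419-e741957c1521
storageType=NAME_NODE
EOF
cat > "$DFS/data/current/VERSION" <<'EOF'
clusterID=CID-467d96b8-1049-4894-a768-41e56b2d2033
storageType=DATA_NODE
EOF

# Step 2: read the NameNode's clusterID.
NN_CID=$(grep '^clusterID=' "$DFS/name/current/VERSION" | cut -d= -f2)

# Step 3: write it into the DataNode's VERSION file.
# (GNU sed; on BSD/macOS use: sed -i '' ...)
sed -i "s/^clusterID=.*/clusterID=${NN_CID}/" "$DFS/data/current/VERSION"

# Both files should now report the same clusterID.
grep '^clusterID=' "$DFS/name/current/VERSION" "$DFS/data/current/VERSION"
```

Stop the DataNode before editing the real VERSION file, since the daemon only reads it at startup.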
Done! Just restart HDFS and the DataNode should come up.
