In the previous post we prepared three virtual machines for a one-master, two-worker Hadoop cluster. If you haven't set them up yet, see the link below.
VM installation guide: Building a Hadoop Cluster on a Mac (VM Installation)
## Cluster plan

| Host | Roles |
|---|---|
| node1 | NN, DN, RM, NM |
| node2 | SNN, DN, NM |
| node3 | DN, NM |

(NN = NameNode, SNN = SecondaryNameNode, DN = DataNode, RM = ResourceManager, NM = NodeManager)
Hadoop 3.3 installation package download
## Change the hostnames

Set the hostnames of the three VMs to node1, node2, and node3 respectively.
```shell
vim /etc/hostname
```
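On CentOS 7 you can also set the hostname without editing the file; a quick alternative sketch (run the matching command on each node):

```shell
hostnamectl set-hostname node1   # on node1; use node2 / node3 on the other VMs
```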
## Add host mappings

```shell
vim /etc/hosts
# content to add
172.16.254.4 node1
172.16.254.5 node2
172.16.254.6 node3
```

## Time sync and firewall shutdown
```shell
# sync the cluster clocks
ntpdate ntp5.aliyun.com

# firewall
firewall-cmd --state                  # check the firewall status
systemctl stop firewalld.service      # stop the firewalld service
systemctl disable firewalld.service   # keep firewalld from starting at boot
```
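ntpdate performs a one-off sync; if you want the clocks to stay aligned, a minimal sketch using cron (an addition to the steps above, assuming crond is running as it is by default on CentOS 7):

```shell
# on every node: re-sync against the Aliyun NTP server every hour
(crontab -l 2>/dev/null; echo '0 * * * * /usr/sbin/ntpdate ntp5.aliyun.com') | crontab -
```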
## Passwordless SSH login

Only node1 to node1, node2, and node3 needs to be configured.

```shell
# generate a key pair on node1 (just press Enter at every prompt)
ssh-keygen
# copy node1's public key to node1, node2, and node3
ssh-copy-id node1
ssh-copy-id node2
ssh-copy-id node3
```
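A quick way to confirm the passwordless login took effect (each command should print the remote hostname without asking for a password):

```shell
ssh node1 hostname
ssh node2 hostname
ssh node3 hostname
```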
## Upload the installation packages

Create the /export/server directory:
```shell
mkdir -p /export/server
```
Upload the two archives from the Hadoop 3.3 download, jdk-8u241-linux-x64.tar.gz and hadoop-3.3.0-Centos7-64-with-snappy.tar.gz, to /export/server on node1.
## Extract the archives

```shell
cd /export/server
tar zxvf jdk-8u241-linux-x64.tar.gz
tar zxvf hadoop-3.3.0-Centos7-64-with-snappy.tar.gz
```

## Configure the environment variables
```shell
# edit the environment variables
vim /etc/profile

# jdk
export JAVA_HOME=/export/server/jdk1.8.0_241
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

# hadoop
export HADOOP_HOME=/export/server/hadoop-3.3.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```
After saving, remember to run `source /etc/profile`, then check that the configuration took effect, as shown below.
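A quick sanity check for both the JDK and Hadoop (the version strings should match the archives installed above):

```shell
source /etc/profile
java -version      # expect java version "1.8.0_241"
hadoop version     # expect Hadoop 3.3.0
```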
## Edit the Hadoop configuration files

All of the following files live under /export/server/hadoop-3.3.0/etc/hadoop; each XML snippet below goes inside the `<configuration>` element of its file.
- hadoop-env.sh
```shell
# append at the end of the file
export JAVA_HOME=/export/server/jdk1.8.0_241
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```
- core-site.xml
```xml
<!-- default file system: HDFS on node1 -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://node1:8020</value>
</property>
<!-- base directory for Hadoop's local data -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/export/data/hadoop-3.3.0/data</value>
</property>
<!-- user shown as the static user on the web UI -->
<property>
  <name>hadoop.http.staticuser.user</name>
  <value>root</value>
</property>
<!-- allow root to proxy requests from any host/group -->
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
<!-- keep deleted files in the trash for 1440 minutes (1 day) -->
<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
</property>
```
- hdfs-site.xml
```xml
<!-- NameNode RPC endpoint -->
<property>
  <name>dfs.namenode.rpc-address</name>
  <value>node1:8020</value>
</property>
<!-- where the NameNode stores its metadata -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/export/server/name</value>
</property>
<!-- where DataNodes store block data -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/export/server/data</value>
</property>
<!-- SecondaryNameNode web endpoint, on node2 per the plan -->
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>node2:9868</value>
</property>
```
- mapred-site.xml
```xml
<!-- run MapReduce on YARN -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<!-- JobHistory server endpoints -->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>node1:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>node1:19888</value>
</property>
<!-- make MR tasks inherit the Hadoop home directory -->
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
```
- yarn-site.xml
```xml
<!-- ResourceManager runs on node1 -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>node1</value>
</property>
<!-- shuffle service for MapReduce -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<!-- disable physical/virtual memory checks (small VMs) -->
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<!-- log aggregation, served by the JobHistory server -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.log.server.url</name>
  <value>http://node1:19888/jobhistory/logs</value>
</property>
<!-- keep aggregated logs for 7 days -->
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>
```
- workers
One host per line:

```
node1
node2
node3
```
Distribute the configured Hadoop directory to node2 and node3:

```shell
cd /export/server
scp -r hadoop-3.3.0 root@node2:$PWD
scp -r hadoop-3.3.0 root@node3:$PWD
```
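node2 and node3 also need the JDK and the same /etc/profile entries before they can start any daemons; a minimal sketch, assuming the paths used above:

```shell
# still on node1, in /export/server
scp -r jdk1.8.0_241 root@node2:$PWD
scp -r jdk1.8.0_241 root@node3:$PWD
scp /etc/profile root@node2:/etc/profile
scp /etc/profile root@node3:/etc/profile
# then run `source /etc/profile` on node2 and node3
```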
## Start the Hadoop cluster

- Format the NameNode (first startup only; never run this once the cluster holds data):

```shell
hdfs namenode -format
```
- Start everything with the bundled scripts

`start-all.sh` brings up HDFS and YARN in one go, but you can also start them separately:

```shell
[root@node1 ~]# start-dfs.sh
Starting namenodes on [node1]
Last login: Thu Nov  5 10:44:10 CST 2020 on pts/0
Starting datanodes
Last login: Thu Nov  5 10:45:02 CST 2020 on pts/0
Starting secondary namenodes [node2]
Last login: Thu Nov  5 10:45:04 CST 2020 on pts/0
[root@node1 ~]# start-yarn.sh
Starting resourcemanager
Last login: Thu Nov  5 10:45:08 CST 2020 on pts/0
Starting nodemanagers
Last login: Thu Nov  5 10:45:44 CST 2020 on pts/0
[root@node1 ~]# start-all.sh
```
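The matching shutdown scripts work the same way when you need to bring the cluster down:

```shell
stop-yarn.sh   # stop the YARN daemons
stop-dfs.sh    # stop the HDFS daemons
# or both at once:
stop-all.sh
```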
Run `jps` on each node to check that the running daemons match the cluster plan; node1, for example, should show NameNode, DataNode, ResourceManager, and NodeManager.
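For reference, a jps listing on node1 that matches the plan would look something like this (the PIDs are illustrative):

```shell
[root@node1 ~]# jps
8977 NameNode
9128 DataNode
9621 ResourceManager
9774 NodeManager
10112 Jps
```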
- Web UI pages
  - HDFS cluster: http://node1:9870/ (you may need to use the IP, http://172.16.254.4:9870/, for the page to open)
  - YARN cluster: http://node1:8088/
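If the pages don't open by hostname from your Mac, the node1/node2/node3 mappings may also need to be added to the Mac's own /etc/hosts. A quick reachability check from any node (a sketch; HEAD requests against the NameNode web server):

```shell
curl -sI http://node1:9870/ | head -n 1   # a 200 (or redirect) response means the UI is up
```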
With that, the Hadoop cluster is up and running!



