Environment: VMware, CentOS 7, JDK 1.8, Hadoop 2.7.3
Download links for all required packages are at the end of this post.
1. Create a virtual machine
Click through "File → New Virtual Machine → keep the defaults, Next → choose the Linux version, Next → set the virtual machine name and location, Next → keep the defaults, Next → Finish", as shown in the figures:
2. Configure a static IP
2.1 Open VMware's Virtual Network Editor and note the gateway address.
2.2 Edit the network config file with the command below: change BOOTPROTO from dhcp (dynamic addressing) to static, change ONBOOT from no to yes so the interface comes up at boot, and fill in the gateway (GATEWAY), subnet mask (NETMASK), and IP address (IPADDR) obtained above.

```
vi /etc/sysconfig/network-scripts/ifcfg-ens33

BOOTPROTO=static
ONBOOT=yes
GATEWAY=192.x.x.x
NETMASK=255.255.255.0
IPADDR=192.x.x.x
```
2.3 Run `service network restart` so the changes take effect.
2.4 Check the IP address with `ip a` or `ip addr` (the latter is the command on CentOS 7 and later). If the configured static IP appears as in the figure, the static IP is set up.
At this point external hostnames still cannot be pinged; the DNS resolvers must be configured as well.
2.5 With the same command as above, append DNS servers to the NIC config file:

```
vi /etc/sysconfig/network-scripts/ifcfg-ens33

DNS1=114.114.114.114
DNS2=8.8.8.8
```

2.6 Run `service network restart` again so the changes take effect.
Test with `ping www.baidu.com`; as shown in the figure, the external network is now reachable.
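Before moving on, it can save a round of debugging to verify that every key this section sets is actually present in the file (a misspelled key such as `onBOOT` is silently ignored). The sketch below runs against a throwaway mock file with made-up address values, not the real config:

```shell
# Sanity-check a NIC config for the keys this guide sets.
# /tmp/ifcfg-demo and the address values are hypothetical stand-ins;
# on a real node, point CFG at /etc/sysconfig/network-scripts/ifcfg-ens33.
CFG=/tmp/ifcfg-demo
cat > "$CFG" <<'EOF'
BOOTPROTO=static
ONBOOT=yes
GATEWAY=192.168.1.2
NETMASK=255.255.255.0
IPADDR=192.168.1.10
DNS1=114.114.114.114
EOF
for key in BOOTPROTO ONBOOT GATEWAY NETMASK IPADDR DNS1; do
    grep -q "^$key=" "$CFG" && echo "$key present" || echo "$key MISSING"
done
```

The anchored `^$key=` pattern is what catches case typos like `onboot=`, which the network scripts would otherwise ignore without any error.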
2.7 Change the hostname with `hostnamectl set-hostname 'xxxx'` (reboot for the change to take effect).
3. Connect to the server with an SSH client (supplementary)
Any client works: SecureCRT, MobaXterm, or Xshell. MobaXterm is used here.
Double-click the session you created, enter the password (choose to save it), and accept the host key. A successful connection looks like the figure:
4. Upload the required packages to the server

```
[root@hadoop01 ~]# cd /
[root@hadoop01 /]# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[root@hadoop01 /]# cd usr/
[root@hadoop01 usr]# ls
bin  etc  games  include  lib  lib64  libexec  local  sbin  share  src  tmp
# create a directory to hold the installer packages
[root@hadoop01 usr]# mkdir software_installer
[root@hadoop01 usr]# ls
bin  etc  games  include  lib  lib64  libexec  local  sbin  share  software_installer  src  tmp
[root@hadoop01 usr]# cd software_installer/
[root@hadoop01 software_installer]# mkdir java
[root@hadoop01 software_installer]# mkdir hadoop
[root@hadoop01 software_installer]# mkdir mysql
[root@hadoop01 software_installer]# ls
hadoop  java  mysql
[root@hadoop01 software_installer]#
```
If you are using MobaXterm, files can be uploaded through the file browser in the left sidebar; just drag and drop them from your local machine.
5. Install the JDK
The JDK is provided here as a .tar.gz archive; extract it with:

```
# -C sets the destination path
[root@hadoop01 java]# tar -zxvf jdk-8u211-linux-x64.tar.gz -C /usr/local/
```
Configure the environment variables:

```
[root@hadoop01 java]# cd /usr/local/jdk1.8.0_211/
# print the current path so it can be copied
[root@hadoop01 jdk1.8.0_211]# pwd
/usr/local/jdk1.8.0_211
[root@hadoop01 jdk1.8.0_211]# vi /etc/profile
# append at the end of the file:
# JAVA_HOME
export JAVA_HOME=/usr/local/jdk1.8.0_211
export PATH=$PATH:$JAVA_HOME/bin
# reload the profile
[root@hadoop01 jdk1.8.0_211]# source /etc/profile
# then check with java -version; output like the following means success
[root@hadoop01 jdk1.8.0_211]# java -version
java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)
```
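To see what those two `export` lines do, they can be replayed in any shell; the check below only inspects the `PATH` string, so it works even on a machine without the JDK installed (the path is the one used in this guide):

```shell
# Replay the /etc/profile additions and verify $JAVA_HOME/bin lands on PATH
export JAVA_HOME=/usr/local/jdk1.8.0_211
export PATH=$PATH:$JAVA_HOME/bin
case ":$PATH:" in
    *":$JAVA_HOME/bin:"*) echo "JAVA_HOME/bin is on PATH" ;;
    *)                    echo "PATH update failed" ;;
esac
```

Note that appending with `$PATH:$JAVA_HOME/bin` keeps the system binaries ahead of the JDK on the search path.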
6. Install MySQL 5.7
The online (yum) installation is used here: it makes it easy to switch versions and needs no extra configuration afterwards. The offline method is not recommended.
6.1 Download the installer package from the official site: https://dev.mysql.com/downloads/mysql/
6.2 Install the repository rpm:

```
[root@hadoop01 local]# cd /usr/software_installer/mysql/
[root@hadoop01 mysql]# ls
mysql80-community-release-el7-3.noarch.rpm
[root@hadoop01 mysql]# rpm -ivh mysql80-community-release-el7-3.noarch.rpm
```
6.3 Switch to the yum repository directory:

```
[root@hadoop01 mysql]# cd /etc/yum.repos.d/
[root@hadoop01 yum.repos.d]# ls
CentOS-Base.repo       CentOS-fasttrack.repo  CentOS-Vault.repo
CentOS-CR.repo         CentOS-Media.repo      mysql-community.repo
CentOS-Debuginfo.repo  CentOS-Sources.repo    mysql-community-source.repo
```
6.4 Edit `mysql-community.repo` to select the MySQL version: in the 8.0 section change enabled=1 to enabled=0, and in the 5.7 section change enabled=0 to enabled=1, as shown in the figure:
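If you prefer not to edit the file by hand, the same toggle can be scripted with `sed`. This is a sketch against a minimal mock copy of `mysql-community.repo` (the section names match the real file; the mock path and `name=` lines are stand-ins). On a real node you would point the two `sed` commands at `/etc/yum.repos.d/mysql-community.repo`:

```shell
# Flip the enabled flags: turn the MySQL 5.7 repo on and the 8.0 repo off.
# /tmp/mysql-community.repo.demo is a throwaway mock copy.
REPO=/tmp/mysql-community.repo.demo
cat > "$REPO" <<'EOF'
[mysql57-community]
name=MySQL 5.7 Community Server
enabled=0
[mysql80-community]
name=MySQL 8.0 Community Server
enabled=1
EOF
# within the 5.7 section (up to the 8.0 header), set enabled=0 -> 1
sed -i '/\[mysql57-community\]/,/^\[mysql80/ s/enabled=0/enabled=1/' "$REPO"
# from the 8.0 header to the end of file, set enabled=1 -> 0
sed -i '/\[mysql80-community\]/,$ s/enabled=1/enabled=0/' "$REPO"
cat "$REPO"
```

The address ranges keep each substitution confined to its own `[section]`, so the two flags cannot clobber each other.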
6.5 Install:

```
[root@hadoop01 yum.repos.d]# yum -y install mysql-community-server
```

When it finishes as in the figure, MySQL 5.7 is installed.
6.6 Start the MySQL service and look up the temporary password

```
[root@hadoop01 /]# service mysqld start
Redirecting to /bin/systemctl start mysqld.service
[root@hadoop01 /]# cd /var/lo
local/ lock/  log/
[root@hadoop01 /]# cd /var/log/
[root@hadoop01 log]# ll
total 632
drwxr-xr-x. 2 root   root      232 Sep 23  2021 anaconda
drwx------. 2 root   root       23 Sep 23  2021 audit
-rw-------. 1 root   root    17830 Sep 23 11:11 boot.log
-rw-------. 1 root   utmp        0 Sep 23  2021 btmp
drwxr-xr-x. 2 chrony chrony      6 Aug  8  2019 chrony
-rw-------. 1 root   root     1670 Sep 23 12:01 cron
-rw-r--r--. 1 root   root   123955 Sep 23 11:11 dmesg
-rw-r--r--. 1 root   root   123955 Sep 23  2021 dmesg.old
-rw-r-----. 1 root   root        0 Sep 23  2021 firewalld
-rw-r--r--. 1 root   root      193 Sep 23  2021 grubby_prune_debug
-rw-r--r--. 1 root   root   292000 Sep 23 12:09 lastlog
-rw-------. 1 root   root      750 Sep 23 12:09 maillog
-rw-------. 1 root   root   285474 Sep 23 12:12 messages
-rw-r-----. 1 mysql  mysql    4647 Sep 23 12:12 mysqld.log
drwxr-xr-x. 2 root   root        6 Sep 23  2021 rhsm
-rw-------. 1 root   root     7014 Sep 23 12:12 secure
-rw-------. 1 root   root        0 Sep 23  2021 spooler
-rw-------. 1 root   root    64000 Sep 23 12:09 tallylog
drwxr-xr-x. 2 root   root       23 Sep 23  2021 tuned
-rw-------. 1 root   root      719 Sep 23  2021 vmware-network.log
-rw-------. 1 root   root     3062 Sep 23 11:11 vmware-vgauthsvc.log.0
-rw-r--r--. 1 root   root     2957 Sep 23 11:11 vmware-vmsvc.log
-rw-rw-r--. 1 root   utmp     5376 Sep 23 11:56 wtmp
-rw-------. 1 root   root     2197 Sep 23 12:09 yum.log
[root@hadoop01 log]# cat mysqld.log
```

The temporary password appears in the log, as shown in the figure:
6.7 Log in to MySQL and change the password

```
[root@hadoop01 log]# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.35

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

# a direct change fails because the new password does not meet the
# password policy, so the policy has to be relaxed first
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY '123456';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
# lower the minimum password length
mysql> set global validate_password_length=4;
Query OK, 0 rows affected (0.00 sec)
# lower the password policy level
mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.00 sec)
# change the password
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';
Query OK, 0 rows affected (0.00 sec)
# flush privileges
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
```
6.8 At this point a local client still cannot connect to this server's database; remote access must be granted first:

```
mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
# allow root to log in from any host
mysql> update user set host='%' where user='root';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
# after that update, flush privileges:
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
# then run the grant statement:
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)
```
The local client can now connect to the database, as shown in the figure:
7. Map hostnames between the servers

```
[root@hadoop01 hadoop]# vi /etc/hosts
# append:
192.168.xx.110 hadoop01
192.168.xx.120 hadoop02
192.168.xx.130 hadoop03
```

8. Disable the firewall
```
# stop the firewall temporarily; it starts again after a reboot
[root@hadoop01 hadoop]# systemctl stop firewalld
# check the firewall status
[root@hadoop01 hadoop]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Thu 2021-09-23 12:01:48 EDT; 5s ago
     Docs: man:firewalld(1)
  Process: 719 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 719 (code=exited, status=0/SUCCESS)

Sep 23 11:11:49 hadoop01 systemd[1]: Starting firewalld - dynamic firewall daemon...
Sep 23 11:11:50 hadoop01 systemd[1]: Started firewalld - dynamic firewall daemon.
Sep 23 12:01:47 hadoop01 systemd[1]: Stopping firewalld - dynamic firewall daemon...
Sep 23 12:01:48 hadoop01 systemd[1]: Stopped firewalld - dynamic firewall daemon.
# disable the firewall permanently (it will no longer start at boot);
# to start it again, use systemctl start firewalld
[root@hadoop01 hadoop]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@hadoop01 hadoop]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

Sep 23 11:11:49 hadoop01 systemd[1]: Starting firewalld - dynamic firewall daemon...
Sep 23 11:11:50 hadoop01 systemd[1]: Started firewalld - dynamic firewall daemon.
Sep 23 12:01:47 hadoop01 systemd[1]: Stopping firewalld - dynamic firewall daemon...
Sep 23 12:01:48 hadoop01 systemd[1]: Stopped firewalld - dynamic firewall daemon.
[root@hadoop01 hadoop]#
```
9. Set up passwordless SSH login (supplementary)
9.1 On each server, first generate its own key pair (public and private key):

```
# press Enter at every prompt
[root@hadoop01 hadoop]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:fCz53Hr7rBSufE56rKZn/6nLT7+8nwkJtWOk5tv0hvs root@hadoop01
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|        o        |
|     . o + .     |
|      S o+.+     |
|      =ooo.o     |
|      oo=+o      |
|    .+BO+++o     |
|   .=*OO@XE*     |
+----[SHA256]-----+
[root@hadoop01 hadoop]#
```
As shown, the public and private keys have been generated. All three servers must generate their own key pair.
9.2 Distribute the public key to every node (remember to send a copy to the node itself, too):
```
[root@hadoop02 /]# ssh-copy-id hadoop01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'hadoop01 (192.xx.xx.xx)' can't be established.
ECDSA key fingerprint is SHA256:VhX+gclYY0o3MgV/cQmDbXVw0qazKGjK3HekZ8cFVIc.
ECDSA key fingerprint is MD5:ea:e0:fa:98:35:9b:e6:3b:5a:59:2a:a4:e9:5c:cb:d9.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop01's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop01'"
and check to make sure that only the key(s) you wanted were added.

[root@hadoop02 /]# ssh-copy-id hadoop02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'hadoop02 (192.xx.xx.xx)' can't be established.
ECDSA key fingerprint is SHA256:VhX+gclYY0o3MgV/cQmDbXVw0qazKGjK3HekZ8cFVIc.
ECDSA key fingerprint is MD5:ea:e0:fa:98:35:9b:e6:3b:5a:59:2a:a4:e9:5c:cb:d9.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop02's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop02'"
and check to make sure that only the key(s) you wanted were added.

[root@hadoop02 /]# ssh-copy-id hadoop03
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'hadoop03 (192.xx.xx.xx)' can't be established.
ECDSA key fingerprint is SHA256:VhX+gclYY0o3MgV/cQmDbXVw0qazKGjK3HekZ8cFVIc.
ECDSA key fingerprint is MD5:ea:e0:fa:98:35:9b:e6:3b:5a:59:2a:a4:e9:5c:cb:d9.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop03's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop03'"
and check to make sure that only the key(s) you wanted were added.

[root@hadoop02 /]#
```
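Because every node must run `ssh-copy-id` against all three hosts (itself included), the nine invocations are easy to loop. The sketch below is deliberately a dry run that only prints the commands; clear `DRY_RUN` on a real node to execute them (each run prompts for the remote root password once):

```shell
# Print (dry run) the key-distribution commands for every node in the cluster.
# Set DRY_RUN="" to really run them, which requires the keys from step 9.1.
DRY_RUN=echo
for host in hadoop01 hadoop02 hadoop03; do
    $DRY_RUN ssh-copy-id "$host"
done
```

Running the same loop on each of the three servers covers all nine copies this section performs by hand.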
9.3 Copy files between servers

```
# copy the JDK archive from hadoop01 to the same path on hadoop03
[root@hadoop01 /]# scp -r /usr/software_installer/java/jdk-8u211-linux-x64.tar.gz hadoop03:/usr/software_installer/java/
jdk-8u211-linux-x64.tar.gz                    100%  186MB  82.1MB/s   00:02
[root@hadoop01 /]#
```

10. Install and configure Hadoop
10.1 Download the Hadoop tarball and upload it to the server.
10.2 Extract:

```
[root@hadoop01 hadoop]# tar -zxvf hadoop-2.7.3.tar.gz -C /usr/local/
```
10.3 Edit the Hadoop configuration files

```
[root@hadoop01 hadoop]# cd /usr/local/hadoop-2.7.3/etc/hadoop/
[root@hadoop01 hadoop]# ll
total 152
-rw-r--r--. 1 root root  4436 Aug 17  2016 capacity-scheduler.xml
-rw-r--r--. 1 root root  1335 Aug 17  2016 configuration.xsl
-rw-r--r--. 1 root root   318 Aug 17  2016 container-executor.cfg
-rw-r--r--. 1 root root   774 Aug 17  2016 core-site.xml
-rw-r--r--. 1 root root  3589 Aug 17  2016 hadoop-env.cmd
-rw-r--r--. 1 root root  4224 Aug 17  2016 hadoop-env.sh
-rw-r--r--. 1 root root  2598 Aug 17  2016 hadoop-metrics2.properties
-rw-r--r--. 1 root root  2490 Aug 17  2016 hadoop-metrics.properties
-rw-r--r--. 1 root root  9683 Aug 17  2016 hadoop-policy.xml
-rw-r--r--. 1 root root   775 Aug 17  2016 hdfs-site.xml
-rw-r--r--. 1 root root  1449 Aug 17  2016 httpfs-env.sh
-rw-r--r--. 1 root root  1657 Aug 17  2016 httpfs-log4j.properties
-rw-r--r--. 1 root root    21 Aug 17  2016 httpfs-signature.secret
-rw-r--r--. 1 root root   620 Aug 17  2016 httpfs-site.xml
-rw-r--r--. 1 root root  3518 Aug 17  2016 kms-acls.xml
-rw-r--r--. 1 root root  1527 Aug 17  2016 kms-env.sh
-rw-r--r--. 1 root root  1631 Aug 17  2016 kms-log4j.properties
-rw-r--r--. 1 root root  5511 Aug 17  2016 kms-site.xml
-rw-r--r--. 1 root root 11237 Aug 17  2016 log4j.properties
-rw-r--r--. 1 root root   931 Aug 17  2016 mapred-env.cmd
-rw-r--r--. 1 root root  1383 Aug 17  2016 mapred-env.sh
-rw-r--r--. 1 root root  4113 Aug 17  2016 mapred-queues.xml.template
-rw-r--r--. 1 root root   758 Aug 17  2016 mapred-site.xml.template
-rw-r--r--. 1 root root    10 Aug 17  2016 slaves
-rw-r--r--. 1 root root  2316 Aug 17  2016 ssl-client.xml.example
-rw-r--r--. 1 root root  2268 Aug 17  2016 ssl-server.xml.example
-rw-r--r--. 1 root root  2191 Aug 17  2016 yarn-env.cmd
-rw-r--r--. 1 root root  4567 Aug 17  2016 yarn-env.sh
-rw-r--r--. 1 root root   690 Aug 17  2016 yarn-site.xml
```
(1) Set Hadoop's Java dependency

```
[root@hadoop01 hadoop]# vi hadoop-env.sh
# point JAVA_HOME at the JDK directory
export JAVA_HOME=/usr/local/jdk1.8.0_211
```
(2) Edit core-site.xml (this sets the default filesystem URI, which API clients such as an IDE also use)

```
[root@hadoop01 hadoop]# vi core-site.xml
```

Add between the `<configuration>` tags:

```
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-2.7.3/tmp</value>
</property>
```
(3) Edit hdfs-site.xml (HDFS storage: where metadata and block data are kept)

```
[root@hadoop01 hadoop]# vi hdfs-site.xml
```

Add between the `<configuration>` tags:

```
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop-2.7.3/data/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop-2.7.3/data/data</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.secondary.http.address</name>
    <value>hadoop01:50090</value>
</property>
```
(4) Configure mapred-site.xml

```
# rename the template to the live file name
[root@hadoop01 hadoop]# mv mapred-site.xml.template mapred-site.xml
[root@hadoop01 hadoop]# ls
capacity-scheduler.xml      hadoop-policy.xml        kms-log4j.properties        ssl-client.xml.example
configuration.xsl           hdfs-site.xml            kms-site.xml                ssl-server.xml.example
container-executor.cfg      httpfs-env.sh            log4j.properties            yarn-env.cmd
core-site.xml               httpfs-log4j.properties  mapred-env.cmd              yarn-env.sh
hadoop-env.cmd              httpfs-signature.secret  mapred-env.sh               yarn-site.xml
hadoop-env.sh               httpfs-site.xml          mapred-queues.xml.template
hadoop-metrics2.properties  kms-acls.xml             mapred-site.xml
hadoop-metrics.properties   kms-env.sh               slaves
[root@hadoop01 hadoop]# vi mapred-site.xml
```

Add between the `<configuration>` tags:

```
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
```
(5) Configure YARN resources

```
[root@hadoop01 hadoop]# vi yarn-site.xml
```

Add between the `<configuration>` tags:

```
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop01</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
```
(6) Set the worker (slave) nodes

```
[root@hadoop01 hadoop]# vi slaves
# replace the contents with:
hadoop02
hadoop03
```
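As an alternative to opening `vi`, the `slaves` file can be written non-interactively with a here-document. The sketch targets a throwaway path; on the real node you would write to `/usr/local/hadoop-2.7.3/etc/hadoop/slaves`:

```shell
# Write the worker list without an editor.
# /tmp/slaves.demo stands in for /usr/local/hadoop-2.7.3/etc/hadoop/slaves.
SLAVES=/tmp/slaves.demo
cat > "$SLAVES" <<'EOF'
hadoop02
hadoop03
EOF
cat "$SLAVES"
```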
(7) Configure the environment variables

```
[root@hadoop01 hadoop-2.7.3]# vi /etc/profile
# append:
# HADOOP_HOME
export HADOOP_HOME=/usr/local/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# reload
[root@hadoop01 hadoop-2.7.3]# source /etc/profile
```

11. Clone the servers
11.1 Shut down the machine to be cloned.
11.2 Follow the steps in the figures.
11.3 Notes
After cloning, the new machine has the same account and password as the original. Do not boot both machines at the same time: first change the cloned machine's IP address, otherwise the addresses will conflict.
12. Start Hadoop
12.1 Before starting the cluster for the first time, remember to format the NameNode:
```
[root@hadoop01 /]# hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

21/09/23 13:08:45 INFO namenode.NameNode: STARTUP_MSG:
21/09/23 13:08:45 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
21/09/23 13:08:45 INFO namenode.NameNode: createNameNode [-format]
21/09/23 13:08:46 WARN common.Util: Path /usr/local/hadoop-2.7.3/data/name should be specified as a URI in configuration files. Please update hdfs configuration.
21/09/23 13:08:46 WARN common.Util: Path /usr/local/hadoop-2.7.3/data/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-cbb0ccb6-039f-4d59-8dae-a254ebda7acf
21/09/23 13:08:46 INFO namenode.FSNamesystem: No KeyProvider found.
21/09/23 13:08:46 INFO namenode.FSNamesystem: fsLock is fair:true
21/09/23 13:08:46 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
21/09/23 13:08:46 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
21/09/23 13:08:46 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
21/09/23 13:08:46 INFO blockmanagement.BlockManager: The block deletion will start around 2021 Sep 23 13:08:46
21/09/23 13:08:46 INFO util.GSet: Computing capacity for map BlocksMap
21/09/23 13:08:46 INFO util.GSet: VM type       = 64-bit
21/09/23 13:08:46 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
21/09/23 13:08:46 INFO util.GSet: capacity      = 2^21 = 2097152 entries
21/09/23 13:08:46 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
21/09/23 13:08:46 INFO blockmanagement.BlockManager: defaultReplication         = 3
21/09/23 13:08:46 INFO blockmanagement.BlockManager: maxReplication             = 512
21/09/23 13:08:46 INFO blockmanagement.BlockManager: minReplication             = 1
21/09/23 13:08:46 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
21/09/23 13:08:46 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
21/09/23 13:08:46 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
21/09/23 13:08:46 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
21/09/23 13:08:46 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
21/09/23 13:08:46 INFO namenode.FSNamesystem: supergroup          = supergroup
21/09/23 13:08:46 INFO namenode.FSNamesystem: isPermissionEnabled = true
21/09/23 13:08:46 INFO namenode.FSNamesystem: HA Enabled: false
21/09/23 13:08:46 INFO namenode.FSNamesystem: Append Enabled: true
21/09/23 13:08:47 INFO util.GSet: Computing capacity for map INodeMap
21/09/23 13:08:47 INFO util.GSet: VM type       = 64-bit
21/09/23 13:08:47 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
21/09/23 13:08:47 INFO util.GSet: capacity      = 2^20 = 1048576 entries
21/09/23 13:08:47 INFO namenode.FSDirectory: ACLs enabled? false
21/09/23 13:08:47 INFO namenode.FSDirectory: XAttrs enabled? true
21/09/23 13:08:47 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
21/09/23 13:08:47 INFO namenode.NameNode: Caching file names occuring more than 10 times
21/09/23 13:08:47 INFO util.GSet: Computing capacity for map cachedBlocks
21/09/23 13:08:47 INFO util.GSet: VM type       = 64-bit
21/09/23 13:08:47 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
21/09/23 13:08:47 INFO util.GSet: capacity      = 2^18 = 262144 entries
21/09/23 13:08:47 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
21/09/23 13:08:47 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
21/09/23 13:08:47 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
21/09/23 13:08:47 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
21/09/23 13:08:47 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
21/09/23 13:08:47 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
21/09/23 13:08:47 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
21/09/23 13:08:47 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
21/09/23 13:08:47 INFO util.GSet: Computing capacity for map NameNodeRetryCache
21/09/23 13:08:47 INFO util.GSet: VM type       = 64-bit
21/09/23 13:08:47 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
21/09/23 13:08:47 INFO util.GSet: capacity      = 2^15 = 32768 entries
21/09/23 13:08:47 INFO namenode.FSImage: Allocated new BlockPoolId: BP-866472713-192.xx.xx.xx-1632416927282
21/09/23 13:08:47 INFO common.Storage: Storage directory /usr/local/hadoop-2.7.3/data/name has been successfully formatted.
21/09/23 13:08:47 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop-2.7.3/data/name/current/fsimage.ckpt_0000000000000000000 using no compression
21/09/23 13:08:47 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop-2.7.3/data/name/current/fsimage.ckpt_0000000000000000000 of size 351 bytes saved in 0 seconds.
21/09/23 13:08:47 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
21/09/23 13:08:47 INFO util.ExitUtil: Exiting with status 0
21/09/23 13:08:47 INFO namenode.NameNode: SHUTDOWN_MSG:
[root@hadoop01 /]#
```
12.2 Start the cluster

```
[root@hadoop01 /]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop01]
hadoop01: starting namenode, logging to /usr/local/hadoop-2.7.3/logs/hadoop-root-namenode-hadoop01.out
hadoop03: starting datanode, logging to /usr/local/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop03.out
hadoop02: starting datanode, logging to /usr/local/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop02.out
Starting secondary namenodes [hadoop01]
hadoop01: starting secondarynamenode, logging to /usr/local/hadoop-2.7.3/logs/hadoop-root-secondarynamenode-hadoop01.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.7.3/logs/yarn-root-resourcemanager-hadoop01.out
hadoop02: starting nodemanager, logging to /usr/local/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop02.out
hadoop03: starting nodemanager, logging to /usr/local/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop03.out
[root@hadoop01 /]#
```
After a successful start, the running daemons on each of the three servers look like the figure:
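A quick scripted way to check the daemon layout is to loop `jps` over SSH, relying on the passwordless login from step 9. As before, this sketch is a dry run that just prints the commands; with the layout in this guide, hadoop01 should show NameNode, SecondaryNameNode, and ResourceManager, while hadoop02 and hadoop03 each show DataNode and NodeManager:

```shell
# Print (dry run) the per-node daemon checks; clear DRY_RUN to execute them.
DRY_RUN=echo
for host in hadoop01 hadoop02 hadoop03; do
    $DRY_RUN ssh "$host" jps
done
```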
Open the web UI at 192.xx.xx.xx:50070 and you should see:
At this point, the Hadoop environment is fully set up!
13. HDFS shell commands

```
# create directories
[root@hadoop01 /]# hadoop fs -mkdir /software_installer
[root@hadoop01 /]# hadoop fs -mkdir /xx
# remove a directory
[root@hadoop01 /]# hadoop fs -rm -r /xx
21/09/23 13:19:22 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /xx
# upload files
[root@hadoop01 /]# hadoop fs -put /usr/software_installer/ /software_installer/
# move files
[root@hadoop01 /]# hadoop fs -mv /software_installer/software_installer/* /software_installer
# remove a directory
[root@hadoop01 /]# hadoop fs -rm -r /software_installer/software_install
rm: `/software_installer/software_install': No such file or directory
[root@hadoop01 /]# hadoop fs -rm -r /software_installer/software_installer
21/09/23 13:24:45 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /software_installer/software_installer
[root@hadoop01 /]#
```

Attachments
I have collected the software used above and uploaded it to my Aliyun Drive. If you have not registered yet, you can sign up through my invite link below; compared with registering directly from the app store, you get an extra 500 GB of space (and I receive a matching expansion).

Invite link: 阿里云盘下载
I shared "Hadoop环境搭建" via Aliyun Drive, which you can download at full speed; open the link in the 「阿里云盘」 app to get it.
Download link: Hadoop环境搭建软件包



