
Hadoop Fully Distributed Cluster Setup


Prerequisites

Three Linux machines, each with yum and networking already configured.
Upload the JDK and Hadoop installation packages to master.

1. Set the hostnames

On master, run:

hostnamectl set-hostname master
bash

On slave1, run:

hostnamectl set-hostname slave1
bash

On slave2, run:

hostnamectl set-hostname slave2
bash

(Running bash starts a new shell so the prompt picks up the new hostname.)
2. Host mapping

Run on all three machines:

vi /etc/hosts

Append at the end of the file:

192.168.26.148 master
192.168.26.149 slave1
192.168.26.150 slave2
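The same mappings can also be appended with a small idempotent helper, so re-running it never duplicates entries. This is a sketch, demonstrated on a scratch file rather than the real /etc/hosts; the IPs are the ones used in this guide:

```shell
# add_host_entry FILE "IP NAME": append the mapping only if it is not already present.
add_host_entry() {
    grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

# Demonstrated on a scratch file; on the real machines pass /etc/hosts instead.
hosts=$(mktemp)
add_host_entry "$hosts" "192.168.26.148 master"
add_host_entry "$hosts" "192.168.26.149 slave1"
add_host_entry "$hosts" "192.168.26.150 slave2"
add_host_entry "$hosts" "192.168.26.149 slave1"   # duplicate call: no effect
```

Running the helper a second time leaves the file unchanged, which makes it safe to include in a provisioning script.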
3. Passwordless SSH

On master only, run:

ssh-keygen -t rsa

Press Enter three times; a key randomart image roughly like the following will appear (yours will differ):

+--[ RSA 2048]----+
|  .o.+.   ..     |
|. . o..   .E.    |
|.o.++  . * .     |
|..+o..  + =      |
|   +.   S+       |
|    .. .  .      |
|      .          |
|                 |
|                 |
+-----------------+
ssh-copy-id -i /root/.ssh/id_rsa.pub master
ssh-copy-id -i /root/.ssh/id_rsa.pub slave1
ssh-copy-id -i /root/.ssh/id_rsa.pub slave2

For each command, answer yes and then enter the root user's password when prompted.
Then verify each login in turn:

ssh master
ssh slave1
ssh slave2

Exit (exit) after each login before testing the next host.
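Under the hood, ssh-copy-id appends the public key to the target account's ~/.ssh/authorized_keys and fixes the permissions. A simplified local sketch of that mechanism, using a scratch directory instead of a real remote host (the key text here is a placeholder):

```shell
# Simulate what ssh-copy-id does on the remote side, against a scratch "home" dir.
remote_home=$(mktemp -d)
pubkey="ssh-rsa AAAAB3...example root@master"   # placeholder public key text

mkdir -p "$remote_home/.ssh"
chmod 700 "$remote_home/.ssh"
# Append the key only if it is not already authorized (ssh-copy-id also de-duplicates).
grep -qxF "$pubkey" "$remote_home/.ssh/authorized_keys" 2>/dev/null \
    || echo "$pubkey" >> "$remote_home/.ssh/authorized_keys"
chmod 600 "$remote_home/.ssh/authorized_keys"
```

The strict 700/600 permissions matter: sshd refuses to honor an authorized_keys file that is writable by other users.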

4. Disable the firewall

Run on all three machines:

systemctl stop firewalld
systemctl disable firewalld

To check the firewall status:

systemctl status firewalld
5. Install the JDK

On master only, extract the archive:

tar -zxvf jdk-8u152-linux-x64.tar.gz -C /usr/

Then rename the extracted directory (typically jdk1.8.0_152 for this release) to the simpler name used below:

mv /usr/jdk1.8.0_152 /usr/jdk

6. Install Hadoop

On master only, extract the archive:

tar -zxvf hadoop-2.7.1.tar.gz -C /usr

Then rename the extracted directory to the name used below:

mv /usr/hadoop-2.7.1 /usr/hadoop

7. Configure Hadoop

On master only.
① Enter the configuration directory:

cd /usr/hadoop/etc/hadoop

② core-site.xml


 
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/log/hadoop/tmp</value>
  </property>
</configuration>

③ hadoop-env.sh

export JAVA_HOME=/usr/jdk

④ hdfs-site.xml



<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///data/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>


⑤ mapred-site.xml
This file does not exist by default; create it by copying the provided template:

cp mapred-site.xml.template mapred-site.xml


<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>


⑥ yarn-site.xml


 
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>${yarn.resourcemanager.hostname}:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>${yarn.resourcemanager.hostname}:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>${yarn.resourcemanager.hostname}:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address</name>
    <value>${yarn.resourcemanager.hostname}:8090</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>${yarn.resourcemanager.hostname}:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>${yarn.resourcemanager.hostname}:8033</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/hadoop/yarn/local</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/data/tmp/logs</value>
  </property>
  <property>
    <name>yarn.log.server.url</name>
    <value>http://master:19888/jobhistory/logs/</value>
    <description>URL for job history server</description>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>1</value>
  </property>
</configuration>


⑦ yarn-env.sh

export JAVA_HOME=/usr/jdk

⑧ slaves
Delete the existing contents and add:

slave1
slave2

⑨ Configure environment variables

vi /etc/profile

Append:

#jdk
export JAVA_HOME=/usr/jdk
export PATH=$PATH:$JAVA_HOME/bin

#hadoop
export HADOOP_HOME=/usr/hadoop
export PATH=$HADOOP_HOME/bin:$PATH:$HADOOP_HOME/sbin

Reload the environment variables:

source /etc/profile
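Note the ordering in the PATH line above: $HADOOP_HOME/bin is placed before the existing $PATH, so this cluster's hadoop binary takes priority over any other copy on the system. A quick sketch of that precedence rule, using throwaway directories instead of the real install paths:

```shell
# Two directories each provide a command named "tool"; the one earlier in PATH wins.
first=$(mktemp -d)
second=$(mktemp -d)
printf '#!/bin/sh\necho from-first\n'  > "$first/tool"
printf '#!/bin/sh\necho from-second\n' > "$second/tool"
chmod +x "$first/tool" "$second/tool"

PATH="$first:$second:$PATH"
result=$(tool)
```

The shell searches PATH entries left to right and stops at the first match, which is why prepending $HADOOP_HOME/bin guarantees the cluster binaries are the ones found.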
8. Distribute files to the slaves

JDK:

scp -r /usr/jdk slave1:/usr/
scp -r /usr/jdk slave2:/usr/

Hadoop:

scp -r /usr/hadoop slave1:/usr/
scp -r /usr/hadoop slave2:/usr/

Environment variables:

scp -r /etc/profile slave1:/etc/profile
scp -r /etc/profile slave2:/etc/profile

Then reload the environment variables on slave1 and slave2 as well:

source /etc/profile
9. Format the NameNode (on master; run once)
hdfs namenode -format
10. Start the cluster (on master)
start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver
11. Check the cluster status

① jps
On master, jps should list:

NameNode
SecondaryNameNode
JobHistoryServer
Jps
ResourceManager

On each slave:

Jps
DataNode
NodeManager

② Web UIs (open in a browser):

http://192.168.26.148:50070
http://192.168.26.148:8088
12. Run the MapReduce word-count example

Create a local file:

cd /root
vi 1.txt

Write the following text:

Give me the strength lightly to bear my joys and sorrows.Give me the strength to make my love fruitful in service.Give me the strength never to disown the poor or bend my knees before insolent might.Give me the strength to raise my mind high above daily trifles.And give me the strength to surrender my strength to thy will with love.

Create an HDFS directory and upload the file:

hadoop fs -mkdir /input
hadoop fs -put /root/1.txt /input

Run the job:

cd /usr/hadoop/share/hadoop/mapreduce
hadoop jar hadoop-mapreduce-examples-2.7.1.jar wordcount  /input/1.txt /output

Adjust hadoop-mapreduce-examples-2.7.1.jar to match your actual Hadoop version.
View the results:

hadoop fs -ls /output  
hadoop fs -cat /output/part-r-00000

The output should look like this:

Give	1
above	1
and	1
bear	1
before	1
bend	1
daily	1
disown	1
fruitful	1
give	1
high	1
in	1
insolent	1
joys	1
knees	1
lightly	1
love	1
love.	1
make	1
me	5
might.Give	1
mind	1
my	5
never	1
or	1
poor	1
raise	1
service.Give	1
sorrows.Give	1
strength	6
surrender	1
the	6
thy	1
to	6
trifles.And	1
will	1
with	1
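As a sanity check, the same counts can be reproduced locally with standard Unix tools. This mimics wordcount's whitespace tokenization (which is why fused tokens like sorrows.Give stay joined); it is not Hadoop itself:

```shell
# Tokenize on spaces, then count occurrences of each distinct token,
# just as the wordcount example does.
text='Give me the strength lightly to bear my joys and sorrows.Give me the strength to make my love fruitful in service.Give me the strength never to disown the poor or bend my knees before insolent might.Give me the strength to raise my mind high above daily trifles.And give me the strength to surrender my strength to thy will with love.'
printf '%s\n' "$text" | tr ' ' '\n' | sort | uniq -c | sort -k2
```

If the pipeline's counts for words such as "strength" (6) and "me" (5) match the part-r-00000 output above, the job ran correctly.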
Reprinted from www.mshxw.com. Original article: https://www.mshxw.com/it/287943.html