
Hadoop Big Data Environment Setup (Hadoop Platform Setup)

First, install the JDK and Hadoop.
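The install step is stated only briefly above. A hedged sketch of unpacking Hadoop to the path used throughout this guide (the tarball filename is an assumption; the JDK is assumed to be installed already):

```shell
# Unpack Hadoop 3.3.1 to /opt/module, the path this guide uses below.
# The tarball name hadoop-3.3.1.tar.gz is an assumption.
mkdir -p /opt/module
tar -zxf hadoop-3.3.1.tar.gz -C /opt/module/

# Confirm the JDK and Hadoop are usable:
java -version
/opt/module/hadoop-3.3.1/bin/hadoop version
```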

cd $HADOOP_HOME/etc/hadoop

=============core-site.xml

<configuration>
	<property>
		<name>fs.default.name</name>
		<value>hdfs://master:9000</value>
	</property>
	<property>
		<name>hadoop.tmp.dir</name>
		<value>file:/opt/module/hadoop-3.3.1/temp</value>
	</property>
	<property>
		<name>io.file.buffer.size</name>
		<value>131072</value>
	</property>
</configuration>

===========hdfs-site.xml

<configuration>
	<property>
		<name>dfs.namenode.name.dir</name>
		<value>file:/opt/module/hadoop-3.3.1/data/dfs/name</value>
	</property>
	<property>
		<name>dfs.datanode.data.dir</name>
		<value>file:/opt/module/hadoop-3.3.1/data/dfs/data</value>
	</property>
	<property>
		<name>dfs.replication</name>
		<value>2</value>
	</property>
	<property>
		<name>dfs.permissions</name>
		<value>false</value>
		<description>Disable permission checking so no permissions are needed.</description>
	</property>
	<property>
		<name>dfs.namenode.http-address</name>
		<value>master:50070</value>
	</property>
	<property>
		<name>dfs.namenode.secondary.http-address</name>
		<value>slave:9868</value>
	</property>
</configuration>

==============yarn-site.xml

<configuration>
	<property>
		<name>yarn.resourcemanager.hostname</name>
		<value>slave</value>
	</property>
	<property>
		<description>The address of the applications manager interface in the RM.</description>
		<name>yarn.resourcemanager.address</name>
		<value>slave:8032</value>
	</property>
	<property>
		<description>The address of the scheduler interface.</description>
		<name>yarn.resourcemanager.scheduler.address</name>
		<value>slave:8030</value>
	</property>
	<property>
		<description>The http address of the RM web application.</description>
		<name>yarn.resourcemanager.webapp.address</name>
		<value>slave:18088</value>
	</property>
	<property>
		<description>The https address of the RM web application.</description>
		<name>yarn.resourcemanager.webapp.https.address</name>
		<value>slave:18090</value>
	</property>
	<property>
		<name>yarn.resourcemanager.resource-tracker.address</name>
		<value>slave:8031</value>
	</property>
	<property>
		<description>The address of the RM admin interface.</description>
		<name>yarn.resourcemanager.admin.address</name>
		<value>slave:8033</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
</configuration>

=========mapred-site.xml

<configuration>
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
	<property>
		<name>mapred.job.tracker</name>
		<value>master:9001</value>
	</property>
</configuration>

==========Only the worker node information needs to be added here
vim /opt/module/hadoop-3.3.1/etc/hadoop/workers

Layout: master : master_ip
worker : slave_ip

Add slave_ip (in this layout, only this IP needs to be added to the workers file on both nodes)
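The step above can be sketched as a one-liner (slave_ip is a placeholder for the worker's real address; the file path shown in the comment is the one used throughout this guide):

```shell
# Write the single worker entry described above. "slave_ip" is a
# placeholder; substitute the DataNode host's real IP or hostname.
WORKERS_FILE=workers    # on a real node: /opt/module/hadoop-3.3.1/etc/hadoop/workers
printf '%s\n' 'slave_ip' > "$WORKERS_FILE"
cat "$WORKERS_FILE"
```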

=====Apply the changes
source /etc/profile

======== Configuration file location
/opt/module/hadoop-3.3.1/etc/hadoop

====Add the host list. Each node only needs entries for the other nodes; it does not need to add its own IP.
vi /etc/hosts
master_ip master
slave_ip slave

========The NameNode must be formatted before it is started for the first time
cd /opt/module/hadoop-3.3.1/
/opt/module/hadoop-3.3.1/bin/hdfs namenode -format

=====Configure these environment variables in /etc/profile
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
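These exports reference $HADOOP_HOME, which /etc/profile must also define. A minimal sketch assuming the install paths used in this guide (the JDK path is hypothetical; adjust to your actual JDK install):

```shell
# Hypothetical /etc/profile fragment; JAVA_HOME path is an assumption.
export JAVA_HOME=/opt/module/jdk1.8.0_212
export HADOOP_HOME=/opt/module/hadoop-3.3.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
echo "$HADOOP_HOME"
```

After editing, run `source /etc/profile` so the variables take effect in the current shell.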

=====Share the public keys between the nodes so they can reach each other by hostname.
Passwordless SSH login
Passwordless SSH removes the need for a password when the servers access each other. Run the following command once on each of the two servers; it generates a key pair under /root/.ssh.
ssh-keygen -t rsa

#On the slave, send id_rsa.pub to the master and rename it
scp id_rsa.pub root@master:~/.ssh/id_rsa.pub.slave

On the master, under /root/.ssh, append both id_rsa.pub and id_rsa.pub.slave to authorized_keys.
cat id_rsa.pub >> authorized_keys
cat id_rsa.pub.slave >> authorized_keys

Then copy authorized_keys back to slave
scp authorized_keys root@slave:~/.ssh

Finally, fix the file permissions.

chmod 755 ~
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
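The scp/cat/chmod sequence above can usually be replaced by ssh-copy-id, which appends the public key and sets the permissions in one step (run after ssh-keygen on each node; hostnames follow the /etc/hosts entries above):

```shell
# On master:
ssh-copy-id root@slave
# On slave:
ssh-copy-id root@master

# Verify passwordless login works in both directions:
ssh root@slave hostname
```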

====Add the following to $HADOOP_HOME/etc/hadoop/hadoop-env.sh (Hadoop 3 refuses to start daemons as root unless the corresponding *_USER variables are set)
export HADOOP_SHELL_EXECNAME=root
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

====Start (run start-dfs.sh and start-yarn.sh separately, or start-all.sh, which launches both)
/opt/module/hadoop-3.3.1/sbin/start-dfs.sh
/opt/module/hadoop-3.3.1/sbin/start-yarn.sh
/opt/module/hadoop-3.3.1/sbin/start-all.sh

====Stop (likewise, stop-all.sh stops both)
/opt/module/hadoop-3.3.1/sbin/stop-dfs.sh
/opt/module/hadoop-3.3.1/sbin/stop-yarn.sh
/opt/module/hadoop-3.3.1/sbin/stop-all.sh
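After starting, jps can confirm the daemons are up on each node. Given the role split configured above (NameNode on master; ResourceManager, SecondaryNameNode, and the worker daemons on slave), roughly the following should appear:

```shell
jps
# master: NameNode (plus Jps itself)
# slave:  DataNode, NodeManager, SecondaryNameNode, ResourceManager
```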

=====Web UI addresses

http://master:50070/    (NameNode, per dfs.namenode.http-address)
http://slave:18088/     (YARN ResourceManager, per yarn.resourcemanager.webapp.address)

Reprinted from www.mshxw.com
Original article: https://www.mshxw.com/it/772356.html