Preface
Having successfully compiled the Hadoop 3.3.1 source on an Ubuntu 20 desktop (all of the build commands below went through), the next step is to locate the build output and start a single-node Hadoop cluster.
Building distributions:

Create binary distribution without native code and without documentation:
  $ mvn package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true

Create binary distribution with native code and with documentation:
  $ mvn package -Pdist,native,docs -DskipTests -Dtar

Create source distribution:
  $ mvn package -Psrc -DskipTests

Create source and binary distributions with native code and documentation:
  $ mvn package -Pdist,native,docs,src -DskipTests -Dtar

Create a local staging version of the website (in /tmp/hadoop-site)
  $ mvn clean site -Preleasedocs; mvn site:stage -DstagingDirectory=/tmp/hadoop-site

Note that the site needs to be built in a second pass after other artifacts.
The compiled distribution directory
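The build drops its output under hadoop-dist/target in the source tree. A quick way to confirm, assuming the source tree sits at /hadoop-3.3.1-src as in the steps below:

$ ls /hadoop-3.3.1-src/hadoop-dist/target/

The unpacked distribution is the hadoop-3.3.1 directory; since the build was run with -Dtar, a hadoop-3.3.1.tar.gz tarball should sit next to it.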
Reference documentation
The key material is the following section of BUILDING.txt in the source tree:
Installing Hadoop
Look for these HTML files after you build the document by the above commands.
* Single Node Setup:
hadoop-project-dist/hadoop-common/SingleCluster.html
* Cluster Setup:
hadoop-project-dist/hadoop-common/ClusterSetup.html
The SingleCluster.md.vm file
This file can be found by searching the source tree; the steps below simply follow its contents.
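If you are unsure where it lives, a find from the source root locates it; in the 3.3.1 tree it should be under hadoop-common-project/hadoop-common/src/site/markdown/:

$ find /hadoop-3.3.1-src -name SingleCluster.md.vm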
Step 1
Install ssh and pdsh:
$ sudo apt-get install ssh
$ sudo apt-get install pdsh
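Since pdsh will be driven over ssh (see Step 7), the SSH server must be running. On Ubuntu the service is named ssh and can be checked, and started if needed, with systemctl:

$ sudo systemctl status ssh
$ sudo systemctl start ssh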
Step 2
As root, change into /hadoop-3.3.1-src/hadoop-dist/target/hadoop-3.3.1 (the distribution produced by the build) and make sure the JAVA_HOME variable points to a usable JDK.
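A quick sanity check before going further; bin/hadoop version should print 3.3.1 if both the distribution and JAVA_HOME are in order:

$ cd /hadoop-3.3.1-src/hadoop-dist/target/hadoop-3.3.1
$ echo $JAVA_HOME
$ bin/hadoop version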
Step 3
Edit etc/hadoop/core-site.xml and etc/hadoop/hdfs-site.xml. Per SingleCluster.md.vm, core-site.xml gets fs.defaultFS and hdfs-site.xml gets dfs.replication:

etc/hadoop/core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

etc/hadoop/hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
Step 4
Set up passwordless SSH to localhost:
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
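Per SingleCluster.md.vm, verify that ssh to localhost now works without a passphrase before moving on:

$ ssh localhost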
Step 5
Format the NameNode:
bin/hdfs namenode -format
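On success the format run ends with a log line noting that the storage directory has been successfully formatted. Assuming the default hadoop.tmp.dir (/tmp/hadoop-${user.name}, so /tmp/hadoop-root when running as root), the fresh NameNode metadata can be seen at:

$ ls /tmp/hadoop-root/dfs/name/current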
Step 6
Edit sbin/start-dfs.sh and sbin/stop-dfs.sh, adding the following variables at the top (needed because the daemons here are started as root):
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
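These variables are what Hadoop 3's startup scripts check before they will operate on a daemon as root; without them, start-dfs.sh aborts with errors along these lines (exact wording may vary by version):

ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.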
Step 7
Edit etc/hadoop/hadoop-env.sh and add the following variables:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_144
export PDSH_RCMD_TYPE=ssh
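The PDSH_RCMD_TYPE=ssh export matters because pdsh defaults to rsh as its remote command backend, which is usually not installed and makes start-dfs.sh fail with connection errors. To confirm the JAVA_HOME export is being picked up, Hadoop 3's envvars subcommand prints the computed environment:

$ bin/hadoop envvars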
Step 8
Start HDFS:
sbin/start-dfs.sh
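If startup succeeds, jps (shipped with the JDK) should list the three HDFS daemons, NameNode, DataNode and SecondaryNameNode:

$ jps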
Step 9
Open http://localhost:9870/ to reach the HDFS (NameNode) web UI; 9870 is the default NameNode UI port in Hadoop 3.x.
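As a final smoke test, the follow-up commands from SingleCluster.md.vm create a home directory in HDFS and list it (substitute the actual username if not running as root):

$ bin/hdfs dfs -mkdir -p /user/root
$ bin/hdfs dfs -ls /user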
Summary
Following the steps above should bring up a single-node HDFS instance without much trouble.