Hadoop Installation
1. Upload the Hadoop installation package
2. Extract Hadoop
[hadoop@master apps]$ tar -zxvf hadoop-2.8.3.tar.gz
Rename it:
[hadoop@master apps]$ mv hadoop-2.8.3 hadoop
[hadoop@master apps]$ ls
hadoop  hadoop-2.8.3.tar.gz  java  jdk-8u121-linux-x64.tar.gz
3. Configure the Hadoop environment variables
[hadoop@master apps]$ sudo vim /etc/profile
Below the existing Java configuration, add:
export HADOOP_HOME=/opt/apps/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
4. Reload the configuration
[hadoop@master apps]$ source /etc/profile
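To confirm the new variables took effect in the current shell, a quick check along these lines can help (the paths match the /opt/apps layout used in this guide):

```shell
# Re-create the profile additions and sanity-check them.
# /opt/apps/java and /opt/apps/hadoop are the paths used in this guide.
export JAVA_HOME=/opt/apps/java
export HADOOP_HOME=/opt/apps/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# PATH should now contain $HADOOP_HOME/bin; warn if it does not.
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "PATH OK" ;;
  *) echo "PATH missing $HADOOP_HOME/bin -- re-check /etc/profile" ;;
esac
```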
5. Test
[hadoop@master apps]$ hadoop version
Hadoop 2.8.3
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r b3fe56402d908019d99af1f1f4fc65cb1d1436a2
Compiled by jdu on 2017-12-05T03:43Z
Compiled with protoc 2.5.0
From source with checksum 9ff4856d824e983fa510d3f843e3f19d
This command was run using /home/hd/apps/hadoop/share/hadoop/common/hadoop-common-2.8.3.jar
6. Modify the configuration files under $HADOOP_HOME/etc/hadoop
cd /opt/apps/hadoop/etc/hadoop
Seven Hadoop configuration files need to be modified.
File 1: hadoop-env.sh
vim hadoop-env.sh
# Find JAVA_HOME and change it to:
export JAVA_HOME=/opt/apps/java
File 2: core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/apps/hadoop/tmpdata</value>
  </property>
</configuration>
File 3: hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>master:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/apps/hadoop/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/apps/hadoop/datanode</value>
  </property>
</configuration>
File 4: mapred-site.xml
mv mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
File 5: yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
File 6: slaves
[hadoop@master hadoop]$ vim slaves
master
slave01
slave02
File 7: yarn-env.sh
[hadoop@master hadoop]$ vim yarn-env.sh
export JAVA_HOME=/opt/apps/java
7. Format the NameNode (this initializes the HDFS metadata; run it once, on the master only)
hdfs namenode -format
or the older equivalent:
hadoop namenode -format
Note: DataNodes are not formatted; there is no "datanode -format" command. DataNodes initialize their own storage directories the first time they start.
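A successful format writes a current/VERSION file into dfs.namenode.name.dir (/opt/apps/hadoop/namenode in this guide). A small check along these lines can confirm it; `check_format` is a hypothetical helper for illustration, not a Hadoop command:

```shell
# check_format: report whether a NameNode metadata directory looks formatted.
# (Hypothetical helper; the path below is dfs.namenode.name.dir from hdfs-site.xml.)
check_format() {
  if [ -f "$1/current/VERSION" ]; then
    echo "formatted"
  else
    echo "not formatted"
  fi
}
check_format /opt/apps/hadoop/namenode
```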
8. Copy Hadoop to the worker nodes
[hadoop@master apps]$ scp -r hadoop hadoop@slave01:/opt/apps/
[hadoop@master apps]$ scp -r hadoop hadoop@slave02:/opt/apps/
[hadoop@master apps]$ scp /etc/profile hadoop@slave01:/etc/
[hadoop@master apps]$ scp /etc/profile hadoop@slave02:/etc/
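The four scp commands can also be written as a loop over the worker hostnames. Shown here as a dry run (each command is echoed rather than executed); drop the leading echo to actually copy, assuming passwordless SSH to slave01 and slave02 is already set up:

```shell
# Dry run: print the distribution commands for each worker node.
# Remove the leading "echo" to perform the copies for real.
for node in slave01 slave02; do
  echo scp -r /opt/apps/hadoop "hadoop@$node:/opt/apps/"
  echo scp /etc/profile "hadoop@$node:/etc/"
done
```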
9. Start the cluster services
start-dfs.sh    # start the distributed file system (HDFS)
start-yarn.sh   # start resource management (YARN)
stop-dfs.sh     # stop the distributed file system
stop-yarn.sh    # stop resource management
10. Check the Hadoop processes with jps
On the master (NameNode):
4505 NameNode
5177 ResourceManager
5578 Jps
4699 SecondaryNameNode
On a worker (DataNode):
3702 Jps
3304 DataNode
3405 NodeManager
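As a rough sketch, the expected master daemons can also be checked programmatically against the jps output; `check_daemons` is a hypothetical helper for illustration, not part of Hadoop:

```shell
# check_daemons: given jps output, report any expected master daemons that are missing.
# (Hypothetical helper; expects the daemons started by start-dfs.sh / start-yarn.sh.)
check_daemons() {
  out=$1
  missing=""
  for d in NameNode SecondaryNameNode ResourceManager; do
    # -w matches whole words, so "SecondaryNameNode" does not count as "NameNode".
    echo "$out" | grep -qw "$d" || missing="$missing $d"
  done
  if [ -z "$missing" ]; then
    echo "master daemons OK"
  else
    echo "missing:$missing"
  fi
}
check_daemons "$(jps 2>/dev/null)"
```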



