Basic environment configuration (jdk_env, hadoop_env)
export JAVA_HOME=/export/servers/jdk1.8.0_162
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/export/servers/hadoop-2.7.1
export PATH=$HADOOP_HOME/bin:$PATH

Check whether SSH login is already set up:
ssh localhost
By default this prompts for a password, so next we configure passwordless login.
ssh-keygen -t rsa        # generate a key pair (note: the option is -t, not -r)
ssh-copy-id localhost    # copy the public key to the given host

Configure core-site.xml:
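The key-generation step can be tried out safely with a throwaway key path. This is a sketch only: `-f` and `-N ""` are standard ssh-keygen options for setting the output file and an empty passphrase, and `/tmp/demo_rsa` is a hypothetical path used so the real `~/.ssh/id_rsa` is not overwritten.

```shell
# Remove any leftover demo key so ssh-keygen does not prompt to overwrite.
rm -f /tmp/demo_rsa /tmp/demo_rsa.pub

# Generate a passwordless RSA key pair at a throwaway path. On a real
# node you would omit -f and -N (accepting the defaults) and then run
# `ssh-copy-id localhost` as in the guide.
ssh-keygen -t rsa -N "" -f /tmp/demo_rsa -q

# Both the private and public key files should now exist.
ls /tmp/demo_rsa /tmp/demo_rsa.pub
```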
<configuration>
  <!-- default filesystem URI for the cluster -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node01:8020</value>
  </property>
  <!-- temporary directory for Hadoop data -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/export/servers/hadoop-2.7.1/hadoopDatas/tempDatas</value>
  </property>
  <!-- I/O buffer size -->
  <property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
  </property>
  <!-- enable the HDFS trash; deleted files can be recovered within this window (minutes) -->
  <property>
    <name>fs.trash.interval</name>
    <value>10080</value>
  </property>
</configuration>

Configure hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node01:50090</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>node01:50070</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///export/servers/hadoop-2.7.1/hadoopDatas/namenodeDatas,file:///export/servers/hadoop-2.7.1/hadoopDatas/namenodeDatas2</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///export/servers/hadoop-2.7.1/hadoopDatas/datanodeDatas,file:///export/servers/hadoop-2.7.1/hadoopDatas/datanodeDatas2</value>
  </property>
  <property>
    <name>dfs.namenode.edits.dir</name>
    <value>file:///export/servers/hadoop-2.7.1/hadoopDatas/nn/edits</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
</configuration>
Rename mapred-site.xml.template to mapred-site.xml, then configure mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>/export/servers/hadoop-2.7.1/share/hadoop/mapreduce/*:/export/servers/hadoop-2.7.1/share/hadoop/mapreduce/lib/*</value>
  </property>
</configuration>
Configure yarn-site.xml:

<configuration>
  <!-- auxiliary service used by MapReduce for shuffle -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <!-- hostname of the YARN ResourceManager -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>node01</value>
  </property>
</configuration>
Configure mapred-env.sh:
export JAVA_HOME=/export/servers/jdk1.8.0_162

Configure yarn-env.sh:
export JAVA_HOME=/export/servers/jdk1.8.0_162

Create the data directories:
mkdir -p /export/servers/hadoop-2.7.1/hadoopDatas/tempDatas
mkdir -p /export/servers/hadoop-2.7.1/hadoopDatas/namenodeDatas
mkdir -p /export/servers/hadoop-2.7.1/hadoopDatas/namenodeDatas2
mkdir -p /export/servers/hadoop-2.7.1/hadoopDatas/datanodeDatas
mkdir -p /export/servers/hadoop-2.7.1/hadoopDatas/datanodeDatas2
mkdir -p /export/servers/hadoop-2.7.1/hadoopDatas/nn/edits
mkdir -p /export/servers/hadoop-2.7.1/hadoopDatas/snn/name
mkdir -p /export/servers/hadoop-2.7.1/hadoopDatas/dfs/snn/edits

Format the NameNode:
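The eight mkdir commands above can be collapsed into a single loop. The sketch below uses a `/tmp` base path so it is safe to run anywhere; on the actual node you would set BASE to `/export/servers/hadoop-2.7.1/hadoopDatas` as in the guide.

```shell
# Sketch: create all hadoopDatas subdirectories in one loop.
# BASE is /tmp here for safety; on the cluster it would be the
# real hadoopDatas path from this guide.
BASE=/tmp/hadoopDatas-demo
for d in tempDatas namenodeDatas namenodeDatas2 datanodeDatas datanodeDatas2 \
         nn/edits snn/name dfs/snn/edits; do
  mkdir -p "$BASE/$d"
done
ls "$BASE"
```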
bin/hdfs namenode -format

Start HDFS and YARN:
sbin/start-dfs.sh
sbin/start-yarn.sh
# or start both at once:
sbin/start-all.sh

Why is there no NameNode process after configuring Hadoop? A fix found online:
1. Run stop-all.sh first.
2. Reformat the NameNode. Before doing so, delete the directory that hadoop.tmp.dir in core-site.xml points to, then recreate it as an empty directory, and run hadoop namenode -format.
3. Run start-all.sh.
That solved it~
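To confirm the fix worked, running `jps` on the node should list all five core daemons (NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager). The sketch below checks a captured sample of jps output; the sample text and PIDs are made up for illustration, and on a real node you would replace the variable with the output of `jps` itself.

```shell
# Sketch: verify the core daemons after start-all.sh. `jps` is only
# available on a node with the JDK installed, so here we grep a
# hypothetical captured sample of its output; on a real node use
# sample="$(jps)" instead.
sample="12001 NameNode
12102 DataNode
12233 SecondaryNameNode
12400 ResourceManager
12520 NodeManager"

for proc in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  if echo "$sample" | grep -q "$proc"; then
    echo "$proc running"
  else
    echo "$proc MISSING"
  fi
done
```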



