Before configuring, make sure every virtual machine can ping the others and the local host, that the firewall is disabled on every node, and that the Java and Hadoop environments are ready.
| Host | Roles |
| --- | --- |
| Master01 | NameNode, DataNode |
| Slave01 | DataNode |
| Slave02 | DataNode |
The files to configure are as follows:
hadoop-env.sh
core-site.xml
hdfs-site.xml
yarn-site.xml
slaves
mapred-site.xml
core-site.xml:

```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://Master01:8020</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/root/software/hadoop-2.7.1/hdfs/tmp</value>
    </property>
</configuration>
```
hdfs-site.xml:

```xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.block.size</name>
        <value>134217728</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///root/software/hadoop-2.7.1/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///root/software/hadoop-2.7.1/hdfs/data</value>
    </property>
    <property>
        <name>fs.checkpoint.dir</name>
        <value>file:///root/software/hadoop-2.7.1/hdfs/cname</value>
    </property>
    <property>
        <name>fs.checkpoint.edits.dir</name>
        <value>file:///root/software/hadoop-2.7.1/hdfs/cname</value>
    </property>
    <property>
        <name>dfs.http.address</name>
        <value>Master01:50070</value>
    </property>
    <property>
        <name>dfs.secondary.http.address</name>
        <value>Slave01:50090</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>
```
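Note that `dfs.block.size` 134217728 is simply 128 MB (128 × 1024 × 1024). The two config files above also reference several local directories under `/root/software/hadoop-2.7.1/hdfs`; Hadoop creates most of them itself, but pre-creating them on each node avoids permission surprises. A minimal sketch, using a relative `./hdfs` prefix purely for illustration:

```shell
# Sketch: pre-create the local directories named in core-site.xml / hdfs-site.xml.
# The real prefix in the configs is /root/software/hadoop-2.7.1/hdfs;
# a relative ./hdfs is used here purely for illustration.
BASE=./hdfs
mkdir -p "$BASE/tmp" "$BASE/name" "$BASE/data" "$BASE/cname"
ls "$BASE"
```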
yarn-site.xml:

```xml
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>Master01</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>Master01:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>Master01:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>Master01:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>Master01:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>Master01:8088</value>
    </property>
</configuration>
```
mapred-site.xml:

```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <final>true</final>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>Master01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>Master01:19888</value>
    </property>
</configuration>
```
hadoop-env.sh: set `JAVA_HOME` to an explicit JDK installation path, since the inherited `${JAVA_HOME}` is often not visible when daemons are launched over SSH.
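A sketch of that edit, operating on a local copy of the file; the JDK path below is a placeholder assumption, not taken from this document:

```shell
# Sketch: set an explicit JAVA_HOME in hadoop-env.sh.
# The JDK path is a placeholder assumption -- substitute your own install path.
HADOOP_ENV=./hadoop-env.sh   # real location: $HADOOP_HOME/etc/hadoop/hadoop-env.sh
echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk' >> "$HADOOP_ENV"
grep '^export JAVA_HOME' "$HADOOP_ENV"
```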
slaves (one worker hostname per line):

```
Master01
Slave01
Slave02
```
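The slaves file can be generated in one line; a sketch, written to the current directory for illustration:

```shell
# Sketch: write the slaves file (one worker hostname per line).
# Written to ./slaves here; the real location is $HADOOP_HOME/etc/hadoop/slaves.
printf '%s\n' Master01 Slave01 Slave02 > ./slaves
cat ./slaves
```

After copying the configuration files to every node, the cluster is brought up on Master01 with the standard Hadoop 2.x commands: `hdfs namenode -format` (first start only), then `start-dfs.sh` and `start-yarn.sh`; running `jps` on each node should then show the daemons listed in the table above.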



