- Basic environment setup
- File configuration
- core-site.xml
- hdfs-site.xml
- mapred-site.xml
- yarn-site.xml
- Explanation
- Related commands:
File configuration
=========================================================
core-site.xml
<configuration>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>192.168.120.150:2181,192.168.120.151:2181,192.168.120.152:2181</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop01/tmp</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
  </property>
</configuration>
=========================================================
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/hadoop01/namenode_data</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/hadoop01/datanode_data</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>192.168.120.150:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>192.168.120.150:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>192.168.120.151:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>192.168.120.151:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://192.168.120.150:8485;192.168.120.151:8485;192.168.120.152:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/hadoop01/journalnode</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence
shell(/bin/true)</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
</configuration>
=========================================================
mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>192.168.120.150:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>192.168.120.150:19888</value>
  </property>
  <property>
    <name>mapreduce.job.ubertask.enable</name>
    <value>true</value>
  </property>
</configuration>
=========================================================
yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>hayarn</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>192.168.120.151</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>192.168.120.152</value>
  </property>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>192.168.120.150:2181,192.168.120.151:2181,192.168.120.152:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.120.152</value>
  </property>
</configuration>
=========================================================
Explanation
Hadoop 2.6.0
ZooKeeper 3.4.5
192.168.120.150 ===> master, also referred to as "hadoop" in places in this article
192.168.120.151 ===> slave1
192.168.120.152 ===> slave2
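The IP-to-hostname mapping above is normally mirrored in /etc/hosts on every node so the hostnames resolve; a minimal sketch (hostnames taken from the mapping above):

```
# /etc/hosts (same on all three machines)
192.168.120.150 master
192.168.120.151 slave1
192.168.120.152 slave2
```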
Start ZooKeeper (initialization)
zkServer.sh start    (run on all three machines)
zkServer.sh status
hadoop-daemon.sh start journalnode    (start the JournalNode)
hdfs namenode -format    (format; master only)
hdfs namenode -bootstrapStandby    (run on slave1 to sync the NameNode metadata)
Alternatively, copy the namenode_data directory under /hadoop01 to slave1:
scp -r /hadoop01/namenode_data slave1:/hadoop01/
hdfs zkfc -formatZK    (format the ZKFC on one of the NameNode hosts: master or slave1)
start-dfs.sh    (start the HDFS services; run on master)
start-yarn.sh    (start the YARN services; run on slave2)
Start the job history server on master:
mr-jobhistory-daemon.sh start historyserver
Start the standby ResourceManager on slave1:
yarn-daemon.sh start resourcemanager
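The first-boot sequence above can be condensed into one script. This is a dry-run sketch, not part of the original notes: run() only echoes each step, so the order can be reviewed safely; on a live cluster replace the echo with actual execution (hostnames master/slave1/slave2 are the ones defined in the Explanation section).

```shell
#!/bin/sh
# Dry-run sketch of the HA cluster first boot. run() echoes instead of executing.
run() { echo "$*"; }

# 1. ZooKeeper on all three machines
for host in master slave1 slave2; do
  run ssh "$host" zkServer.sh start
done

# 2. JournalNodes on all three machines (they back the qjournal:// shared edits dir)
for host in master slave1 slave2; do
  run ssh "$host" hadoop-daemon.sh start journalnode
done

# 3. Format the active NameNode (master only), sync slave1, format the ZKFC
run hdfs namenode -format
run ssh slave1 hdfs namenode -bootstrapStandby
run hdfs zkfc -formatZK

# 4. HDFS from master, YARN from slave2
run start-dfs.sh
run ssh slave2 start-yarn.sh

# 5. JobHistory server on master, standby ResourceManager on slave1
run mr-jobhistory-daemon.sh start historyserver
run ssh slave1 yarn-daemon.sh start resourcemanager
```

Note the ordering constraint: the JournalNodes must be up before `hdfs namenode -format`, because formatting writes the initial edits to the shared qjournal directory.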
To test failover, use jps to find the process ID of the active NameNode (or ResourceManager) and kill it (jps lists PIDs, not port numbers):
kill -9 <PID>
Open the web UI and check that the standby has taken over.
hdfs haadmin -getServiceState nn1    (check NameNode state)
yarn rmadmin -getServiceState rm1    (check ResourceManager state)
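The two state queries above can be wrapped in a small helper that reports which HA member is currently active. This is a hypothetical sketch, not from the original notes; it assumes the query command prints "active" or "standby" for each ID, which is what the hdfs/yarn CLIs above do.

```shell
#!/bin/sh
# active_of <query-cmd> <id...>: print the first ID whose state is "active".
# On the cluster (commands from the notes above):
#   active_of "hdfs haadmin -getServiceState" nn1 nn2
#   active_of "yarn rmadmin -getServiceState" rm1 rm2
active_of() {
  query=$1; shift              # command that prints "active"/"standby" for an ID
  for id in "$@"; do
    # $query is intentionally unquoted so a multi-word command splits into words
    if [ "$($query "$id")" = "active" ]; then
      echo "$id"
      return 0
    fi
  done
  return 1                     # no active member found
}
```

Running it before and after killing the active process shows whether automatic failover actually promoted the standby.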



