
Hadoop HA (High Availability) Setup Tutorial
  • Basic environment setup
  • File configuration
    • core-site.xml
    • hdfs-site.xml
    • mapred-site.xml
    • yarn-site.xml
  • Notes
  • Related commands

Basic environment setup

(The basic environment setup is covered in a separate article that the original page links to.)

File configuration

=========================================================

core-site.xml


<configuration>
    <!-- ZooKeeper ensemble used by the ZKFC for automatic failover -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>192.168.120.150:2181,192.168.120.151:2181,192.168.120.152:2181</value>
    </property>
    <!-- The default filesystem points at the HA nameservice, not a single host -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/hadoop01/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
</configuration>
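Once a file like this is written, a quick way to sanity-check it without starting any daemon is to grep a value back out. A minimal sketch — the `get_conf` helper and the sample path are hypothetical, and `hdfs getconf -confKey` does the same job once the cluster is up:

```shell
#!/bin/sh
# Hypothetical helper: read one property value out of a Hadoop *-site.xml.
# Assumes the standard one-line <name>...</name> / <value>...</value> layout.
get_conf() {
  grep -A1 "<name>$2</name>" "$1" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}

# Small sample file so the helper can be demonstrated anywhere.
cat > /tmp/core-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
</configuration>
EOF

get_conf /tmp/core-site-sample.xml fs.defaultFS   # prints hdfs://mycluster
```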

=========================================================

hdfs-site.xml


<configuration>
    <!-- HDFS block size in bytes (134217728 = 128 MB) -->
    <property>
        <name>dfs.block.size</name>
        <value>134217728</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- Local storage paths for NameNode and DataNode data -->
    <property>
        <name>dfs.name.dir</name>
        <value>/hadoop01/namenode_data</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/hadoop01/datanode_data</value>
    </property>
    <!-- Logical name of the HA nameservice -->
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC and HTTP addresses of the two NameNodes -->
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>192.168.120.150:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>192.168.120.150:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>192.168.120.151:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>192.168.120.151:50070</value>
    </property>
    <!-- JournalNode quorum that stores the shared edit log -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://192.168.120.150:8485;192.168.120.151:8485;192.168.120.152:8485/mycluster</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/hadoop01/journalnode</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Client-side proxy class that locates the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <!-- Fence the old active NameNode over ssh; fall back to an always-succeed shell -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence
shell(/bin/true)</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
</configuration>
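One raw number above is easier to read once converted: dfs.block.size is given in bytes, and 134217728 bytes is exactly 128 MB. A quick check:

```shell
# dfs.block.size is in bytes; 134217728 = 128 * 1024 * 1024.
echo $((134217728 / 1024 / 1024))   # prints 128
```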

=========================================================

mapred-site.xml


<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- Job history server runs on master -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>192.168.120.150:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>192.168.120.150:19888</value>
    </property>
    <!-- Run small jobs in-process in a single JVM ("uber" mode) -->
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
</configuration>

=========================================================

yarn-site.xml


<configuration>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>hayarn</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- rm1 on slave1, rm2 on slave2 -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>192.168.120.151</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>192.168.120.152</value>
    </property>
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>192.168.120.150:2181,192.168.120.151:2181,192.168.120.152:2181</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Persist ResourceManager state in ZooKeeper so a failover can recover it -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>192.168.120.152</value>
    </property>
</configuration>

=========================================================

Notes

Hadoop 2.6.0
ZooKeeper 3.4.5
192.168.120.150 ===> master (the article may also call this node "hadoop")
192.168.120.151 ===> slave1
192.168.120.152 ===> slave2
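Assuming the IP-to-name mapping above is resolved through /etc/hosts (an assumption — the original covers this in its environment-setup article), each node would carry entries like:

```
192.168.120.150  master
192.168.120.151  slave1
192.168.120.152  slave2
```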

Related commands:

Start ZooKeeper (initial setup):
zkServer.sh start    # run on all three machines
zkServer.sh status
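Since zkServer.sh must run on every machine, the node list can be derived from the ha.zookeeper.quorum string itself. A sketch — the QUORUM variable just repeats the value from core-site.xml, and wiring the loop up to ssh is left as an assumption:

```shell
#!/bin/sh
# Turn the comma-separated quorum string into one host per line,
# e.g. to drive `ssh $host zkServer.sh start` across the cluster.
QUORUM="192.168.120.150:2181,192.168.120.151:2181,192.168.120.152:2181"
echo "$QUORUM" | tr ',' '\n' | cut -d: -f1
# prints the three node IPs, one per line
```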

hadoop-daemon.sh start journalnode    # start the JournalNodes (on all three machines)

hdfs namenode -format    # format HDFS (master only)

hdfs namenode -bootstrapStandby    # sync the NameNode metadata to slave1 (run on slave1)
Alternatively, copy the namenode_data directory under /hadoop01 to slave1:
scp -r /hadoop01/namenode_data slave1:/hadoop01/namenode_data

hdfs zkfc -formatZK    # format the ZKFC state in ZooKeeper (run on one NameNode host: master or slave1)

start-dfs.sh    # start the HDFS services (run on master)

start-yarn.sh    # start the YARN services (run on slave2)

Start the job history server on master:
mr-jobhistory-daemon.sh start historyserver

Start the ResourceManager on slave1:
yarn-daemon.sh start resourcemanager

To test failover, use jps to find the PID of the active NameNode (jps lists process IDs, not port numbers) and kill it:
kill -9 <PID>
Then open the web UI and confirm the standby NameNode has become active.
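Picking the right PID out of jps can be scripted. A sketch — the jps_output sample below is illustrative; on a real node the variable would come from `$(jps)`:

```shell
#!/bin/sh
# Sample of what `jps` might print on a NameNode host (illustrative values).
jps_output='2101 NameNode
2203 DataNode
2345 Jps'

# Extract the NameNode PID so it can be killed for a failover drill.
nn_pid=$(printf '%s\n' "$jps_output" | awk '$2 == "NameNode" {print $1}')
echo "$nn_pid"   # prints 2101
# On the real active NameNode: kill -9 "$nn_pid"
```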

hdfs haadmin -getServiceState nn1    # check NameNode HA state
yarn rmadmin -getServiceState rm1    # check ResourceManager HA state

Source: www.mshxw.com
Original article: https://www.mshxw.com/it/354703.html