
Summary of Common HBase Errors

1. Error message
2014-02-24 12:15:48,507 WARN  [Thread-2] util.DynamicClassLoader (DynamicClassLoader.java:(106)) - Failed to identify the fs of dir hdfs://fulonghadoop/hbase/lib, ignored
java.io.IOException: No FileSystem for scheme: hdfs

Solution

Add the hadoop-hdfs dependency to the project's pom.xml:

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.7.2</version>
    </dependency>

Then add the following property to hdfs-site.xml or core-site.xml:

    <property>
        <name>fs.hdfs.impl</name>
        <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
    </property>
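As a quick sanity check, the property entry can be verified with a few lines of Python. The fragment below is illustrative only (not part of the original fix): it parses a minimal core-site.xml fragment and confirms that fs.hdfs.impl maps to DistributedFileSystem.

```python
import xml.etree.ElementTree as ET

# Minimal core-site.xml fragment containing the fs.hdfs.impl property.
fragment = """
<configuration>
  <property>
    <name>fs.hdfs.impl</name>
    <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
  </property>
</configuration>
"""

root = ET.fromstring(fragment)
# Build a name -> value map of all <property> entries.
props = {p.findtext("name"): p.findtext("value") for p in root.iter("property")}
print(props["fs.hdfs.impl"])  # org.apache.hadoop.hdfs.DistributedFileSystem
```

The same parse can be pointed at the real core-site.xml to confirm the property actually made it into the file.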
2. Error message
ERROR [ClientFinalizer-shutdown-hook] hdfs.DFSClient: Failed to close inode 148879
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /hbase/oldWALs/hadoop-4%2C16020%2C1544498293590.default.1545167908560 (inode 148879): File is not open for writing. Holder DFSClient_NONMAPREDUCE_328068851_1 does not have any open files

Solution

(1) Adjust the following HDFS configuration parameters:

    # Default is 4096, which is too small and can cause HBase to go down; adjust to your workload
    dfs.datanode.max.transfer.threads

    # Maximum number of transfer threads (dfs.datanode.max.xcievers is the older,
    # deprecated name for the same setting). For a DataNode this works like the
    # file-handle limit on Linux: once the number of connections exceeds the
    # configured value, the DataNode refuses further connections. It is usually
    # set quite high, around 40000 or more.
    dfs.datanode.max.xcievers

(2) Raise dfs.datanode.max.xcievers from the previous 4096 to 8192, adjusting to your actual workload.

(3) Raise the maximum number of open files

    ulimit -a                        # inspect current limits
    ulimit -n 65535                  # raise the open-file limit for the current shell
    vim /etc/security/limits.conf
    # add the following lines to make the change persistent
    * soft nofile 65535
    * hard nofile 65535

Then update the HDFS configuration (hdfs-site.xml):

    <property>
        <name>dfs.datanode.max.transfer.threads</name>
        <value>40000</value>
    </property>

    <property>
        <name>dfs.datanode.max.xcievers</name>
        <value>65535</value>
    </property>

# dfs.datanode.max.xcievers is the upper bound on the number of files a DataNode can have open at any one time. It must not exceed the system open-file limit, i.e. the nofile value in /etc/security/limits.conf.
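The constraint above (the DataNode's file ceiling must stay at or below the OS nofile limit) can be checked from Python's standard resource module. This is an illustrative sketch; the max_xcievers value is assumed to match the setting chosen above.

```python
import resource

# Assumed value of dfs.datanode.max.xcievers from hdfs-site.xml.
max_xcievers = 65535

# Read the current open-file limits (soft, hard) for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"nofile: soft={soft} hard={hard}")

if max_xcievers > hard:
    print("WARNING: dfs.datanode.max.xcievers exceeds the system nofile hard limit")
```

Run as the same user that starts the DataNode, since limits.conf entries are applied per user at login.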
3. Error message
INFO  [regionserver/hadoop-4/192.168.168.86:16020-SendThread(hadoop-6:2181)] zookeeper.ClientCnxn: Client session timed out, have not heard from server in 26667ms for sessionid 0x3682c0f03c60033, closing socket connection and attempting reconnect
2019-01-09 21:54:13,016 INFO  [main-SendThread(hadoop-6:2181)] zookeeper.ClientCnxn: Client session timed out, have not heard from server in 26669ms for sessionid 0x3682c0f03c60032, closing socket connection and attempting reconnect
2019-01-09 21:54:13,018 INFO  [LeaseRenewer:work@cluster1] retry.RetryInvocationHandler: Exception while invoking renewLease of class ClientNamenodeProtocolTranslatorPB over hadoop-1/192.168.168.83:9000. Trying to fail over immediately.
org.apache.hadoop.net.ConnectTimeoutException: Call From hadoop-4/192.168.168.86 to hadoop-1:9000 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=hadoop-1/192.168.168.83:9000]; For more details see:  http://wiki.apache.org/hadoop/SocketTimeout

Solution (the timeouts are suspected to be caused by HBase compacting HFiles)

Modify the HBase configuration (hbase-site.xml):

    <property>
        <name>hbase.rpc.timeout</name>
        <value>3600000</value>
    </property>

Modify the HDFS configuration (hdfs-site.xml):

    <property>
        <name>dfs.datanode.socket.write.timeout</name>
        <value>3600000</value>
    </property>

    <property>
        <name>dfs.socket.timeout</name>
        <value>3600000</value>
    </property>
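All three values are in milliseconds, so the settings above give each operation up to one hour before timing out. A tiny illustrative snippet making the conversion explicit:

```python
# The timeout values above are milliseconds; 3,600,000 ms is one hour.
timeouts_ms = {
    "hbase.rpc.timeout": 3600000,
    "dfs.datanode.socket.write.timeout": 3600000,
    "dfs.socket.timeout": 3600000,
}
for name, ms in timeouts_ms.items():
    print(f"{name}: {ms} ms = {ms / 3_600_000:g} h")
```

One hour is generous; if long compactions are the root cause, shorter values combined with tuning compaction settings may be preferable in production.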
Reprinted from www.mshxw.com
Original article: https://www.mshxw.com/it/720263.html