1. Error message

2014-02-24 12:15:48,507 WARN [Thread-2] util.DynamicClassLoader (DynamicClassLoader.java:106) - Failed to identify the fs of dir hdfs://fulonghadoop/hbase/lib, ignored
java.io.IOException: No FileSystem for scheme: hdfs
Solution
Add the hadoop-hdfs dependency to the project's configuration file (i.e. pom.xml):

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.7.2</version>
</dependency>

and add the fs.hdfs.impl property to hdfs-site.xml or core-site.xml:

<property>
    <name>fs.hdfs.impl</name>
    <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
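This error usually means the client cannot resolve an implementation for the hdfs:// scheme, a common symptom of fat jars overwriting the META-INF/services/org.apache.hadoop.fs.FileSystem registration. If editing the config files is not an option, the same property can be set in client code. A minimal sketch, assuming a Hadoop 2.x client on the classpath; the nameservice URI is taken from the log above and the checked path is illustrative:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSchemeCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Register the hdfs:// implementation explicitly, mirroring the
        // fs.hdfs.impl property above; this works around fat jars that lose
        // the ServiceLoader registration for DistributedFileSystem.
        conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        // hdfs://fulonghadoop is the nameservice from the log; replace with yours.
        FileSystem fs = FileSystem.get(URI.create("hdfs://fulonghadoop"), conf);
        System.out.println(fs.exists(new Path("/hbase/lib")));
        fs.close();
    }
}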
2. Error message

ERROR [ClientFinalizer-shutdown-hook] hdfs.DFSClient: Failed to close inode 148879
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /hbase/oldWALs/hadoop-4%2C16020%2C1544498293590.default.1545167908560 (inode 148879): File is not open for writing. Holder DFSClient_NONMAPREDUCE_328068851_1 does not have any open files
Solution
(1) Adjust the HDFS configuration parameters.
dfs.datanode.max.transfer.threads (formerly dfs.datanode.max.xcievers) is the maximum number of transfer threads. For a DataNode it works like the file-handle limit on Linux: when the number of connections on the DataNode exceeds the configured value, the DataNode refuses new connections. The default of 4096 is too small and can bring HBase down, so this parameter is usually set very high, around 40000 or more; adjust it to your actual workload.
(2) Change dfs.datanode.max.xcievers from 4096 to 8192, adjusting to your actual situation.
(3) Raise the maximum number of open files.
Check the current limit and raise it:

ulimit -a
ulimit -n 65535

vim /etc/security/limits.conf
# add the following lines:
* soft nofile 65535
* hard nofile 65535

Then update the HDFS configuration. The dfs.datanode.max.xcievers property is the upper bound on the number of files each DataNode can have open at any one time; it must not exceed the system open-file setting, i.e. the nofile value in /etc/security/limits.conf:

<property>
    <name>dfs.datanode.max.transfer.threads</name>
    <value>40000</value>
</property>

<property>
    <name>dfs.datanode.max.xcievers</name>
    <value>65535</value>
</property>

A client-side sketch of how the lease in this error behaves is shown below.
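For context on the LeaseExpiredException itself: HDFS allows a single writer per file and tracks it with a lease that the client must keep renewing. If the writer stalls or its DataNode connections are refused, the NameNode expires the lease and the eventual close() fails exactly as in the log above. A minimal sketch, assuming a reachable cluster; the nameservice URI comes from the first log and the file path is illustrative:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LeaseDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://fulonghadoop"), conf);
        // HDFS grants a single-writer lease per file. If the writer stalls
        // (GC pause, refused DataNode connections) long enough, the NameNode
        // expires the lease and a later close() fails with
        // LeaseExpiredException. Closing promptly via try-with-resources
        // releases the lease as soon as the write is done.
        try (FSDataOutputStream out = fs.create(new Path("/tmp/lease-demo"))) {
            out.writeUTF("hello");
        }
        fs.close();
    }
}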
3. Error message

INFO [regionserver/hadoop-4/192.168.168.86:16020-SendThread(hadoop-6:2181)] zookeeper.ClientCnxn: Client session timed out, have not heard from server in 26667ms for sessionid 0x3682c0f03c60033, closing socket connection and attempting reconnect
2019-01-09 21:54:13,016 INFO [main-SendThread(hadoop-6:2181)] zookeeper.ClientCnxn: Client session timed out, have not heard from server in 26669ms for sessionid 0x3682c0f03c60032, closing socket connection and attempting reconnect
2019-01-09 21:54:13,018 INFO [LeaseRenewer:work@cluster1] retry.RetryInvocationHandler: Exception while invoking renewLease of class ClientNamenodeProtocolTranslatorPB over hadoop-1/192.168.168.83:9000. Trying to fail over immediately.
org.apache.hadoop.net.ConnectTimeoutException: Call From hadoop-4/192.168.168.86 to hadoop-1:9000 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=hadoop-1/192.168.168.83:9000]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout
Solution (the timeouts are suspected to occur while HBase is compacting HFiles)
Change the HBase configuration (hbase-site.xml):

<property>
    <name>hbase.rpc.timeout</name>
    <value>3600000</value>
</property>

and the HDFS configuration (hdfs-site.xml):

<property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>3600000</value>
</property>

<property>
    <name>dfs.socket.timeout</name>
    <value>3600000</value>
</property>
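The RPC timeout can also be raised for a single client without touching the cluster files. A minimal sketch, assuming the HBase 1.x/2.x client API; the table name is illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class TimeoutClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Mirror the server-side change: allow up to one hour per RPC so
        // calls issued during long compactions do not time out.
        conf.setInt("hbase.rpc.timeout", 3600000);
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("demo_table"))) {
            System.out.println("connected, rpc timeout = "
                    + conn.getConfiguration().get("hbase.rpc.timeout"));
        }
    }
}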