On the new node, check whether a bundled snappy library is already present
ll /usr/lib64 | grep snappy
Remove the bundled snappy library
yum -y remove snappy
Install the build dependencies
sudo yum -y install gcc gcc-c++ autoconf automake libtool
2. Download snappy (the version must match the one on the old nodes)
wget https://src.fedoraproject.org/repo/pkgs/snappy/snappy-1.1.4.tar.gz/sha512/873f655713611f4bdfc13ab2a6d09245681f427fbd4f6a7a880a49b8c526875dbdd623e203905450268f542be24a2dc9dae50e6acc1516af1d2ffff3f96553da/snappy-1.1.4.tar.gz
Install snappy
mkdir -p /usr/local/snappy
tar zxvf snappy-1.1.4.tar.gz -C /usr/local/snappy
cd /usr/local/snappy/snappy-1.1.4
./autogen.sh
./configure
# If make fails, add this extra step first: autoreconf --force --install
make
make install
Note: by default this installs to /usr/local/lib.
Copy the snappy native libraries into /usr/lib64
cp -d /usr/local/lib/* /usr/lib64
3. Install hadoop-snappy
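After dropping new shared libraries into /usr/lib64, it usually helps to refresh the dynamic linker cache. This step is not in the original notes, so treat it as an optional sanity check:

```shell
# Refresh the dynamic linker cache so the newly copied libsnappy is picked up (run as root)
ldconfig
# Count what the linker now knows about snappy; 0 means it was not found
found=$(ldconfig -p | grep -c snappy || true)
echo "snappy entries in linker cache: $found"
```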
Note: you need to clone the hadoop-snappy project; Maven is required to build it.
git clone git://github.com/electrum/hadoop-snappy
cd hadoop-snappy/
# Note: without this libjvm.so symlink, the build fails
ln -s /usr/local/tools/java-se-8u40-ri/jre/lib/amd64/server/libjvm.so /usr/local/lib/
mvn package
The build may fail at this point. First check whether your Maven settings file is configured with the Aliyun mirror repository. In my case it failed a few times for reasons I never tracked down, so for lack of time I skipped this step entirely: the old nodes already had the compiled files and jar, and copying those over works fine.
The required files and jar have been uploaded to Baidu Cloud:
链接:https://pan.baidu.com/s/17CPQ_yuOFmjp33ZI6SkNwQ
Extraction code: 1107
Note: the build outputs some of these files as symbolic links, which become regular files when copied. My new node's copies came from the old node and had also turned into regular files, but they still work.
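The symlink behavior discussed here is easy to demonstrate. The sketch below uses a throwaway directory and fake library names, not the real snappy files:

```shell
# Build a fake "library" with the usual versioned symlink chain
mkdir -p /tmp/cpdemo/src /tmp/cpdemo/dst1 /tmp/cpdemo/dst2
echo 'fake lib' > /tmp/cpdemo/src/libdemo.so.1.1.4
ln -sf libdemo.so.1.1.4 /tmp/cpdemo/src/libdemo.so.1
ln -sf libdemo.so.1 /tmp/cpdemo/src/libdemo.so

cp /tmp/cpdemo/src/* /tmp/cpdemo/dst1/     # plain cp follows the links: three regular files
cp -d /tmp/cpdemo/src/* /tmp/cpdemo/dst2/  # -d preserves them as symlinks

ls -l /tmp/cpdemo/dst1 /tmp/cpdemo/dst2
```

This is why `cp -d` is used throughout this guide: it keeps the `libsnappy.so -> libsnappy.so.1 -> libsnappy.so.1.x.y` chain intact instead of producing three full copies.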
(Screenshots: the jar package and the native library files)
1. Copy the snappy native libraries into $HADOOP_HOME/lib/native/
cp -d /usr/local/lib/* /usr/local/hadoop/hadoop-3.1.3/lib/native
2. Copy hadoop-snappy-0.0.1-SNAPSHOT.jar into $HADOOP_HOME/lib and the snappy native libraries into $HADOOP_HOME/lib/native/
The jar and files copied here are the ones from the link above; if you would rather not build them yourself, just download them, extract, and upload them to the node. If you downloaded the files above, this step can be skipped.
cp /home/hadoop/snappy/hadoop-snappy/target/hadoop-snappy-0.0.1-SNAPSHOT.jar $HADOOP_HOME/lib
cp /home/hadoop/snappy/hadoop-snappy/target/hadoop-snappy-0.0.1-SNAPSHOT-tar/hadoop-snappy-0.0.1-SNAPSHOT/lib/native/Linux-amd64-64/* $HADOOP_HOME/lib/native/
3. Configure hadoop-env.sh, core-site.xml, and mapred-site.xml
Add the following content:
vim hadoop-env.sh
export LD_LIBRARY_PATH=/usr/local/hadoop/hadoop-3.1.3/lib/native:/usr/local/lib/
vim core-site.xml
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,
    org.apache.hadoop.io.compress.DefaultCodec,
    org.apache.hadoop.io.compress.BZip2Codec,
    org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
  <name>io.compression.codec.lzo.class</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
vim mapred-site.xml
<property>
  <name>mapreduce.output.fileoutputformat.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
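The two boolean flags above control different stages: mapreduce.map.output.compress applies to intermediate map output, while mapreduce.output.fileoutputformat.compress applies to the job's final output. The codec property above only covers the final output; if you also want snappy for the intermediate map data, Hadoop reads that from a separate property (not part of the original notes, shown here as an optional addition):

```xml
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
```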
4. Verify
hadoop jar /usr/local/hadoop/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output
You can see that the output files have been compressed.
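Besides running a job, Hadoop ships a built-in check for native libraries; `hadoop checknative -a` prints one row per native library (zlib, snappy, ...) with true/false and the resolved .so path. The guard below is only there so the snippet exits cleanly on a machine without Hadoop on the PATH:

```shell
# Quick sanity check: does Hadoop itself see the snappy native library?
if command -v hadoop >/dev/null 2>&1; then
  hadoop checknative -a | grep -i snappy
else
  echo "hadoop not on PATH; run this on a cluster node"
fi
```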
Copy hadoop-snappy-0.0.1-SNAPSHOT.jar and the snappy native libraries into the $HBASE_HOME/lib directory
These are the same jar and files from the link above.
cp /home/hadoop/snappy/hadoop-snappy/target/hadoop-snappy-0.0.1-SNAPSHOT.jar $HBASE_HOME/lib
mkdir -p $HBASE_HOME/lib/native/Linux-amd64-64/
cp /home/hadoop/snappy/hadoop-snappy/target/hadoop-snappy-0.0.1-SNAPSHOT-tar/hadoop-snappy-0.0.1-SNAPSHOT/lib/native/Linux-amd64-64/* $HBASE_HOME/lib/native/Linux-amd64-64/
配置hbase-env.sh和hbase-site.xml
vim hbase-env.sh
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/hadoop/hadoop-3.1.3/lib/native/:/usr/local/lib
export HBASE_LIBRARY_PATH=$HBASE_LIBRARY_PATH:/usr/local/hbase/hbase-2.1.7/lib/native/Linux-amd64-64/:/usr/local/lib/
export CLASSPATH=$CLASSPATH:$HBASE_LIBRARY_PATH
vim hbase-site.xml
<property>
  <name>hbase.regionserver.codecs</name>
  <value>snappy</value>
</property>
Verify snappy
hbase org.apache.hadoop.hbase.util.CompressionTest file:///home/hadoop/output snappy
Configuration successful!
hbase shell
create 'company', { NAME => 'department', COMPRESSION => 'snappy'}
describe 'company'



