1. My previous post covered a high-availability setup, which may be a bit hard for people just starting out, so I wrote a script for a single-node Hadoop install instead. Suggestions are welcome.
The Linux installation was covered earlier, so I skip it here.
2. I put the jdk, zookeeper, and Hadoop packages in the install folder.
3. Create a folder to hold the script.
4. Contents of the script:
#!/bin/bash
# set a flag to true to install that component
jdk=false
zk=true
hadoop=true
hostname=`hostname`
echo "current host name is $hostname"
whoami=`whoami`
echo "current user is $whoami"
installdir=/opt/soft
if [ ! -d "$installdir" ]
then mkdir $installdir
fi
if [ "$jdk" = true ]
then echo " --------- install java JDK ------------"
tar -zxf /opt/install/jdk-8u111-linux-x64.tar.gz -C /opt/soft/
mv /opt/soft/jdk1.8.0_111 /opt/soft/jdk180
echo "#jdk" >> /etc/profile
echo 'export JAVA_HOME=/opt/soft/jdk180' >> /etc/profile
echo 'export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
# make JAVA_HOME visible to the rest of this script (the hadoop section substitutes it)
export JAVA_HOME=/opt/soft/jdk180
fi
if [ "$zk" = true ];then
echo "----------- install zookeeper --------"
tar -zxf /opt/install/zookeeper-3.4.5-cdh5.14.2.tar.gz -C /opt/soft
mv /opt/soft/zookeeper-3.4.5-cdh5.14.2 /opt/soft/zookeeper345
cp /opt/soft/zookeeper345/conf/zoo_sample.cfg /opt/soft/zookeeper345/conf/zoo.cfg
sed -i '/^dataDir=/c\dataDir=/opt/soft/zookeeper345/datatmp' /opt/soft/zookeeper345/conf/zoo.cfg
echo "server.1=$hostname:2888:3888" >> /opt/soft/zookeeper345/conf/zoo.cfg
mkdir -p /opt/soft/zookeeper345/datatmp/
echo "1" > /opt/soft/zookeeper345/datatmp/myid
echo 'export ZOOKEEPER_HOME=/opt/soft/zookeeper345' >> /etc/profile
echo 'export PATH=$PATH:$ZOOKEEPER_HOME/bin' >> /etc/profile
fi
if [ "$hadoop" = true ];then
echo '------------ install hadoop ------------'
tar -zxf /opt/install/hadoop-2.6.0-cdh5.14.2.tar.gz -C /opt/soft/
mv /opt/soft/hadoop-2.6.0-cdh5.14.2 /opt/soft/hadoop260
echo '------------ configure hadoop-env.sh ------------'
sed -i "/^export JAVA_HOME=/cexport JAVA_HOME=$JAVA_HOME" /opt/soft/hadoop260/etc/hadoop/hadoop-env.sh
echo '------------ configure mapred-env.sh ------------'
sed -i "/^# export JAVA_HOME=/cexport JAVA_HOME=$JAVA_HOME" /opt/soft/hadoop260/etc/hadoop/mapred-env.sh
echo '------------ configure yarn-env.sh ------------'
sed -i "/^# export JAVA_HOME=/cexport JAVA_HOME=$JAVA_HOME" /opt/soft/hadoop260/etc/hadoop/yarn-env.sh
echo '------------ configure core-site.xml ------------'
core_path="/opt/soft/hadoop260/etc/hadoop/core-site.xml"
sed -i '19a<property><name>hadoop.proxyuser.bigdata.groups</name><value>*</value></property>' $core_path
sed -i '19a<property><name>hadoop.proxyuser.bigdata.hosts</name><value>*</value></property>' $core_path
sed -i '19a<property><name>hadoop.tmp.dir</name><value>/opt/soft/hadoop260/hadooptmp/</value></property>' $core_path
sed -i "19a<property><name>fs.defaultFS</name><value>hdfs://$hostname:9000</value></property>" $core_path
echo '------------ configure hdfs-site.xml ------------'
hdfs_path="/opt/soft/hadoop260/etc/hadoop/hdfs-site.xml"
sed -i '19a<property><name>dfs.replication</name><value>1</value></property>' $hdfs_path
sed -i "19a<property><name>dfs.namenode.secondary.http-address</name><value>$hostname:50090</value></property>" $hdfs_path
echo '------------ configure mapred-site.xml ------------'
mapred_path="/opt/soft/hadoop260/etc/hadoop/mapred-site.xml"
cp /opt/soft/hadoop260/etc/hadoop/mapred-site.xml.template $mapred_path
sed -i "19a<property><name>mapreduce.jobhistory.webapp.address</name><value>$hostname:19888</value></property>" $mapred_path
sed -i "19a<property><name>mapreduce.jobhistory.address</name><value>$hostname:10020</value></property>" $mapred_path
sed -i '19a<property><name>mapreduce.framework.name</name><value>yarn</value></property>' $mapred_path
echo '------------ configure yarn-site.xml ------------'
yarn_path="/opt/soft/hadoop260/etc/hadoop/yarn-site.xml"
sed -i '15a<property><name>yarn.log-aggregation.retain-seconds</name><value>86400</value></property>' $yarn_path
sed -i '15a<property><name>yarn.log-aggregation-enable</name><value>true</value></property>' $yarn_path
sed -i "15a<property><name>yarn.resourcemanager.hostname</name><value>$hostname</value></property>" $yarn_path
sed -i '15a<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>' $yarn_path
sed -i '15a<property><name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>' $yarn_path
echo '------------ configure slaves ------------'
sed -i "s/localhost/$hostname/g" /opt/soft/hadoop260/etc/hadoop/slaves
echo "#hadoop" >> /etc/profile
echo 'export HADOOP_HOME=/opt/soft/hadoop260' >> /etc/profile
echo 'export HADOOP_MAPRED_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_COMMON_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_HDFS_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export YARN_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native' >> /etc/profile
echo 'export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin' >> /etc/profile
/opt/soft/hadoop260/bin/hdfs namenode -format
fi
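The `sed -i '/pattern/c...'` (change whole line) edits the script performs on zoo.cfg and the *-env.sh files can be sketched in isolation on a throwaway file; the paths below mirror the script, everything else is a stub:

```shell
# Sketch of sed's "c" (change line) command: the /^dataDir=/ address
# selects the matching line and c\ replaces it wholesale (GNU sed
# one-liner form, as used on zoo.cfg above).
cfg=$(mktemp)
printf 'tickTime=2000\ndataDir=/tmp/zookeeper\nclientPort=2181\n' > "$cfg"
sed -i '/^dataDir=/c\dataDir=/opt/soft/zookeeper345/datatmp' "$cfg"
grep '^dataDir=' "$cfg"   # -> dataDir=/opt/soft/zookeeper345/datatmp
rm -f "$cfg"
```

The other two lines of the stub file are left untouched, which is why the script can safely run this against the full zoo.cfg.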
Summary: this makes installing the single-node version easy; use chmod to make the script executable, then run it directly. The high-availability version would need a much longer script, so I did not script it; see my previous article on the HA setup. I hope this helps a little, and if you have a simpler script, feel free to share it.
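The `sed -i 'Na...'` trick the script uses to inject `<property>` blocks into the *-site.xml files appends text after line N. A self-contained demo on a two-line stub (the real script targets line 19 of core-site.xml; the stub and the localhost value here are just for illustration):

```shell
# "1a" appends the text after line 1 of the stub, i.e. just inside
# <configuration>; each later 19a in the script lands the same way.
xml=$(mktemp)
printf '<configuration>\n</configuration>\n' > "$xml"
sed -i "1a<property><name>fs.defaultFS</name><value>hdfs://localhost:9000</value></property>" "$xml"
cat "$xml"
rm -f "$xml"
```

Because every property is appended at the same line number, the properties end up in the file in the reverse of the order the sed commands run, which is why the script adds fs.defaultFS last.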