This installation sets up a three-node cluster:

```
10.84.185.208 ck01
10.84.185.211 ck02
10.84.185.212 ck03
```
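All three hostnames must resolve on every node. A minimal sketch, assuming name resolution is done through /etc/hosts rather than DNS (the `hosts_entries` helper is illustrative, not part of any tool):

```shell
# Hypothetical helper: print the /etc/hosts entries for the three nodes.
# Append this output to /etc/hosts on ck01, ck02 and ck03 alike.
hosts_entries() {
  cat <<'EOF'
10.84.185.208 ck01
10.84.185.211 ck02
10.84.185.212 ck03
EOF
}
hosts_entries
```

On a real node, run `hosts_entries >> /etc/hosts` as root, then confirm with `ping ck02`.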
1. Install the JDK
```sh
rpm -ivh jdk-8u301-linux-x64.rpm
```
2. Install ZooKeeper
2.1 Download the ZooKeeper tarball
Official download page:
Index of /zookeeper/zookeeper-3.6.3
2.2 Extract the tarball and edit the ZooKeeper config, zoo.cfg
[root@ck01 conf]# more /data/apache-zookeeper-3.6.3-bin/conf/zoo.cfg
```
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
#dataDir=/tmp/zookeeper
dataDir=/data/apache-zookeeper-3.6.3-bin/data
dataLogDir=/data/apache-zookeeper-3.6.3-bin/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=ck01:2888:3888
server.2=ck02:2888:3888
server.3=ck03:2888:3888
## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
```
2.3 Configure myid

```
[root@ck01 data]# mkdir -p /data/apache-zookeeper-3.6.3-bin/data
[root@ck01 data]# more /data/apache-zookeeper-3.6.3-bin/data/myid
1
```

Copy the ZooKeeper directory to the other two nodes with scp, then:
On ck02, change myid to 2
On ck03, change myid to 3
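The per-node myid values can be prepared with a small loop. This sketch writes them into a temporary staging tree for illustration; on the real cluster the target file is /data/apache-zookeeper-3.6.3-bin/data/myid on each host:

```shell
# Write each node's myid into a local staging tree (one subdirectory per host).
staging=$(mktemp -d)
for pair in ck01:1 ck02:2 ck03:3; do
  host=${pair%:*}   # text before the colon: the hostname
  id=${pair#*:}     # text after the colon: the server id
  mkdir -p "$staging/$host"
  echo "$id" > "$staging/$host/myid"
done
cat "$staging/ck02/myid"
```

On the real nodes, each generated file would then be copied (e.g. with scp) to that host's ZooKeeper data directory.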
2.4 ZooKeeper start/stop script
[root@ck01 data]# more /data/shell/zk_start.sh
```sh
#!/bin/sh
age="Usage: $0 (start|stop|status)"
if [ $# -lt 1 ]; then
    echo "$age"
    exit 1
fi
behave=$1
echo "$behave zkServer cluster"
hosts="ck01 ck02 ck03"
for host in $hosts
do
    echo "~~~~~~~~~~~~~~~~$behave $host"
    ssh $host "source /etc/profile; /data/apache-zookeeper-3.6.3-bin/bin/zkServer.sh $behave"
done
```
2.5 Start and verify ZooKeeper
```sh
sh /data/shell/zk_start.sh start
sh /data/shell/zk_start.sh status
```
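On a healthy three-node ensemble, `zkServer.sh status` reports `Mode: leader` on exactly one node and `Mode: follower` on the other two. A small helper (an illustration, not part of ZooKeeper) that pulls the mode out of the status output:

```shell
# Extract the value of the "Mode:" line from zkServer.sh status output.
zk_mode() { awk -F': ' '/^Mode/ {print $2}'; }

# Sample status text stands in for the real command's output here:
printf 'ZooKeeper JMX enabled by default\nMode: follower\n' | zk_mode
```

On a real node this would be used as `/data/apache-zookeeper-3.6.3-bin/bin/zkServer.sh status | zk_mode`.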
3. Install ClickHouse
3.1 Download the ClickHouse packages
https://repo.clickhouse.tech/rpm/lts/x86_64/
This installation uses 21.8.5.7-2, the latest LTS release as of September 2021.
| RPM package | Description |
|---|---|
| clickhouse-common-static-21.8.5.7-2.x86_64.rpm | The compiled ClickHouse binaries |
| clickhouse-client-21.8.5.7-2.noarch.rpm | The clickhouse-client program, the interactive ClickHouse console client |
| clickhouse-server-21.8.5.7-2.noarch.rpm | Configuration files for running ClickHouse as a server |
| clickhouse-common-static-dbg-21.8.5.7-2.x86_64.rpm | ClickHouse binaries with debug information |
3.2 Run the install command on all three nodes
```sh
rpm -ivh clickhouse-client-21.8.5.7-2.noarch.rpm clickhouse-common-static-dbg-21.8.5.7-2.x86_64.rpm clickhouse-server-21.8.5.7-2.noarch.rpm clickhouse-common-static-21.8.5.7-2.x86_64.rpm
```
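After installing, it is worth confirming that all four packages landed on every node. The check below runs against a sample package list; on a real node the here-string would be replaced with the output of `rpm -qa | grep clickhouse`:

```shell
# Sample "rpm -qa" output for the four packages installed above.
installed='clickhouse-common-static-21.8.5.7-2.x86_64
clickhouse-client-21.8.5.7-2.noarch
clickhouse-server-21.8.5.7-2.noarch
clickhouse-common-static-dbg-21.8.5.7-2.x86_64'

# Each expected package must appear in the list; print a marker when it does.
for p in common-static client server common-static-dbg; do
  echo "$installed" | grep -q "^clickhouse-$p-21" && echo "clickhouse-$p: ok"
done
```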
3.3 Directory layout
| Directory | Description |
|---|---|
| /etc/clickhouse-server | Server configuration directory, including the global config.xml and the user config users.xml |
| /data/clickhouse/data | Data directory (the default path is usually changed so data lands on a large-capacity mount) |
| /data/clickhouse/logs | Log directory (likewise usually moved to a large-capacity mount) |
| /data/clickhouse/rpm | Where the rpm packages are kept |
| /data/clickhouse/shell | Start and stop scripts: start_ck.sh, stop_ck.sh |
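The custom layout from the table can be created up front. A sketch using a configurable root, so it is safe to run anywhere (set `CK_ROOT=/data/clickhouse` on the real nodes; a temporary directory is used by default here):

```shell
# Create the custom ClickHouse directory layout under $CK_ROOT.
CK_ROOT=${CK_ROOT:-$(mktemp -d)}
for d in data logs rpm shell; do
  mkdir -p "$CK_ROOT/$d"
done
ls "$CK_ROOT"

# On the real host, the clickhouse user must own the data and log directories
# (run as root):
# chown -R clickhouse:clickhouse "$CK_ROOT/data" "$CK_ROOT/logs"
```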
3.4 Installation log
```
[root@VSJFWJSYY-VLC1-027 clickhouse]# rpm -ivh clickhouse-client-21.8.5.7-2.noarch.rpm clickhouse-common-static-dbg-21.8.5.7-2.x86_64.rpm clickhouse-server-21.8.5.7-2.noarch.rpm clickhouse-common-static-21.8.5.7-2.x86_64.rpm
warning: clickhouse-client-21.8.5.7-2.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID e0c56bd4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:clickhouse-common-static-21.8.5.7################################# [ 25%]
   2:clickhouse-client-21.8.5.7-2     ################################# [ 50%]
   3:clickhouse-server-21.8.5.7-2     ################################# [ 75%]
ClickHouse binary is already located at /usr/bin/clickhouse
Symlink /usr/bin/clickhouse-server already exists but it points to /clickhouse. Will replace the old symlink to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-server to /usr/bin/clickhouse.
Symlink /usr/bin/clickhouse-client already exists but it points to /clickhouse. Will replace the old symlink to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-client to /usr/bin/clickhouse.
Symlink /usr/bin/clickhouse-local already exists but it points to /clickhouse. Will replace the old symlink to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-local to /usr/bin/clickhouse.
Symlink /usr/bin/clickhouse-benchmark already exists but it points to /clickhouse. Will replace the old symlink to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-benchmark to /usr/bin/clickhouse.
Symlink /usr/bin/clickhouse-copier already exists but it points to /clickhouse. Will replace the old symlink to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-copier to /usr/bin/clickhouse.
Symlink /usr/bin/clickhouse-obfuscator already exists but it points to /clickhouse. Will replace the old symlink to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-obfuscator to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-git-import to /usr/bin/clickhouse.
Symlink /usr/bin/clickhouse-compressor already exists but it points to /clickhouse. Will replace the old symlink to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-compressor to /usr/bin/clickhouse.
Symlink /usr/bin/clickhouse-format already exists but it points to /clickhouse. Will replace the old symlink to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-format to /usr/bin/clickhouse.
Symlink /usr/bin/clickhouse-extract-from-config already exists but it points to /clickhouse. Will replace the old symlink to /usr/bin/clickhouse.
Creating symlink /usr/bin/clickhouse-extract-from-config to /usr/bin/clickhouse.
Creating clickhouse group if it does not exist.   -- the clickhouse group and user are created automatically; the server runs as the clickhouse user by default
 groupadd -r clickhouse
Creating clickhouse user if it does not exist.
 useradd -r --shell /bin/false --home-dir /nonexistent -g clickhouse clickhouse
Will set ulimits for clickhouse user in /etc/security/limits.d/clickhouse.conf.
Creating config directory /etc/clickhouse-server/config.d that is used for tweaks of main server configuration.
Creating config directory /etc/clickhouse-server/users.d that is used for tweaks of users configuration.
Config file /etc/clickhouse-server/config.xml already exists, will keep it and extract path info from it.
/etc/clickhouse-server/config.xml has /var/lib/clickhouse/ as data path.
/etc/clickhouse-server/config.xml has /var/log/clickhouse-server/ as log path.
Users config file /etc/clickhouse-server/users.xml already exists, will keep it and extract users info from it.
 chown --recursive clickhouse:clickhouse '/etc/clickhouse-server'
Creating log directory /var/log/clickhouse-server/.
Creating data directory /var/lib/clickhouse/.
Creating pid directory /var/run/clickhouse-server.
 chown --recursive clickhouse:clickhouse '/var/log/clickhouse-server/'
 chown --recursive clickhouse:clickhouse '/var/run/clickhouse-server'
 chown clickhouse:clickhouse '/var/lib/clickhouse/'
 groupadd -r clickhouse-bridge
 useradd -r --shell /bin/false --home-dir /nonexistent -g clickhouse-bridge clickhouse-bridge
 chown --recursive clickhouse-bridge:clickhouse-bridge '/usr/bin/clickhouse-odbc-bridge'
 chown --recursive clickhouse-bridge:clickhouse-bridge '/usr/bin/clickhouse-library-bridge'
Enter password for default user:   -- set the password for the default user here
Password for default user is saved in file /etc/clickhouse-server/users.d/default-password.xml.
Setting capabilities for clickhouse binary. This is optional.
ClickHouse has been successfully installed.
Start clickhouse-server with:
 sudo clickhouse start
Start clickhouse-client with:
 clickhouse-client --password
Created symlink from /etc/systemd/system/multi-user.target.wants/clickhouse-server.service to /etc/systemd/system/clickhouse-server.service.
   4:clickhouse-common-static-dbg-21.8################################# [100%]
```
3.5 Edit the config file config.xml
vim /etc/clickhouse-server/config.xml (only the items to change are shown below, with their standard element names; find and edit the matching entries in the full file)

```xml
<log>/data/clickhouse/clickhouse-server/clickhouse-server.log</log>
<errorlog>/data/clickhouse/clickhouse-server/clickhouse-server.err.log</errorlog>
<http_port>8123</http_port>
<tcp_port>9000</tcp_port>
<listen_host>::1</listen_host>
<listen_host>ck01</listen_host>
<path>/data/clickhouse/data</path>
<tmp_path>/data/clickhouse/data/tmp/</tmp_path>
<user_files_path>/data/clickhouse/data/user_files/</user_files_path>
<access_control_path>/data/clickhouse/data/access/</access_control_path>
<timezone>Asia/Shanghai</timezone>
<include_from>/etc/clickhouse-server/metrika.xml</include_from>
<format_schema_path>/data/clickhouse/data/format_schemas/</format_schema_path>
```
3.6 Create the include file metrika.xml
[root@ck01 data]# more /etc/clickhouse-server/metrika.xml
(element names restored per the standard metrika.xml layout; the macros section shown is ck01's, so adjust shard/replica on ck02 and ck03 accordingly)

```xml
<yandex>
    <remote_servers>
        <ck_cluster>
            <shard>
                <weight>1</weight>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>ck01</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>Ebscn@sjzx123</password>
                </replica>
            </shard>
            <shard>
                <weight>1</weight>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>ck02</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>Ebscn@sjzx123</password>
                </replica>
            </shard>
            <shard>
                <weight>1</weight>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>ck03</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>Ebscn@sjzx123</password>
                </replica>
            </shard>
        </ck_cluster>
    </remote_servers>
    <macros>
        <layer>01</layer>
        <shard>01</shard>
        <replica>ck01</replica>
    </macros>
    <networks>
        <ip>::/0</ip>
    </networks>
    <zookeeper-servers>
        <node index="1">
            <host>ck01</host>
            <port>2181</port>
        </node>
        <node index="2">
            <host>ck02</host>
            <port>2181</port>
        </node>
        <node index="3">
            <host>ck03</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>
    <clickhouse_compression>
        <case>
            <min_part_size>10000000000</min_part_size>
            <min_part_size_ratio>0.01</min_part_size_ratio>
            <method>lz4</method>
        </case>
    </clickhouse_compression>
</yandex>
```
4. Start, stop, and log in to ClickHouse
Run these on every node.
4.1 Start command
```sh
nohup sudo -u clickhouse clickhouse-server --config-file=/etc/clickhouse-server/config.xml &
```
4.2 Stop command
```sh
sudo -u clickhouse clickhouse stop
```
4.3 Start/stop script, ck_start.sh
```sh
#!/bin/sh
age="Usage: $0 (start|stop|restart)"
hosts="ck01 ck02 ck03"
if [ $# -lt 1 ]; then
    echo "$age"
    exit 1
fi
behave=$1
if [ "$behave" = "stop" ]; then
    for host in $hosts
    do
        echo "~~~~~~~~~~~~~~~~$behave $host clickhouse-server"
        ssh $host "sudo -u clickhouse clickhouse stop"
    done
elif [ "$behave" = "start" -o "$behave" = "restart" ]; then
    for host in $hosts
    do
        echo "~~~~~~~~~~~~~~~~$behave $host clickhouse-server"
        # For a restart, stop the old server before launching a new one.
        if [ "$behave" = "restart" ]; then
            ssh $host "sudo -u clickhouse clickhouse stop"
        fi
        ssh $host "source /etc/profile; nohup sudo -u clickhouse clickhouse-server --config-file=/etc/clickhouse-server/config.xml > /dev/null 2>&1 &"
    done
else
    echo "$age"
fi
```
4.4 Login command
```sh
clickhouse-client --password
# or, fully spelled out:
clickhouse-client -u default --password --port 9000 -h ck01
```
5. View cluster information
```sql
select * from system.clusters;
```
6. Test ClickHouse
6.1 Create the local table
Adding ON CLUSTER ck_cluster creates the table on every node of the cluster in one statement; without it, the DDL has to be run on each node separately. (The test database must already exist; if it does not, create it first with `CREATE DATABASE IF NOT EXISTS test ON CLUSTER ck_cluster;`.)

```sql
CREATE TABLE test.user_table ON CLUSTER ck_cluster
(
    id UInt64,
    name String,
    age UInt16
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/replicated/{shard}/user_table', '{replica}')
PARTITION BY age
ORDER BY id;
```
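The `{shard}` and `{replica}` placeholders are filled in from each node's `<macros>` section in metrika.xml, so every replica registers under its own ZooKeeper path. A small illustration of that substitution for ck01 (shard 01, replica ck01):

```shell
# Expand the {shard} macro in the ReplicatedMergeTree ZooKeeper path.
shard=01
replica=ck01
path='/clickhouse/tables/replicated/{shard}/user_table'
expanded=$(echo "$path" | sed "s/{shard}/$shard/")
echo "$expanded replica=$replica"
```

On ck02 and ck03 the same template expands with their own macro values, which is what keeps each shard's replicas separate in ZooKeeper.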
6.2 Create the distributed table
```sql
CREATE TABLE test.user_table_all ON CLUSTER ck_cluster AS test.user_table
ENGINE = Distributed(ck_cluster, test, user_table, rand());
```
6.3 Insert data
Rows are scattered across the three nodes' local tables; querying the distributed table aggregates all of them.

```sql
insert into test.user_table_all values (1,'ss',22);
insert into test.user_table_all values (2,'ss',23);
insert into test.user_table_all values (3,'ss',24);
insert into test.user_table_all values (4,'ss',25);
insert into test.user_table_all values (5,'ss',26);
```
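With `rand()` as the sharding key and three shards of equal weight, each row lands on the shard given by the key modulo the total weight. A sketch of that selection rule (the `pick_shard` helper and its inputs are illustrative; `$1` stands in for the value `rand()` produced for a row):

```shell
# Shard choice for Distributed(ck_cluster, test, user_table, rand()):
# three shards of weight 1, so the row goes to shard (key % 3) + 1.
pick_shard() { echo $(( $1 % 3 + 1 )); }

pick_shard 7
pick_shard 9
```

Because the key is random, the five inserted rows end up roughly evenly spread over ck01, ck02 and ck03.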
6.4 Query the data
On ck01:
Query user_table
Query user_table_all
On ck02:
Query user_table
Query user_table_all
On ck03:
Query user_table
Query user_table_all
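The expected relationship behind these queries, in numbers: each node's local user_table holds only its own shard's rows, while user_table_all returns all five inserted rows from any node. A toy check of that invariant (the per-node counts here are hypothetical; the actual split depends on rand()):

```shell
# Hypothetical per-node row counts for user_table after the five inserts.
local_counts="2 1 2"

total=0
for c in $local_counts; do
  total=$((total + c))
done
echo "$total"   # the distributed table should report this same total on every node
```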