clickhouse-operator creates, configures, and manages ClickHouse clusters running on Kubernetes. The operator provides the following features:
Creating ClickHouse clusters defined by a Custom Resource specification
Custom storage provisioning (VolumeClaim templates)
Custom pod templates
Custom service templates for endpoints
ClickHouse configuration and settings (including ZooKeeper integration)
Flexible templating
ClickHouse cluster scaling, including automatic schema propagation
ClickHouse version upgrades
Exporting ClickHouse metrics to Prometheus
The deployment breaks down into three main steps: install clickhouse-operator, deploy ZooKeeper on Kubernetes, and configure the ClickHouse cluster.
1. Install clickhouse-operator
Source file on GitHub: https://github.com/radondb/radondb-clickhouse-kubernetes/clickhouse-operator-install.yaml
Notes on this file:
1. ZooKeeper is not included and must be deployed separately.
2. It contains the default ClickHouse database account and password, which can be changed (see the sketch after this list).
3. For the operator to watch all Kubernetes namespaces, deploy it into the kube-system namespace; otherwise it only watches the namespace it is deployed into.
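For note 2, the defaults end up in the operator's users ConfigMap, which the install file creates (its name appears in the apply output below). A minimal sketch for locating and changing them; the exact key names depend on your copy of the file:

# Inspect the manifest for the default account/password before applying it
grep -n -i -E 'user|password' clickhouse-operator-install.yml

# After installation, the defaults can also be edited in place:
kubectl -n kube-system edit configmap etc-clickhouse-operator-usersd-files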
Deploy clickhouse-operator:
[root@master01 radondb-clickhouse-kubernetes-main]# kubectl -n kube-system apply -f clickhouse-operator-install.yml
customresourcedefinition.apiextensions.k8s.io/clickhouseinstallations.clickhouse.radondb.com created
customresourcedefinition.apiextensions.k8s.io/clickhouseinstallationtemplates.clickhouse.radondb.com created
customresourcedefinition.apiextensions.k8s.io/clickhouseoperatorconfigurations.clickhouse.radondb.com created
serviceaccount/clickhouse-operator created
clusterrole.rbac.authorization.k8s.io/clickhouse-operator-kube-system created
clusterrolebinding.rbac.authorization.k8s.io/clickhouse-operator-kube-system created
configmap/etc-clickhouse-operator-files created
configmap/etc-clickhouse-operator-confd-files created
configmap/etc-clickhouse-operator-configd-files created
configmap/etc-clickhouse-operator-templatesd-files created
configmap/etc-clickhouse-operator-usersd-files created
deployment.apps/clickhouse-operator created
service/clickhouse-operator-metrics created
Check the status:
[root@master02 ~]# kubectl get pod -n kube-system |grep clickhouse-operator
clickhouse-operator-6dd5d46c98-f85lh   2/2   Running   0   20m
[root@master02 ~]# kubectl get svc -n kube-system |grep clickhouse-operator
clickhouse-operator-metrics   ClusterIP   10.108.49.214   <none>   8888/TCP   21m
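If the pod does not reach 2/2 Running, the operator's logs are the first thing to check. A sketch, assuming the container is named clickhouse-operator as in the upstream deployment:

# Tail the operator container's logs to confirm startup and the watched namespaces
kubectl -n kube-system logs deployment/clickhouse-operator -c clickhouse-operator --tail=20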
2. Configure ZooKeeper
See the previous post, "k8s部署zookeeper集群" (deploying a ZooKeeper cluster on k8s), on xiaxia2022's CSDN blog.
ZooKeeper is already deployed there, so here we only check its status:
[root@master02 ~]# kubectl get pod -n jyy-test |grep zookeeper
zookeeper-0   1/1   Running   0   3m23s
zookeeper-1   1/1   Running   0   3m23s
zookeeper-2   1/1   Running   0   3m23s
[root@master02 ~]# kubectl get svc -n jyy-test |grep zookeeper
zookeeper            ClusterIP   10.98.90.206   <none>   2181/TCP,7000/TCP   3m47s
zookeeper-headless   ClusterIP   None           <none>   2888/TCP,3888/TCP   3m47s
[root@master02 ~]# kubectl get sts -n jyy-test |grep zookeeper
zookeeper   3/3   149m
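Before pointing ClickHouse at it, it is worth confirming that every ZooKeeper member is serving. A quick probe, assuming the official ZooKeeper image where zkServer.sh is on the PATH:

# Each member should report its mode: one "leader" and two "follower"
for i in 0 1 2; do kubectl -n jyy-test exec zookeeper-$i -- zkServer.sh status; done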
3. Deploy the ClickHouse cluster
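The clickHouse.yaml applied below is not reproduced in this post. A minimal sketch of what it contains, with the layout inferred from the pod names seen later (installation jyy, cluster clickhouse, 2 shards x 2 replicas) and the ZooKeeper service from step 2; verify the field names and apiVersion against the CRDs installed above:

# Write a minimal ClickHouseInstallation manifest (illustrative sketch)
cat > clickHouse.yaml <<'EOF'
apiVersion: clickhouse.radondb.com/v1
kind: ClickHouseInstallation
metadata:
  name: jyy
spec:
  configuration:
    zookeeper:
      nodes:
        - host: zookeeper.jyy-test
          port: 2181
    clusters:
      - name: clickhouse
        layout:
          shardsCount: 2
          replicasCount: 2
EOF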
[root@master01 radondb-clickhouse-kubernetes-main]# kubectl -n jyy-test apply -f clickHouse.yaml
clickhouseinstallation.clickhouse.radondb.com/jyy created
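With a 2-shard, 2-replica layout the operator creates four pods, named chi-<installation>-<cluster>-<shard>-<replica>-0. List them, then open a client session in one:

# List the ClickHouse pods created by the operator
kubectl -n jyy-test get pod | grep chi-jyy

# Open an interactive clickhouse-client session on shard 0, replica 0
kubectl -n jyy-test exec -it chi-jyy-clickhouse-0-0-0 -- clickhouse-client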
Create a distributed table
# Create a distributed table
chi-jyy-clickhouse-0-0-0.chi-jyy-clickhouse-0-0.jyy-test.svc.cluster.local :) CREATE TABLE test AS system.one ENGINE = Distributed('clickhouse', 'system', 'one')
CREATE TABLE test AS system.one
ENGINE = Distributed('clickhouse', 'system', 'one')
Query id: c2952221-d4bc-4256-91d8-1560198a05e1
Ok.
0 rows in set. Elapsed: 0.161 sec.
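The system.one table above is only a smoke test. A more typical pattern, sketched below, pairs a ReplicatedMergeTree local table with a Distributed table routing over it; the {shard} and {replica} macros are filled in by the operator on each pod, and the table and column names here are illustrative:

# Sketch: a replicated local table on every node, plus a Distributed table over it
kubectl -n jyy-test exec -it chi-jyy-clickhouse-0-0-0 -- clickhouse-client --multiquery --query "
CREATE TABLE events_local ON CLUSTER 'clickhouse' (ts DateTime, msg String)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events_local', '{replica}')
ORDER BY ts;
CREATE TABLE events ON CLUSTER 'clickhouse' AS events_local
ENGINE = Distributed('clickhouse', 'default', 'events_local', rand());
"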
# Query the data -- each shard returns one row from system.one
chi-jyy-clickhouse-0-0-0.chi-jyy-clickhouse-0-0.jyy-test.svc.cluster.local :) select * from test
SELECT *
FROM test
Query id: 6d54283c-7ef6-4013-bd07-795807a8518a
┌─dummy─┐
│ 0 │
└───────┘
┌─dummy─┐
│ 0 │
└───────┘
2 rows in set. Elapsed: 0.084 sec.
# Check which shards returned the data, using hostName()
# First run
chi-jyy-clickhouse-0-0-0.chi-jyy-clickhouse-0-0.jyy-test.svc.cluster.local :) select hostName() from test;
SELECT hostName()
FROM test
Query id: 111502a0-c241-4d8d-8c53-6137230d175b
┌─hostName()───────────────┐
│ chi-jyy-clickhouse-0-0-0 │
└──────────────────────────┘
┌─hostName()───────────────┐
│ chi-jyy-clickhouse-1-0-0 │
└──────────────────────────┘
2 rows in set. Elapsed: 0.009 sec.
# Second run
chi-jyy-clickhouse-0-0-0.chi-jyy-clickhouse-0-0.jyy-test.svc.cluster.local :) select hostName() from test;
SELECT hostName()
FROM test
Query id: f61ef2aa-76cc-4f59-927a-dc2079a87a3f
┌─hostName()───────────────┐
│ chi-jyy-clickhouse-0-0-0 │
└──────────────────────────┘
┌─hostName()───────────────┐
│ chi-jyy-clickhouse-1-1-0 │
└──────────────────────────┘
2 rows in set. Elapsed: 0.005 sec.
# Across the two runs both shards returned data, but shard 1 was served by a different replica each time (queries are load-balanced across replicas)
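To see the topology behind this behavior, inspect the remote_servers configuration the operator generated, from any of the pods:

# Show the shard/replica layout of the 'clickhouse' cluster
kubectl -n jyy-test exec -it chi-jyy-clickhouse-0-0-0 -- clickhouse-client --query "
SELECT cluster, shard_num, replica_num, host_name
FROM system.clusters
WHERE cluster = 'clickhouse'"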



