
Kafka 2.8.x Study Notes: Multi-Tenancy, Security, and More

Multi-Tenancy Implementation

Kafka has no built-in, systematic tenant management, but partial multi-tenancy can be achieved by managing users and topics:

Creating user spaces for tenants (sometimes called namespaces)

- Use ACLs to restrict each user to topics with a designated prefix.
- Enforce prefix-compliant topic creation with a custom CreateTopicPolicy/AlterConfigPolicy (cf. KIP-108 and the broker setting create.topic.policy.class.name).
- Disable automatic topic creation.

Configuring topics with data retention policies: set the topic-level retention.bytes (size) and retention.ms (time) settings to manage the data lifecycle.
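As a concrete sketch, topic-level retention could be set with kafka-configs.sh; the topic name and values here are illustrative, and the command assumes a reachable broker:

```shell
# Keep data for 7 days (604800000 ms) or 1 GiB, whichever limit is reached first;
# the topic name acme.infosec.events is a placeholder
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --entity-type topics --entity-name acme.infosec.events \
  --add-config 'retention.ms=604800000,retention.bytes=1073741824'
```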

Securing topics and clusters with encryption, authentication, and authorization

Encryption: data transferred between Kafka brokers and Kafka clients, between brokers, between brokers and ZooKeeper nodes, and between brokers and other, optional tools.
Authentication: verify the identity of clients and brokers.
Authorization: restrict which topics a user can access, using a script along these lines:

bin/kafka-acls.sh \
  --bootstrap-server broker1:9092 \
  --add --allow-principal User:Alice \
  --producer \
  --resource-pattern-type prefixed --topic acme.infosec.
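The read side can be granted analogously; the principal, consumer group, and prefix below are illustrative, and the command assumes a reachable broker:

```shell
# Allow Bob to consume from any topic under the acme.infosec. prefix;
# --consumer also requires the consumer group to be named via --group
bin/kafka-acls.sh \
  --bootstrap-server broker1:9092 \
  --add --allow-principal User:Bob \
  --consumer --group acme.infosec.app \
  --resource-pattern-type prefixed --topic acme.infosec.
```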

Isolating tenants with quotas and rate limits

Client quotas: Kafka supports different types of (per-user-principal) client quotas.

KIP-599 (2020) introduced controller_mutations_rate, which uses a token-bucket mechanism to throttle the rate of topic mutations (topic creation, partition creation, and topic deletion); see https://cwiki.apache.org/confluence/display/KAFKA/KIP-599%3A+Throttle+Create+Topic%2C+Create+Partition+and+Delete+Topic+Operations

Network bandwidth quotas: by default, each unique client group receives a fixed quota in bytes/sec as configured by the cluster. Quotas can be applied at the user or client level; see https://kafka.apache.org/28/documentation.html#quotas

bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1
bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-name clientA
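The KIP-599 controller mutation quota mentioned above is applied through the same tool; the rate value and user name are illustrative:

```shell
# Limit user1 to roughly 10 partition mutations per second
bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --add-config 'controller_mutations_rate=10' \
  --entity-type users --entity-name user1
```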

Server quotas

Limiting connection counts

Per-IP limits:

max.connections.per.ip: default maximum number of connections per IP.
max.connections.per.ip.overrides: comma-separated list of per-IP or per-hostname overrides to the default maximum, e.g. "hostName:100,127.0.0.1:200".

Per-broker limits:

max.connections: maximum total number of connections per broker.
max.connection.creation.rate: maximum rate at which new connections may be created.
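A minimal sketch of these settings as a server.properties fragment; the values are illustrative, and the file is written to /tmp here rather than a real broker config:

```shell
# Illustrative broker connection limits (would normally live in server.properties)
cat > /tmp/server-connection-limits.properties <<'EOF'
max.connections.per.ip=100
max.connections.per.ip.overrides=hostName:100,127.0.0.1:200
max.connections=2000
max.connection.creation.rate=50
EOF
```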

Listener (port) level connection limits

listener.name.internal.max.connections
listener.name.internal.max.connection.creation.rate

Listener reference: https://blog.csdn.net/lidelin10/article/details/105316252
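Per-listener variants of the same limits, sketched for an assumed listener named INTERNAL; values are illustrative and the file is written to /tmp for demonstration:

```shell
# Illustrative per-listener limits; "internal" must match a configured listener name
cat > /tmp/listener-limits.properties <<'EOF'
listener.name.internal.max.connections=500
listener.name.internal.max.connection.creation.rate=10
EOF
```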

Monitoring and metering: collect, export, and alert on custom metrics according to each tenant's requirements.

Inter-cluster data sharing (cf. geo-replication)

Security Overview

Authentication of connections to brokers from clients (producers and consumers), other brokers and tools, using either SSL or SASL. Kafka supports the following SASL mechanisms:

SASL/OAUTHBEARER - starting at version 2.0
SASL/SCRAM-SHA-256 and SASL/SCRAM-SHA-512 - starting at version 0.10.2.0
SASL/PLAIN - starting at version 0.10.0.0
SASL/GSSAPI (Kerberos) - starting at version 0.9.0.0

Authentication of connections from brokers to ZooKeeper.
Encryption of data transferred between brokers and clients, between brokers, or between brokers and tools using SSL (note that there is a performance degradation when SSL is enabled, the magnitude of which depends on the CPU type and the JVM implementation).
Authorization of read/write operations by clients.
Reference: https://www.cnblogs.com/rexcheny/articles/12884990.html. Supported mechanisms:

GSSAPI: Kerberos authentication; can integrate with directory services such as Active Directory. Minimum Kafka version: 0.9.

PLAIN: simple username/password authentication. Minimum Kafka version: 0.10.

SCRAM: addresses PLAIN's lack of dynamic credential updates and strengthens its security mechanism. Minimum Kafka version: 0.10.2.

OAUTHBEARER: based on the OAuth 2 authentication framework. Minimum Kafka version: 2.0.
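As a sketch of how a client selects one of these mechanisms, a SCRAM-SHA-256-over-TLS client configuration might carry properties like these; the credentials are placeholders, and the file is written to /tmp for illustration:

```shell
# Illustrative client properties for SASL/SCRAM over TLS
cat > /tmp/client-sasl.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="alice" password="alice-secret";
EOF
```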


Authorization is pluggable, and integration with external authorization services is supported. The authorizer implementation is configured via authorizer.class.name, e.g. authorizer.class.name=kafka.security.authorizer.AclAuthorizer.

Miscellaneous

Kafka Connect

Purpose: integrate Kafka with other systems for data import and export. Reference: https://blog.csdn.net/wjandy0211/article/details/93642257

Features

A common framework for Kafka connectors
Distributed and standalone modes
REST interface
Automatic offset management
Distributed and scalable by default: builds on the existing group management protocol
Streaming/batch integration
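A sketch of how a connector is defined and submitted through the REST interface; the connector name, file path, and topic are placeholders, while FileStreamSourceConnector itself ships with Kafka:

```shell
# Illustrative FileStreamSource connector config
cat > /tmp/file-source.json <<'EOF'
{
  "name": "file-source-demo",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/tmp/input.txt",
    "topic": "connect-demo"
  }
}
EOF
# Submit to a running Connect worker's REST interface:
# curl -X POST -H "Content-Type: application/json" \
#      --data @/tmp/file-source.json http://localhost:8083/connectors
```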

Supported sink and source connectors:

https://www.confluent.io/hub/?_ga=2.208833618.1729877910.1646979664-160366992.1641452267

https://docs.confluent.io/home/connect/self-managed/supported.html
Kafka Streams
Testing framework: Trogdor

Workloads

ConsumeBench
RoundTripWorkload
ProduceBench

Faults

ProcessStopFault
NetworkPartitionFault: implemented using iptables

External Processes

ExternalCommandWorker

Source: https://www.mshxw.com/it/761104.html