
Building an ELK + Kafka + Filebeat log collection platform
  • Environment
    • Software preparation
    • Install the JDK on all nodes
    • Kafka and ZooKeeper installation
      • Unpack the packages
      • Install ZooKeeper
      • Install Kafka
    • Install and configure elasticsearch
    • Install and configure filebeat
    • Install and configure logstash
    • Install and configure kibana

Environment

Node IP          Role                               Hostname
192.168.205.6    kafka / zookeeper / filebeat       node01
192.168.205.7    kafka / zookeeper / filebeat       node02
192.168.205.8    kafka / zookeeper / filebeat       node03
192.168.205.9    elasticsearch / logstash / kibana  es-node1
192.168.205.10   elasticsearch / logstash           es-node2
192.168.205.11   elasticsearch / logstash           es-node3
Software preparation

The software required is:
1) JDK
Download:
https://mirrors.huaweicloud.com/java/jdk/

2) Kafka (includes the ZooKeeper module)
Download:
http://kafka.apache.org/downloads

3) Elasticsearch, Kibana, Filebeat, Logstash
Download:
https://www.elastic.co/cn/downloads/

Install the JDK on all nodes

Upload the JDK package to every machine and install it:

rpm -ivh jdk-8u202-linux-x64.rpm

Edit the environment variables with vim /etc/profile:

export JAVA_HOME=/usr/java/jdk1.8.0_202-amd64
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:$CLASSPATH
export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
export PATH=$PATH:${JAVA_PATH}
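
Reload the profile so the new variables take effect in the current shell (a small extra step, not spelled out in the original):

source /etc/profile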


Check that the installation succeeded:
java -version

Kafka and ZooKeeper installation

Unpack the packages

Upload the package to each kafka/zookeeper node and unpack it:

mkdir /home/kafka
tar -xzvf kafka_2.12-3.0.0.tgz -C /home/kafka
cd /home/kafka
mv kafka_2.12-3.0.0 kafka
Install ZooKeeper
cd /home/kafka
mkdir zkdata
mkdir zklog

Edit /home/kafka/kafka/config/zookeeper.properties with vim and change the following items:

dataDir=/home/kafka/zkdata
dataLogDir=/home/kafka/zklog
clientPort=2181
maxClientCnxns=100
tickTime=2000
initLimit=10
syncLimit=5
server.1=192.168.205.6:12888:13888
server.2=192.168.205.7:12888:13888
server.3=192.168.205.8:12888:13888

Each node's myid must match its server.N entry above.
On 192.168.205.6:
echo 1 > /home/kafka/zkdata/myid
On 192.168.205.7:
echo 2 > /home/kafka/zkdata/myid
On 192.168.205.8:
echo 3 > /home/kafka/zkdata/myid

Manage ZooKeeper with systemd
Edit /usr/lib/systemd/system/zookeeper.service with vim and add the following content:

[Unit]
Description=Zookeeper service
After=network.target

[Service]
Type=simple
User=root
Group=root
ExecStart=/home/kafka/kafka/bin/zookeeper-server-start.sh /home/kafka/kafka/config/zookeeper.properties
ExecStop=/home/kafka/kafka/bin/zookeeper-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target


Start ZooKeeper and enable it at boot:

systemctl start zookeeper.service
systemctl enable zookeeper.service
systemctl status zookeeper.service
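
As an optional sanity check (not part of the original steps), the zookeeper-shell bundled with Kafka can confirm the ensemble is answering; the address below assumes node01:

# list the root znodes; a healthy ensemble returns at least [zookeeper]
/home/kafka/kafka/bin/zookeeper-shell.sh 192.168.205.6:2181 ls /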

Install Kafka

mkdir /home/kafka/kafkalog
Edit /home/kafka/kafka/config/server.properties with vim and change it as follows:

broker.id=1
port=9092
host.name=192.168.205.6
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/home/kafka/kafkalog
num.partitions=16
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.205.6:2181,192.168.205.7:2181,192.168.205.8:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0

The broker.id on the three nodes is 1, 2 and 3 respectively; fill in host.name and zookeeper.connect according to your environment, as in the sketch below.
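
One way to apply the per-node values is a quick sed edit; a sketch, assuming the file was first copied from node01 and that the IDs and IPs follow the table above:

# on node02 (192.168.205.7), for example
sed -i 's/^broker.id=1/broker.id=2/' /home/kafka/kafka/config/server.properties
sed -i 's/^host.name=192.168.205.6/host.name=192.168.205.7/' /home/kafka/kafka/config/server.properties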
Manage Kafka with systemd
Edit /usr/lib/systemd/system/kafka.service with vim and add the following content:

[Unit]
Description=Apache Kafka server (broker)
After=network.target  zookeeper.service

[Service]
Type=simple
User=root
Group=root
ExecStart=/home/kafka/kafka/bin/kafka-server-start.sh /home/kafka/kafka/config/server.properties
ExecStop=/home/kafka/kafka/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start Kafka and enable it at boot:
systemctl start kafka
systemctl enable kafka
systemctl status kafka
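
Before wiring up filebeat, a quick smoke test (optional, not in the original steps) is to create and list the topic that filebeat will use later:

# create k8stopic with one replica on each kafka node
/home/kafka/kafka/bin/kafka-topics.sh --bootstrap-server 192.168.205.6:9092 --create --topic k8stopic --partitions 16 --replication-factor 3
# confirm it exists
/home/kafka/kafka/bin/kafka-topics.sh --bootstrap-server 192.168.205.6:9092 --list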

Install and configure elasticsearch

Install:

rpm -ivh elasticsearch-7.15.1-x86_64.rpm
mkdir -p /data/nodes
chown elasticsearch:elasticsearch /data/nodes
mkdir -p /var/log/elasticsearch
chown -R elasticsearch:elasticsearch /var/log/elasticsearch

Configure the JVM heap

In elasticsearch 7.x the heap size is set through jvm.options rather than the old ES_HEAP_SIZE variable. Create /etc/elasticsearch/jvm.options.d/heap.options containing:

-Xms4g
-Xmx4g

Configure the yml file
Edit /etc/elasticsearch/elasticsearch.yml with vim and change it to the following:

#Cluster name
cluster.name: ES-Cluster
#Node name
node.name: ES-node1
#Whether this node is master-eligible
node.master: true
#Whether this node may store index data
node.data: true
#Log directory
path.logs: /var/log/elasticsearch
#Bind address
network.host: 0.0.0.0
#HTTP port
http.port: 9200
#Cluster TCP transport port
transport.tcp.port: 9300
#Cluster host list
discovery.seed_hosts: ["192.168.205.9:9300","192.168.205.10:9300","192.168.205.11:9300"]
#Required when bootstrapping a brand-new cluster; may be omitted on later restarts
cluster.initial_master_nodes: ["192.168.205.9"]
#Number of concurrent shard rebalances allowed in the cluster, default 2
cluster.routing.allocation.cluster_concurrent_rebalance: 32
#Number of concurrent recoveries per node when adding/removing nodes or rebalancing, default 2
cluster.routing.allocation.node_concurrent_recoveries: 32
#Number of concurrent primary recoveries per node during initial recovery, default 4
cluster.routing.allocation.node_initial_primaries_recoveries: 32
#Data directory
path.data: /data
#Whether to enable cross-origin access
http.cors.enabled: true
#Allowed origins once CORS is enabled; * means no restriction
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
xpack.security.enabled: true
xpack.license.self_generated.type: basic
#Transport SSL must be enabled when security is enabled
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/elastic-certificates.p12
Fill in node.name according to the actual node; the master node is the one configured with node.master: true (here ES-node1).

Start elasticsearch and enable it at boot. Note that with the transport ssl keystore configured above, elasticsearch needs the certificate created in the next step before it will start cleanly:
systemctl start elasticsearch
systemctl enable elasticsearch

Create a CA certificate (on one node only):
cd /usr/share/elasticsearch
bin/elasticsearch-certutil ca    (press Enter through the prompts to finish)
Generate the node certificate:
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
Copy the certificate to the other elasticsearch nodes:

scp /usr/share/elasticsearch/elastic-certificates.p12  192.168.205.10:/usr/share/elasticsearch/
scp /usr/share/elasticsearch/elastic-certificates.p12  192.168.205.11:/usr/share/elasticsearch/

Create a directory for the certificate and move it into place:
cd /usr/share/elasticsearch
mkdir /etc/elasticsearch/certs
mv elastic-certificates.p12 /etc/elasticsearch/certs/
chmod 777 /etc/elasticsearch/certs

Restart elasticsearch on all nodes so the certificate takes effect, and set the passwords of the built-in users (the elastic and kibana_system passwords used later; in 7.x these are normally set with bin/elasticsearch-setup-passwords interactive under /usr/share/elasticsearch).

Install and configure filebeat

Install filebeat:

rpm -ivh filebeat-7.15.1-x86_64.rpm

Edit /etc/filebeat/filebeat.yml as follows:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - ...*.log
  fields:
    log_topic: k8stopic
name: "192.168.205.6"
output.kafka:
  enabled: true
  hosts: ["192.168.205.6:9092", "192.168.205.7:9092", "192.168.205.8:9092"]
  version: "0.10"
  topic: '%{[fields][log_topic]}'
  partition.round_robin:
    reachable_only: true
  worker: 2
  required_acks: 1
  compression: gzip
  max_message_bytes: 10000000
logging.level: info

Fill in the log files under paths according to your environment (the ...*.log entry above is a placeholder); here the logs collected were the calico network logs on the nodes and the container logs of the k8s cluster.
Fill in name and hosts according to your environment.

Start filebeat and enable it at boot:
systemctl start filebeat
systemctl enable filebeat

Check filebeat status:
systemctl status filebeat
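
To confirm logs are actually reaching kafka (an optional check), consume a few messages from the topic:

# read 5 messages from the beginning of k8stopic, then exit
/home/kafka/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.205.6:9092 --topic k8stopic --from-beginning --max-messages 5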

Install and configure logstash

Install logstash:
rpm -ivh logstash-7.15.1-x86_64.rpm
Configure logstash:
vim /etc/logstash/conf.d/kafka_os_into_es.conf

input {
    kafka {
        bootstrap_servers => "192.168.205.6:9092,192.168.205.7:9092,192.168.205.8:9092"
        topics => ["k8stopic"]
        codec => "json"
    }
}

output {
    elasticsearch {
        hosts => ["192.168.205.9:9200","192.168.205.10:9200","192.168.205.11:9200"]
        user => "elastic"
        password => "P@ssw0rd"
        index => "kubernetes-%{+YYYY-MM-dd}"
    }
}

where:
bootstrap_servers is the address and port list of the kafka cluster,
topics is the topic configured in filebeat,
hosts is the IPs and port of the elasticsearch cluster,
password is the password set for the elastic user in elasticsearch.
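
Before starting the service, the pipeline can be syntax-checked (an optional step; the paths assume the rpm layout):

# -t validates the config and exits without processing events
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/kafka_os_into_es.conf -t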

Start logstash and enable it at boot:
systemctl start logstash
systemctl enable logstash

Check logstash status:
systemctl status logstash
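
Once logstash has run for a while, you can confirm indices are being written to elasticsearch (an optional check, reusing the elastic password from the logstash config):

curl -u elastic:P@ssw0rd "http://192.168.205.9:9200/_cat/indices/kubernetes-*?v"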

Install and configure kibana

Install kibana:
rpm -ivh kibana-7.15.1-x86_64.rpm

Edit the kibana configuration
Edit /etc/kibana/kibana.yml with vim and change it to the following:

i18n.locale: "zh-CN"
server.port: 5601
server.host: "0.0.0.0"
#elasticsearch.url: "http://192.168.205.9:9200"
elasticsearch.hosts: ["http://192.168.205.9:9200","http://192.168.205.10:9200","http://192.168.205.11:9200"]
kibana.index: ".kibana"
logging.dest: /var/log/kibana.log
elasticsearch.username: "kibana_system"
elasticsearch.password: "P@ssw0rd"
server.publicBaseUrl: "http://192.168.205.9:5601"

where elasticsearch.hosts is the URL list of the deployed elasticsearch cluster,
elasticsearch.password is the password set for the kibana_system user,
and server.publicBaseUrl is http://<node ip>:<port>; fill in per your environment.

Start kibana and enable it at boot:
systemctl start kibana
systemctl enable kibana

Check kibana status:
systemctl status kibana
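
Kibana also exposes a status endpoint that can be queried before opening the browser (an optional check; depending on the security settings it may ask for the elastic credentials):

curl -s http://192.168.205.9:5601/api/status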

Log in to the kibana UI:
http://192.168.205.9:5601/

Log in with the elastic user name and the password that was set.

Click Discover.

Click "Create index pattern".

Set the index pattern to kubernetes*; it will match everything written by the logstash output index kubernetes-%{+YYYY-MM-dd}.
