
Collecting Spring Boot logs with ELK


    • Building the ELK cluster
      • Installing from rpm packages (tar-package installation follows)
      • Installing the ELK cluster from tar packages
        • Installing Elasticsearch from a tar package
      • Installing Kibana
      • Installing Logstash
    • Spring Boot log output configuration
    • Shipping logs to ELK
      • filebeat ships logs to Kafka
      • filebeat ships local logs to Logstash
      • Logstash forwards logs to Elasticsearch
    • Controlling the ELK index lifecycle

Building the ELK cluster
Installing from rpm packages (a tar-package walkthrough follows later)
Download the rpm package
	wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/yum/7.6.2/elasticsearch-7.6.2-x86_64.rpm
	
Configuration files live in /etc/elasticsearch
ll /etc/elasticsearch/
total 40
-rw-rw---- 1 root elasticsearch   199 May  6 14:01 elasticsearch.keystore
-rw-rw---- 1 root elasticsearch  2847 Mar 26 14:41 elasticsearch.yml
-rw-rw---- 1 root elasticsearch  2373 Mar 26 14:41 jvm.options
-rw-rw---- 1 root elasticsearch 17545 Mar 26 14:41 log4j2.properties
-rw-rw---- 1 root elasticsearch   473 Mar 26 14:41 role_mapping.yml
-rw-rw---- 1 root elasticsearch   197 Mar 26 14:41 roles.yml
-rw-rw---- 1 root elasticsearch     0 Mar 26 14:41 users
-rw-rw---- 1 root elasticsearch     0 Mar 26 14:41 users_roles

node-1 configuration
cat elasticsearch.yml|grep -v "^#"
cluster.name: chauncy-elk
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.seed_hosts: ["172.20.5.11", "172.20.5.12", "172.20.5.13"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
gateway.recover_after_nodes: 2
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true

node-2 configuration
cat elasticsearch.yml|grep -v "^#"
cluster.name: chauncy-elk
node.name: node-2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.seed_hosts: ["172.20.5.11", "172.20.5.12", "172.20.5.13"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
gateway.recover_after_nodes: 2
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
node-3 configuration
cat elasticsearch.yml|grep -v "^#"
cluster.name: chauncy-elk
node.name: node-3
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.seed_hosts: ["172.20.5.11", "172.20.5.12", "172.20.5.13"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
gateway.recover_after_nodes: 2
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true

Adjust the JVM parameters
vim jvm.options
-Xms4g
-Xmx4g
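The 4g figure is specific to this cluster's hosts. A common rule of thumb (not from the original article) is to give Elasticsearch about half the machine's RAM, capped below ~31g so compressed object pointers stay enabled. A quick sketch of that arithmetic, assuming a hypothetical 16 GB host:

```shell
# Rule-of-thumb heap sizing: min(RAM/2, 31g). RAM_GB=16 is an assumed example value.
RAM_GB=16
HEAP_GB=$(( RAM_GB / 2 ))
if [ "$HEAP_GB" -gt 31 ]; then HEAP_GB=31; fi
# -Xms and -Xmx should always be set to the same value.
echo "-Xms${HEAP_GB}g -Xmx${HEAP_GB}g"
```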

Adjust kernel parameters
vim /etc/sysctl.conf
vm.max_map_count=655360

vim /etc/security/limits.conf 
* soft memlock unlimited
* hard memlock unlimited
* soft nofile 131072
* hard nofile 131072

vim /etc/security/limits.d/90-nproc.conf 
* soft nproc 4096

sysctl -p 


Add LimitMEMLOCK=infinity under [Service] in the systemd unit (a drop-in created with "systemctl edit elasticsearch" survives package upgrades, while editing the shipped unit file directly does not):
vim /usr/lib/systemd/system/elasticsearch.service
LimitMEMLOCK=infinity

Start
systemctl daemon-reload
systemctl restart elasticsearch


Troubleshooting
If a node fails to join the cluster
	Delete the failed node's local cluster state (note: this wipes the data stored on that node)
		 rm -rf /var/lib/elasticsearch/nodes

Check cluster health
	curl -sXGET http://localhost:9200/_cluster/health?pretty=true
	If built-in user passwords have been set (bin/elasticsearch-setup-passwords creates them once xpack security is enabled)
		curl -u elastic:meifute@elastic -XGET 'http://localhost:9200/_cat/nodes?pretty'

List the cluster nodes
curl -XGET 'http://localhost:9200/_cat/nodes?pretty'
172.20.5.15 8 96 0 0.00 0.01 0.05 dilm - node-3
172.20.5.13 5 95 0 0.00 0.01 0.05 dilm * node-1
172.20.5.14 8 96 0 0.00 0.01 0.05 dilm - node-2
Installing the ELK cluster from tar packages
Installing Elasticsearch from a tar package
node1
tar xf elasticsearch-7.6.2-linux-x86_64.tar.gz -C /opt/
mkdir /opt/elasticsearch-7.6.2/data
useradd es

Configuration file
vim /opt/elasticsearch-7.6.2/config/elasticsearch.yml
cluster.name: chauncy-elk
node.name: node1
path.data: /opt/elasticsearch-7.6.2/data/
path.logs: /opt/elasticsearch-7.6.2/logs/
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.seed_hosts: ["172.20.5.11", "172.20.5.12", "172.20.5.13"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
gateway.recover_after_nodes: 2
# Fielddata cache ceiling (no default); with it set, the least recently
# used (LRU) fielddata entries are evicted to make room for new data.
# Here old cache entries are cleaned up once fielddata reaches 20% of the heap.
indices.fielddata.cache.size: 20%
indices.breaker.total.use_real_memory: false
# Fielddata circuit breaker: caps fielddata memory (7.x default: 40% of heap).
indices.breaker.fielddata.limit: 40%
# Request circuit breaker: estimates the size of structures needed to complete
# other parts of a request, e.g. an aggregation bucket (7.x default: 60% of heap).
indices.breaker.request.limit: 40%
# Parent (total) breaker: combines request and fielddata so together they don't
# exceed 70% of the heap by default, or 95% when use_real_memory is true.
indices.breaker.total.limit: 95%
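To make the percentages above concrete, here is what they work out to in absolute terms for the 4g heap these nodes use (a sketch of the arithmetic only, not an Elasticsearch command):

```shell
HEAP_MB=4096                               # the 4g heap from jvm.options
FIELDDATA_MB=$(( HEAP_MB * 40 / 100 ))     # indices.breaker.fielddata.limit: 40%
REQUEST_MB=$(( HEAP_MB * 40 / 100 ))       # indices.breaker.request.limit: 40%
TOTAL_MB=$(( HEAP_MB * 95 / 100 ))         # indices.breaker.total.limit: 95%
echo "fielddata<=${FIELDDATA_MB}MB request<=${REQUEST_MB}MB total<=${TOTAL_MB}MB"
```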

Kernel parameter changes
vim /etc/sysctl.conf
vm.max_map_count=655360
fs.file-max=655350
vim jvm.options
-Xms4g
-Xmx4g
vim /etc/security/limits.conf 
* soft memlock unlimited
* hard memlock unlimited
* soft nofile 131072
* hard nofile 131072

vim /etc/security/limits.d/90-nproc.conf 
* soft nproc 655360
sysctl -p
Grant ownership to the es user
chown -R es:es /opt
Start in the background
su - es
cd elasticsearch-7.6.2/
./bin/elasticsearch -d -p ./es.pid


node2
tar xf elasticsearch-7.6.2-linux-x86_64.tar.gz -C /opt/
mkdir /opt/elasticsearch-7.6.2/data
useradd es
Configuration file
vim /opt/elasticsearch-7.6.2/config/elasticsearch.yml
cluster.name: chauncy-elk
node.name: node2
path.data: /opt/elasticsearch-7.6.2/data/
path.logs: /opt/elasticsearch-7.6.2/logs/
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.seed_hosts: ["172.20.5.11", "172.20.5.12", "172.20.5.13"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
gateway.recover_after_nodes: 2

Kernel parameter changes
vim /etc/sysctl.conf
vm.max_map_count=655360
fs.file-max=655350
vim jvm.options
-Xms4g
-Xmx4g
vim /etc/security/limits.conf 
* soft memlock unlimited
* hard memlock unlimited
* soft nofile 131072
* hard nofile 131072

vim /etc/security/limits.d/90-nproc.conf 
* soft nproc 655360
sysctl -p
Grant ownership to the es user
chown -R es:es /opt
Start in the background
su - es
cd elasticsearch-7.6.2/
./bin/elasticsearch -d -p ./es.pid
Check cluster health
curl -sXGET http://localhost:9200/_cluster/health?pretty=true

node3
tar xf elasticsearch-7.6.2-linux-x86_64.tar.gz -C /opt/
mkdir /opt/elasticsearch-7.6.2/data
useradd es

Configuration file
vim /opt/elasticsearch-7.6.2/config/elasticsearch.yml
cluster.name: chauncy-elk
node.name: node3
path.data: /opt/elasticsearch-7.6.2/data/
path.logs: /opt/elasticsearch-7.6.2/logs/
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.seed_hosts: ["172.20.5.11", "172.20.5.12", "172.20.5.13"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
gateway.recover_after_nodes: 2
Kernel parameter changes
vim /etc/sysctl.conf
vm.max_map_count=655360
fs.file-max=655350
vim jvm.options
-Xms4g
-Xmx4g
vim /etc/security/limits.conf 
* soft memlock unlimited
* hard memlock unlimited
* soft nofile 131072
* hard nofile 131072
vim /etc/security/limits.d/90-nproc.conf 
* soft nproc 655360
sysctl -p

Grant ownership to the es user
chown -R es:es /opt
Start in the background
su - es
cd elasticsearch-7.6.2/
./bin/elasticsearch -d -p ./es.pid
Or start as the opt user
cd /opt/elasticsearch-7.6.2
su - opt -c "`pwd`/bin/elasticsearch -d -p ./es.pid"
Check cluster health
curl -sXGET http://localhost:9200/_cluster/health?pretty=true

Installing Kibana
Install with yum
	yum install https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/yum/7.6.2/kibana-7.6.2-x86_64.rpm
	Configuration file /etc/kibana/kibana.yml
		server.port: 5601
		server.host: "0.0.0.0"
		elasticsearch.hosts: ["http://172.20.5.11:9200", "http://172.20.5.12:9200", "http://172.20.5.13:9200"]
		i18n.locale: "zh-CN"

Starting from a tar install (if you installed from the tar package)
		nohup su - opt -c "`pwd`/bin/kibana" &>/dev/null &
Installing Logstash
Install with yum
	yum install https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/yum/7.6.2/logstash-7.6.2.rpm
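The Logstash-to-Elasticsearch pipeline itself is not shown in the article; a minimal pipeline consistent with the rest of this setup might look like the following sketch (the file name, the 5044 Beats port, and the index pattern are assumptions; the hosts are the cluster nodes configured above):

```conf
# /etc/logstash/conf.d/beats-es.conf (hypothetical file name)
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["172.20.5.11:9200", "172.20.5.12:9200", "172.20.5.13:9200"]
    index => "%{[fields][logtopic]}-%{+YYYY.MM.dd}"
  }
}
```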
Spring Boot log output configuration

Log flow: filebeat --> logstash --> elasticsearch
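The article does not show the Spring Boot side of the log output configuration. A minimal logback-spring.xml sketch that produces a fixed-name log file with timestamp-prefixed lines, of the kind the filebeat inputs in this article tail, might look like this (the encoder pattern and rollover settings are assumptions):

```xml
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <!-- fixed active file name, so filebeat never re-reads rotated history -->
    <file>/home/prod/logs/app-collector.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>/home/prod/logs/app-collector.%d{yyyy-MM-dd}.log</fileNamePattern>
      <maxHistory>7</maxHistory>
    </rollingPolicy>
    <encoder>
      <!-- each line starts with a yyyy-MM-dd timestamp, which a date-anchored
           multiline pattern in filebeat can key on -->
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="FILE"/>
  </root>
</configuration>
```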

Shipping logs to ELK
filebeat ships logs to Kafka
Create the Kafka topics
kafka-topics.sh --zookeeper 192.168.11.111:2181 --create --topic app-log-collector --partitions 3  --replication-factor 2
kafka-topics.sh --zookeeper 192.168.11.111:2181 --create --topic error-log-collector --partitions 3  --replication-factor 3

kafka-topics.sh --zookeeper 192.168.11.111:2181 --list

filebeat configuration:
filebeat.prospectors:
- input_type: log
  paths:
    ## app-<service-name>.log; the name is hard-coded so rotated historical files are not picked up
    - /home/prod/logs/app-collector.log
  # value written to the _type field in ES
  document_type: "app-log"
  multiline:
    #pattern: '^\s*(\d{4}|\d{2})-(\d{2}|[a-zA-Z]{3})-(\d{2}|\d{4})'   # alternative: match lines starting with a timestamp such as 2017-11-15 08:04:23:889
    pattern: '^\['                             # match lines starting with "["
    negate: true                               # lines NOT matching the pattern are continuations
    match: after                               # append them to the end of the previous line
    max_lines: 2000                            # maximum number of lines per event
    timeout: 2s                                # flush the event if no new line arrives in this time
  fields:
    logbiz: collector
    logtopic: app-log-collector   ## per-service value, used as the kafka topic
    env: dev

- input_type: log

  paths:
    - /usr/local/logs/error-collector.log
  document_type: "error-log"
  multiline:
    #pattern: '^\s*(\d{4}|\d{2})-(\d{2}|[a-zA-Z]{3})-(\d{2}|\d{4})'   # alternative: match lines starting with a timestamp such as 2017-11-15 08:04:23:889
    pattern: '^\['                             # match lines starting with "["
    negate: true                               # lines NOT matching the pattern are continuations
    match: after                               # append them to the end of the previous line
    max_lines: 2000                            # maximum number of lines per event
    timeout: 2s                                # flush the event if no new line arrives in this time
  fields:
    logbiz: collector
    logtopic: error-log-collector   ## per-service value, used as the kafka topic
    env: dev
    
output.kafka:
  enabled: true
  hosts: ["192.168.11.51:9092"]
  topic: '%{[fields.logtopic]}'
  partition.hash:
    reachable_only: true
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1
logging.to_files: true
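The multiline settings deserve a closer look: with pattern '^\[', negate: true, and match: after, any line that does not start with "[" is glued onto the previous event. That grouping can be simulated locally (awk stands in for filebeat here; the sample log lines are invented):

```shell
# Group continuation lines (those NOT starting with "[") onto the previous event,
# mimicking filebeat's multiline pattern '^\[' + negate: true + match: after.
EVENTS=$(printf '%s\n' \
  '[2020-05-06 14:00:01] ERROR boom' \
  'java.lang.NullPointerException' \
  '    at com.example.App.main(App.java:42)' \
  '[2020-05-06 14:00:02] INFO recovered' |
awk '/^\[/ { if (ev != "") print ev; ev = $0; next }
     { ev = ev "\\n" $0 }
     END { if (ev != "") print ev }')
# Two events come out: the ERROR line with its stack trace folded in, and the INFO line.
echo "$EVENTS"
```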

Check the configuration
./filebeat test config -c filebeat.yml
Start
/usr/local/filebeat-6.6.0/filebeat &
filebeat ships local logs to Logstash
# filebeat runs on the same host as the Spring Boot application
cat /etc/filebeat/filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/prod/logs/m-mall-admin/m-mall-admin.log
  scan_frequency: 5s
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'   # match lines starting with a date such as 2020-05-06
  multiline.negate: true
  multiline.match: after
  tags: ["m-mall-admin-b-01"]
# if the host has several logs to collect, add more "- type: log" entries
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
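As shown, this filebeat.yml has inputs but no output section; for the filebeat --> logstash flow stated earlier, an output block along these lines is still needed (the Logstash host address and the conventional 5044 Beats port are assumptions):

```yaml
output.logstash:
  hosts: ["172.20.5.11:5044"]   # address of the Logstash host (assumed)
```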