Elasticsearch + Logstash (plugin) + Kibana: a distributed log collection system (keep the three versions consistent)
ELK can collect the logs of dozens or hundreds of servers onto one central server in real time, and then present them to us through the Kibana web UI.
Why are system and business logs important in day-to-day operations?
Logs mainly include system logs, application logs, and security logs. Both operations staff and developers need to read them: by analyzing logs you can understand server load, performance, security, failures, and business errors, and then correct the problems.
Problems with traditional log collection:
In traditional projects, log files are scattered across many different nodes, which makes searching the logs very tedious.
Logs are generally not stored in a relational database;
they are usually written to files, MongoDB, or Elasticsearch.
How ELK distributed log collection works
1. Install the Logstash log collection agent on every server node in the cluster.
2. Each server node feeds its logs into Logstash.
3. Logstash formats the logs as JSON, creates a new index for each day, and outputs the events to Elasticsearch.
4. Query the log data from the browser through Kibana.
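The per-day index naming in step 3 relies on Logstash's sprintf date reference in the index name. As a rough sketch of the same rule in Python (the type name pb is simply the example used in the config later in these notes):

```python
from datetime import date

def daily_index(log_type: str, day: date) -> str:
    # Mimics the Logstash index pattern "logstash-%{type}-%{+YYYY.MM.dd}"
    return f"logstash-{log_type}-{day.strftime('%Y.%m.%d')}"

print(daily_index("pb", date(2022, 3, 1)))  # logstash-pb-2022.03.01
```

Because the date is part of the index name, each day's logs land in their own index, which makes retention (deleting old indices) cheap.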
AppServer: the application servers
Logstash must be installed on every application server.
Elasticsearch runs as a cluster, so if one node dies the others keep running.
Kibana provides the web pages for searching the data.
Install order: ES -- Logstash -- Kibana
Installing Elasticsearch (search engine, log storage)
1. Download Elasticsearch | Elastic
2. Unpack the Elasticsearch archive.
3. Edit the config file elasticsearch.yml (reference: ElasticSearch单机多实例环境部署 - shaomine - 博客园)
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.name: ictr_ElasticSearch
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
# Give this node its own name
node.name: ictr_node1
node.master: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
node.attr.rack: r1
#
# This setting limits how many ES storage instances can be started on a single
# machine; since we want to run multiple instances, add it to the config file
# and set it to 2 or higher:
node.max_local_storage_nodes: 2
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
network.host: 127.0.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
http.port: 9200
transport.tcp.port: 9301
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# Prevent the "split brain" by configuring the majority of nodes
# (total number of master-eligible nodes / 2 + 1):
discovery.zen.minimum_master_nodes: 1
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
# ---------------------------------- Security ----------------------------------
#
# *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don't have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features.
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-stack-security.html
#
# Enable x-pack authentication; username elastic, password 123456
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: "OPTIONS, HEAD, GET, POST, PUT, DELETE"
http.cors.allow-headers: "X-Requested-With, X-Auth-Token, Content-Type, Content-Length, Authorization"
4. From the elasticsearch directory, run: ./bin/elasticsearch
5. Open http://127.0.0.1:9200 and log in with username elastic, password 123456.
6. Setting the Elasticsearch password: Elasticsearch设置密码 - 简书
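With x-pack security on, every HTTP request to the cluster must carry credentials. With curl that is `curl -u elastic:123456 http://127.0.0.1:9200`; under the hood this is just an HTTP Basic Authorization header, which can be sketched as follows (elastic/123456 are the example credentials assumed above):

```python
import base64

user, password = "elastic", "123456"
# Basic auth: base64-encode "user:password" and prefix with "Basic "
token = base64.b64encode(f"{user}:{password}".encode()).decode()
headers = {"Authorization": f"Basic {token}"}
print(headers)
```

This same header is what Logstash's user/password options and Kibana's elasticsearch.username/elasticsearch.password settings send on your behalf.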
How the Logstash plugin works (a "porter": it formats local log files and ships them to Elasticsearch)
Logstash is deployed on the application servers that produce the logs and is used to collect them.
Logstash is a free and open server-side data processing pipeline that ingests data from multiple sources, transforms it, and then sends it to your favorite "stash". It collects, filters, and analyzes logs.
Core flow: Logstash has three stages, input - filter - output. It is a tool for receiving, processing, and forwarding logs, and it supports all log types: system logs, WebSphere logs, error logs, application logs, and so on.
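As a conceptual illustration only (not Logstash's real implementation), the three stages can be sketched as a small pipeline over JSON-encoded log lines:

```python
import json

def pipeline(lines, log_type="pb"):
    """Toy input -> filter -> output pipeline for JSON log lines."""
    for line in lines:            # input: receive raw lines from a source
        event = json.loads(line)  # codec: decode each line as JSON
        event["type"] = log_type  # filter: tag/enrich the event
        yield event               # output: hand the event downstream

events = list(pipeline(['{"level": "ERROR", "msg": "boom"}']))
print(events)  # [{'level': 'ERROR', 'msg': 'boom', 'type': 'pb'}]
```

In real Logstash the three stages are declared in a config file, as shown in the logstash.conf below.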
Installing Logstash
1. Download Logstash Free | Get Started Now | Elastic
2. Unpack the logstash archive.
3. Place a logstash.conf file in the unpacked logstash/bin directory:
# There can be multiple input and output blocks
input {
  file {
    # path: location of the log file to read
    path => "/Users/bsnpc3x/documents/logs/malaysia_pb.error.log"
    start_position => "beginning"
    type => "pb"      # any name that fits your business
    codec => "json"
  }
  # file {
  #   # path: location of the log file to read
  #   path => "/Users/bsnpc3x/documents/logs/malaysia_pb.log"
  #   start_position => "beginning"
  #   type => "pb"    # any name that fits your business
  #   codec => "json"
  # }
}
# filter {
#
# }
output {
  # If the configured Elasticsearch server is not running, this fails with:
  # Connect to 127.0.0.1:9200 [/127.0.0.1] failed: Connection refused (Connection refused)"}
  elasticsearch {
    hosts => "http://127.0.0.1:9200"
    # Index naming rule: create a new index per day
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    #document_id => "%{id}"
    user => "elastic"
    password => "123456"
  }
  # Standard output, formatted with the ruby debug codec:
  #stdout { codec => rubydebug }
}
4. From the logstash/bin directory, run the following commands.
4.1 Test the configuration first: ./logstash -f logstash.conf --config.test_and_exit
4.2 Start Logstash: ./logstash -f logstash.conf
Logstash log window:
Only when the elasticsearch log window prints the output below is it proven that the log data Logstash collected was written to Elasticsearch successfully; otherwise the write failed.
4.3 You can also run ./logstash -f logstash.conf --config.reload.automatic
The --config.reload.automatic option enables automatic configuration reloading, so you do not have to stop and restart Logstash every time you modify the configuration file.
4.4 Visit http://localhost:9600/ to confirm that Logstash started successfully.
5. Troubleshooting
5.1 Error message:
Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
Fix: delete logstash/data/.lock
5.2 Logstash starts normally, but writing the collected log data to Elasticsearch fails with the following errors:
5.2.1 LogStash::Json::ParserError — the log contains whitespace or control characters
Fix:
By default, escape sequences such as \t and \n in the processed text are not interpreted; you need to enable Logstash's character-escape support:
In logstash's config/logstash.yml, find config.support_escapes, uncomment it, and change its value to true (the default is false):
config.support_escapes: true
You can also add a filter; see:
Logstash Filter 配置_云淡风轻-CSDN博客
Logstash filter 的使用 - kszsa - 博客园
【Spring Boot】Spring Boot之利用Logstash将日志转换成以JSON的格式存储和输出 - N!CE波 - 博客园
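The ParserError above is easy to reproduce outside Logstash: strict JSON parsers reject raw control characters (such as an unescaped tab) inside a string, which is exactly what the escape support works around. A small demonstration:

```python
import json

good = '{"msg": "a\\tb"}'  # tab written as the two-character escape \t
bad = '{"msg": "a\tb"}'    # a raw tab character inside the string

print(json.loads(good))    # parses fine: {'msg': 'a\tb'}
try:
    json.loads(bad)
except json.JSONDecodeError as err:
    print("parse failed:", err.msg)
```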
5.2.2 could not index event to Elasticsearch
Fix:
Kibana download and installation
1. Download Kibana Free | Get Started Now | Elastic
2. Unpack the kibana archive.
3. Edit the config file kibana.yml (reference: kibana配置文件kibana.yml参数详解 - 三度 - 博客园)
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"
server.host: "127.0.0.1"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false
# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""
# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
server.name: "kibana"
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]
elasticsearch.hosts: ["http://127.0.0.1:9200"]
elasticsearch.username: "elastic"
elasticsearch.password: "123456"
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "home"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"
# Kibana can also authenticate to Elasticsearch via "service account tokens".
# You may use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
elasticsearch.requestTimeout: 99999
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000
# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false
# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid
# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"
i18n.locale: "zh-CN"
4. From the kibana directory, run: ./bin/kibana
5. Open http://127.0.0.1:5601/ and log in with username elastic, password 123456.



