
[Logs] Setting up ELK with Docker

Built with the sebp/elk image

Official documentation: https://elk-docker.readthedocs.io/#usage

Notes

Allocate at least 4 GB of memory to Docker.
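A quick way to sanity-check how much memory the Docker daemon actually has (assuming a reasonably recent Docker CLI; `MemTotal` is reported in bytes):

```shell
# Total memory available to the Docker daemon, in bytes
docker info --format '{{.MemTotal}}'
```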

Pull the image
docker pull sebp/elk:7.13.2
Start the container

Make sure the mapped ports (5601 for Kibana, 9200 for Elasticsearch, 5044 for Beats) are not already in use.
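One way to check, assuming `ss` is available on the host (no output means nothing is listening on those ports):

```shell
# List listeners on the three ELK ports; empty output means they are free
ss -lnt | grep -E ':(5601|9200|5044) '
```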

docker run -it -d -p 5601:5601 -p 9200:9200 -p 5044:5044 --name elk sebp/elk:7.13.2
# Or, with the Logstash config directory (conf.d) mounted from the host
docker run -it -d -p 5601:5601 -p 9200:9200 -p 5044:5044 -v /mnt/d/dockerv/elk/logstash/conf.d:/etc/logstash/conf.d --name elk sebp/elk:7.13.2

In my case startup was not so smooth, since I had not tuned the host's memory settings yet.
Startup error:

 * Starting periodic command scheduler cron                     [ OK ]
 * Starting Elasticsearch Server                                [fail]
waiting for Elasticsearch to be up (1/30)
waiting for Elasticsearch to be up (2/30)
...
ERROR: [1] bootstrap checks failed. You must address the points described in the following [1] lines before starting Elasticsearch.
bootstrap check failure [1] of [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
ERROR: Elasticsearch did not exit normally - check the logs at /var/log/elasticsearch/elasticsearch.log
waiting for Elasticsearch to be up (24/30)
waiting for Elasticsearch to be up (25/30)
...
waiting for Elasticsearch to be up (30/30)
Couldn't start Elasticsearch. Exiting.
Elasticsearch log follows below.
[2022-02-18T01:40:07,680][INFO ][o.e.n.Node               ] [elk] version[7.13.2], pid[245], build[default/tar/4d960a0733be83dd2543ca018aa4ddc42e956800/2021-06-10T21:01:55.251515791Z], OS[Linux/4.15.0-163-generic/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/16/16+36]
[2022-02-18T01:40:07,685][INFO ][o.e.n.Node               ] [elk] JVM home [/opt/elasticsearch/jdk], using bundled JDK [true]
[2022-02-18T01:40:07,685][INFO ][o.e.n.Node               ] [elk] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-4274621583916089140, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Xms31744m, -Xmx31744m, -XX:MaxDirectMemorySize=16642998272, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=25, -Des.path.home=/opt/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=true]
...
[2022-02-18T01:40:26,656][ERROR][o.e.b.Bootstrap          ] [elk] node validation exception
[1] bootstrap checks failed. You must address the points described in the following [1] lines before starting Elasticsearch.
bootstrap check failure [1] of [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2022-02-18T01:40:26,658][INFO ][o.e.n.Node               ] [elk] stopping ...
[2022-02-18T01:40:26,677][INFO ][o.e.n.Node               ] [elk] stopped
[2022-02-18T01:40:26,677][INFO ][o.e.n.Node               ] [elk] closing ...
[2022-02-18T01:40:26,690][INFO ][o.e.n.Node               ] [elk] closed
[2022-02-18T01:40:26,692][INFO ][o.e.x.m.p.NativeController] [elk] Native controller process has stopped - no new native processes can be started

Looking this error up, it turns out to be the vm.max_map_count kernel limit: the message says the current value [65530] is too low, and Elasticsearch requires at least [262144]. So vm.max_map_count needs to be raised on the host. Switch to root (su root),
then edit the sysctl.conf file (vim /etc/sysctl.conf) and reload it, as follows:

root@anqi:/home/anqi# vim /etc/sysctl.conf
root@anqi:/home/anqi# sysctl -p
vm.max_map_count = 655360
root@anqi:/home/anqi# 
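For reference, the change itself is a single line appended to /etc/sysctl.conf (655360 matches the `sysctl -p` output above; any value of at least 262144 satisfies the bootstrap check):

```shell
# Line added to /etc/sysctl.conf, then reloaded with: sysctl -p
vm.max_map_count=655360

# Or apply it immediately without editing the file (lost on reboot):
sysctl -w vm.max_map_count=655360
```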

Started it again, and this time it worked. Nice!

 * Starting periodic command scheduler cron                     [ OK ]
 * Starting Elasticsearch Server                                [ OK ]
waiting for Elasticsearch to be up (1/30)
waiting for Elasticsearch to be up (2/30)
waiting for Elasticsearch to be up (3/30)
waiting for Elasticsearch to be up (4/30)
waiting for Elasticsearch to be up (5/30)
waiting for Elasticsearch to be up (6/30)
waiting for Elasticsearch to be up (7/30)
waiting for Elasticsearch to be up (8/30)
waiting for Elasticsearch to be up (9/30)
waiting for Elasticsearch to be up (10/30)
Waiting for Elasticsearch cluster to respond (1/30)
logstash started.
 * Starting Kibana5                                             [ OK ]
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-02-18T01:44:33,596][INFO ][o.e.c.m.metadataIndexTemplateService] [elk] adding component template [synthetics-mappings]
[2022-02-18T01:44:33,730][INFO ][o.e.c.m.metadataIndexTemplateService] [elk] adding index template [.watch-history-13] for index patterns [.watcher-history-13*]
[2022-02-18T01:44:33,811][INFO ][o.e.c.m.metadataIndexTemplateService] [elk] adding index template [ilm-history] for index patterns [ilm-history-5*]
[2022-02-18T01:44:33,902][INFO ][o.e.c.m.metadataIndexTemplateService] [elk] adding index template [.slm-history] for index patterns [.slm-history-5*]
[2022-02-18T01:44:33,979][INFO ][o.e.c.m.metadataIndexTemplateService] [elk] adding template [.monitoring-alerts-7] for index patterns [.monitoring-alerts-7]
[2022-02-18T01:44:34,071][INFO ][o.e.c.m.metadataIndexTemplateService] [elk] adding template [.monitoring-es] for index patterns [.monitoring-es-7-*]
[2022-02-18T01:44:34,153][INFO ][o.e.c.m.metadataIndexTemplateService] [elk] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-7-*]
[2022-02-18T01:44:34,241][INFO ][o.e.c.m.metadataIndexTemplateService] [elk] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-7-*]
[2022-02-18T01:44:34,364][INFO ][o.e.c.m.metadataIndexTemplateService] [elk] adding template [.monitoring-beats] for index patterns [.monitoring-beats-7-*]
[2022-02-18T01:44:34,504][INFO ][o.e.c.m.metadataIndexTemplateService] [elk] adding index template [logs] for index patterns [logs-*-*]
==> /var/log/logstash/logstash-plain.log <==
==> /var/log/kibana/kibana5.log <==
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-02-18T01:44:34,622][INFO ][o.e.c.m.metadataIndexTemplateService] [elk] adding index template [metrics] for index patterns [metrics-*-*]
[2022-02-18T01:44:34,765][INFO ][o.e.c.m.metadataIndexTemplateService] [elk] adding index template [synthetics] for index patterns [synthetics-*-*]
[2022-02-18T01:44:34,849][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [ml-size-based-ilm-policy]
[2022-02-18T01:44:34,947][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [metrics]
[2022-02-18T01:44:35,155][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [logs]
[2022-02-18T01:44:35,227][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [synthetics]
[2022-02-18T01:44:35,328][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [watch-history-ilm-policy]
[2022-02-18T01:44:35,445][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [ilm-history-ilm-policy]
[2022-02-18T01:44:35,513][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [slm-history-ilm-policy]
[2022-02-18T01:44:35,588][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [.fleet-actions-results-ilm-policy]
[2022-02-18T01:44:35,780][INFO ][o.e.l.LicenseService     ] [elk] license [5c5a737e-faad-4bd2-8a48-11e926bd6f6c] mode [basic] - valid
[2022-02-18T01:44:35,781][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [elk] Active license is now [BASIC]; Security is disabled
[2022-02-18T01:44:35,782][WARN ][o.e.x.s.s.SecurityStatusChangeListener] [elk] Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.13/security-minimal-setup.html to enable security.
[2022-02-18T01:44:54,319][INFO ][o.e.c.m.metadataCreateIndexService] [elk] [.kibana_task_manager_7.13.2_001] creating index, cause [api], templates [], shards [1]/[1]
[2022-02-18T01:44:54,365][INFO ][o.e.c.r.a.AllocationService] [elk] updating number_of_replicas to [0] for indices [.kibana_task_manager_7.13.2_001]
[2022-02-18T01:44:54,634][INFO ][o.e.c.m.metadataCreateIndexService] [elk] [.kibana_7.13.2_001] creating index, cause [api], templates [], shards [1]/[1]
[2022-02-18T01:44:54,639][INFO ][o.e.c.r.a.AllocationService] [elk] updating number_of_replicas to [0] for indices [.kibana_7.13.2_001]
[2022-02-18T01:44:54,937][INFO ][o.e.c.r.a.AllocationService] [elk] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_7.13.2_001][0], [.kibana_task_manager_7.13.2_001][0]]]).
[2022-02-18T01:44:59,001][INFO ][o.e.c.m.metadataCreateIndexService] [elk] [.apm-custom-link] creating index, cause [api], templates [], shards [1]/[1]
[2022-02-18T01:44:59,003][INFO ][o.e.c.r.a.AllocationService] [elk] updating number_of_replicas to [0] for indices [.apm-custom-link]
[2022-02-18T01:45:01,160][INFO ][o.e.c.m.metadataCreateIndexService] [elk] [.apm-agent-configuration] creating index, cause [api], templates [], shards [1]/[1]
[2022-02-18T01:45:01,163][INFO ][o.e.c.r.a.AllocationService] [elk] updating number_of_replicas to [0] for indices [.apm-agent-configuration]
[2022-02-18T01:45:01,607][INFO ][o.e.c.m.metadataIndexTemplateService] [elk] adding template [.management-beats] for index patterns [.management-beats]
==> /var/log/logstash/logstash-plain.log <==
[2022-02-18T01:45:03,340][INFO ][logstash.runner          ] Log4j configuration path used is: /opt/logstash/config/log4j2.properties
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-02-18T01:45:03,614][INFO ][o.e.c.m.metadataMappingService] [elk] [.kibana_task_manager_7.13.2_001/B-MmtmUGQpq59-4kAaGCUA] update_mapping [_doc]
[2022-02-18T01:45:03,653][INFO ][o.e.c.m.metadataMappingService] [elk] [.kibana_7.13.2_001/JENLq0O3TBa6LKt60ENWjg] update_mapping [_doc]
[2022-02-18T01:45:04,410][INFO ][o.e.c.r.a.AllocationService] [elk] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.apm-agent-configuration][0]]]).
==> /var/log/logstash/logstash-plain.log <==
[2022-02-18T01:45:03,766][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.13.2", "jruby.version"=>"jruby 9.2.16.0 (2.5.7) 2021-03-03 f82228dc32 OpenJDK 64-Bit Server VM 11.0.11+9 on 11.0.11+9 +indy +jit [linux-x86_64]"}
[2022-02-18T01:45:03,858][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash/data/queue"}
[2022-02-18T01:45:03,987][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/logstash/data/dead_letter_queue"}
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-02-18T01:45:05,617][INFO ][o.e.c.m.metadataMappingService] [elk] [.kibana_7.13.2_001/JENLq0O3TBa6LKt60ENWjg] update_mapping [_doc]
[2022-02-18T01:45:05,635][INFO ][o.e.c.m.metadataMappingService] [elk] [.kibana_7.13.2_001/JENLq0O3TBa6LKt60ENWjg] update_mapping [_doc]
[2022-02-18T01:45:05,965][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [kibana-event-log-policy]
[2022-02-18T01:45:06,257][INFO ][o.e.c.m.metadataIndexTemplateService] [elk] adding template [.kibana-event-log-7.13.2-template] for index patterns [.kibana-event-log-7.13.2-*]
[2022-02-18T01:45:06,413][INFO ][o.e.c.m.metadataCreateIndexService] [elk] [.kibana-event-log-7.13.2-000001] creating index, cause [api], templates [.kibana-event-log-7.13.2-template], shards [1]/[1]
[2022-02-18T01:45:06,415][INFO ][o.e.c.r.a.AllocationService] [elk] updating number_of_replicas to [0] for indices [.kibana-event-log-7.13.2-000001]
==> /var/log/logstash/logstash-plain.log <==
[2022-02-18T01:45:06,442][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"335aed0b-951c-4ec2-a3e3-3d91c9bd7948", :path=>"/opt/logstash/data/uuid"}
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-02-18T01:45:06,852][INFO ][o.e.x.i.IndexLifecycleTransition] [elk] moving index [.kibana-event-log-7.13.2-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [kibana-event-log-policy]
[2022-02-18T01:45:07,078][INFO ][o.e.x.i.IndexLifecycleTransition] [elk] moving index [.kibana-event-log-7.13.2-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [kibana-event-log-policy]
[2022-02-18T01:45:07,495][INFO ][o.e.c.r.a.AllocationService] [elk] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana-event-log-7.13.2-000001][0]]]).
[2022-02-18T01:45:07,754][INFO ][o.e.x.i.IndexLifecycleTransition] [elk] moving index [.kibana-event-log-7.13.2-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [kibana-event-log-policy]
==> /var/log/logstash/logstash-plain.log <==
[2022-02-18T01:45:08,511][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2022-02-18T01:45:09,587][INFO ][org.reflections.Reflections] Reflections took 357 ms to scan 1 urls, producing 24 keys and 48 values 
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-02-18T01:45:09,859][INFO ][o.e.c.m.metadataCreateIndexService] [elk] [.ds-ilm-history-5-2022.02.18-000001] creating index, cause [initialize_data_stream], templates [ilm-history], shards [1]/[0]
[2022-02-18T01:45:09,863][INFO ][o.e.c.m.metadataCreateDataStreamService] [elk] adding data stream [ilm-history-5] with write index [.ds-ilm-history-5-2022.02.18-000001] and backing indices []
[2022-02-18T01:45:09,972][INFO ][o.e.x.i.IndexLifecycleTransition] [elk] moving index [.ds-ilm-history-5-2022.02.18-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ilm-history-ilm-policy]
[2022-02-18T01:45:10,071][INFO ][o.e.x.i.IndexLifecycleTransition] [elk] moving index [.ds-ilm-history-5-2022.02.18-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [ilm-history-ilm-policy]
[2022-02-18T01:45:10,217][INFO ][o.e.c.r.a.AllocationService] [elk] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-ilm-history-5-2022.02.18-000001][0]]]).
[2022-02-18T01:45:10,293][INFO ][o.e.x.i.IndexLifecycleTransition] [elk] moving index [.ds-ilm-history-5-2022.02.18-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [ilm-history-ilm-policy]
==> /var/log/logstash/logstash-plain.log <==
[2022-02-18T01:45:11,123][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[2022-02-18T01:45:11,516][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2022-02-18T01:45:11,692][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2022-02-18T01:45:11,812][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.13.2) {:es_version=>7}
[2022-02-18T01:45:11,814][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2022-02-18T01:45:12,228][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>20, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2500, "pipeline.sources"=>["/etc/logstash/conf.d/02-beats-input.conf", "/etc/logstash/conf.d/10-syslog.conf", "/etc/logstash/conf.d/11-nginx.conf", "/etc/logstash/conf.d/30-output.conf"], :thread=>"#"}
[2022-02-18T01:45:13,653][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.42}
[2022-02-18T01:45:13,693][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2022-02-18T01:45:14,171][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-02-18T01:45:14,259][INFO ][org.logstash.beats.Server][main][3345ebe133ac5535dc322adb01ae100ae72e039cc30f0496e6819a92c88c4745] Starting server on port: 5044
[2022-02-18T01:45:14,269][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
Check the result

Visit http://192.168.110.106:9200/ in the address bar to confirm Elasticsearch is up.

Visit Kibana at http://192.168.110.106:5601/
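The same checks from the command line (the IP is this machine's address; substitute your own):

```shell
curl http://192.168.110.106:9200/                        # Elasticsearch banner JSON
curl http://192.168.110.106:9200/_cluster/health?pretty  # cluster health status
curl -I http://192.168.110.106:5601/                     # Kibana responds once it is up
```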

Now generate a test log entry to see it in action. Enter the container:
docker exec -it elk /bin/bash
Inside the container, run:
/opt/logstash/bin/logstash --path.data /tmp/logstash/data \
    -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'

Note: if you omit the --path.data argument, startup fails with an error saying the default data path is already in use by the instance that is already running; to start a second Logstash instance you must point path.data at a different directory.

After running it, the output looks like this:

root@anqi:/home/anqi# docker exec -it elk /bin/bash
root@1a1adaef56ea:/# /opt/logstash/bin/logstash --path.data /tmp/logstash/data 
>     -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'
Using bundled JDK: /opt/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /opt/logstash/logs which is now configured via log4j2.properties
[2022-02-18T02:01:30,502][INFO ][logstash.runner          ] Log4j configuration path used is: /opt/logstash/config/log4j2.properties
[2022-02-18T02:01:30,510][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.13.2", "jruby.version"=>"jruby 9.2.16.0 (2.5.7) 2021-03-03 f82228dc32 OpenJDK 64-Bit Server VM 11.0.11+9 on 11.0.11+9 +indy +jit [linux-x86_64]"}
[2022-02-18T02:01:30,525][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/tmp/logstash/data/queue"}
[2022-02-18T02:01:30,527][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/tmp/logstash/data/dead_letter_queue"}
[2022-02-18T02:01:30,765][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-02-18T02:01:30,786][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"5d762117-1720-4a13-876e-6416c1f5b240", :path=>"/tmp/logstash/data/uuid"}
[2022-02-18T02:01:31,408][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9601}
[2022-02-18T02:01:31,678][INFO ][org.reflections.Reflections] Reflections took 36 ms to scan 1 urls, producing 24 keys and 48 values 
[2022-02-18T02:01:32,563][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[2022-02-18T02:01:32,840][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2022-02-18T02:01:32,970][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2022-02-18T02:01:33,008][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.13.2) {:es_version=>7}
[2022-02-18T02:01:33,010][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2022-02-18T02:01:33,072][WARN ][logstash.outputs.elasticsearch][main] Configuration is data stream compliant but due backwards compatibility Logstash 7.x will not assume writing to a data-stream, default behavior will change on Logstash 8.0 (set `data_stream => true/false` to disable this warning)
[2022-02-18T02:01:33,073][WARN ][logstash.outputs.elasticsearch][main] Configuration is data stream compliant but due backwards compatibility Logstash 7.x will not assume writing to a data-stream, default behavior will change on Logstash 8.0 (set `data_stream => true/false` to disable this warning)
[2022-02-18T02:01:33,109][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2022-02-18T02:01:33,128][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>20, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2500, "pipeline.sources"=>["config string"], :thread=>"#"}
[2022-02-18T02:01:33,222][INFO ][logstash.outputs.elasticsearch][main] Installing Elasticsearch template {:name=>"logstash"}
[2022-02-18T02:01:33,742][INFO ][logstash.outputs.elasticsearch][main] Created rollover alias {:name=>""}
[2022-02-18T02:01:33,782][INFO ][logstash.outputs.elasticsearch][main] Installing ILM policy {"policy"=>{"phases"=>{"hot"=>{"actions"=>{"rollover"=>{"max_size"=>"50gb", "max_age"=>"30d"}}}}}} {:name=>"logstash-policy"}
[2022-02-18T02:01:33,963][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.83}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.jrubystdinchannel.StdinChannelLibrary$Reader (file:/opt/logstash/vendor/bundle/jruby/2.5.0/gems/jruby-stdin-channel-0.2.0-java/lib/jruby_stdin_channel/jruby_stdin_channel.jar) to field java.io.FilterInputStream.in
WARNING: Please consider reporting this to the maintainers of com.jrubystdinchannel.StdinChannelLibrary$Reader
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[2022-02-18T02:01:34,044][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2022-02-18T02:01:34,088][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
test

The pipeline is now waiting on stdin. Type test (the last line above), then open Kibana and the log entry is there.
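Besides Kibana, you can confirm the entry landed in Elasticsearch directly. This output plugin writes to a logstash-* index by default (the index name is the plugin's default, not something configured above), so a search like this from the host should return the test document:

```shell
# Search the default logstash-* indices for the typed line
curl 'http://localhost:9200/logstash-*/_search?q=message:test&pretty'
```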

Next, configure Logstash to read Redis logs.

In this all-in-one ELK image the Logstash config files live inside the container; I copied them out and mounted the directory back in (the -v option in the docker run command above), since the image reads every config file under
/etc/logstash/conf.d inside the container.
I added two config files, one for input and one for output.
I'm not pasting their contents here;
I'll add them once I've had time to clean them up.
Then just restart the container.
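A minimal hypothetical sketch of what such a pair could look like, written into the conf.d directory mounted earlier; the Redis address, list key, and index name here are assumptions for illustration, not the actual configuration:

```shell
# Hypothetical input: pull log lines from a Redis list
cat > /mnt/d/dockerv/elk/logstash/conf.d/05-redis-input.conf <<'EOF'
input {
  redis {
    host      => "127.0.0.1"   # assumed Redis address
    port      => 6379
    data_type => "list"
    key       => "app-log"     # assumed Redis list key
  }
}
EOF

# Hypothetical output: index the events into Elasticsearch
cat > /mnt/d/dockerv/elk/logstash/conf.d/40-redis-output.conf <<'EOF'
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "redis-log-%{+YYYY.MM.dd}"
  }
}
EOF

docker restart elk
```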

Reference

https://www.jianshu.com/p/3578f570d255

Source: https://www.mshxw.com/it/742187.html