
Logstash: parsing complex multi-line JSON from a log file into Elasticsearch



Sorry, I can't comment yet, so I'm posting this as an answer. You are missing a document_type in the elasticsearch configuration; how else would it be derived?
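As a sketch of what that suggestion means: in the Logstash 2.x-era syntax used later in this page (note the `cluster` option), a document_type can be set explicitly on the elasticsearch output. The type name "mylogs" here is hypothetical, not from the original question:

```
output {
    elasticsearch {
        cluster => "logstash"
        index => "logstashjson"
        # hypothetical type name; without it the type must be derived elsewhere
        document_type => "mylogs"
    }
}
```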


OK, after going through the Logstash reference and working closely with @Ascalonian, we came up with the following configuration:

input {
    file {
        # In the input you need to properly configure the multiline codec.
        # You need to match the line that has the timestamp at the start,
        # and then say "everything that is NOT this line should go to the previous line".
        # The pattern may be improved to handle the case when a JSON array starts at the
        # first char of the line, but it is sufficient for now.
        codec => multiline {
            pattern => "^\["
            negate => true
            what => previous
            max_lines => 2000
        }
        path => [ "/var/log/logstash.log" ]
        start_position => "beginning"
        sincedb_path => "/dev/null"
    }
}

filter {
    # Extract the JSON part of the message string into a separate field.
    grok {
        match => { "message" => "^.*?(?<logged_json>{.*)" }
    }

    # Replace newlines in the JSON string, since the json filter below
    # cannot deal with them. It is also a good time to delete unwanted fields.
    mutate {
        gsub => [ "logged_json", "\n", "" ]
        remove_field => [ "message", "@timestamp", "host", "path", "@version", "tags" ]
    }

    # Parse the JSON and remove the string field upon success.
    json {
        source => "logged_json"
        remove_field => [ "logged_json" ]
    }
}

output {
    stdout {
        codec => rubydebug
    }
    elasticsearch {
        cluster => "logstash"
        index => "logstashjson"
    }
}
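The filter chain above can be illustrated outside Logstash. This minimal Python sketch mimics the three filter steps on a hypothetical joined multi-line message (the log format is an assumption for illustration): a grok-like regex capture from the first `{` onward, a gsub-like newline strip, and a final JSON parse.

```python
import json
import re

# Hypothetical event after the multiline codec has joined the lines:
# a timestamped prefix followed by a pretty-printed JSON payload.
message = (
    '[2016-01-01 12:00:00] request payload: {"user": "alice",\n'
    '  "action": "login",\n'
    '  "success": true}'
)

# Step 1: grok-equivalent — same pattern as the config, capturing
# everything from the first '{' to the end of the message.
m = re.search(r'^.*?(?P<logged_json>{.*)', message, re.DOTALL)
logged_json = m.group('logged_json')

# Step 2: mutate/gsub-equivalent — drop newlines so the payload
# becomes a single-line JSON string.
logged_json = logged_json.replace('\n', '')

# Step 3: json-filter-equivalent — parse into structured fields.
event = json.loads(logged_json)
print(event)
```

This is why the `gsub` step matters: without removing the embedded newlines, the captured string still parses as JSON here, but Logstash's json filter in the poster's setup choked on them, so the config strips them first.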


Reproduced from www.mshxw.com; original page: https://www.mshxw.com/it/413054.html