Sorry, I can't comment yet, so I'll post an answer instead. You are missing a `document_type` in your elasticsearch output configuration — otherwise, how would it be derived?
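For example, a minimal sketch of an elasticsearch output with an explicit `document_type` (the type name `mylogs` here is hypothetical — use whatever type your mapping expects):

```
output {
  elasticsearch {
    index         => "logstashjson"
    document_type => "mylogs"   # hypothetical type name; set it explicitly
  }
}
```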
Well, after going through the logstash reference and working closely with @Ascalonian, we came up with the following config:
```
input {
  file {
    # In the input you need to properly configure the multiline codec.
    # You need to match the line that has the timestamp at the start,
    # and then say "everything that is NOT this line should go to the previous line".
    # The pattern may be improved to handle the case where the json array starts at
    # the first char of the line, but it is sufficient for now.
    codec => multiline {
      pattern   => "^\["
      negate    => true
      what      => previous
      max_lines => 2000
    }
    path           => [ "/var/log/logstash.log" ]
    start_position => "beginning"
    sincedb_path   => "/dev/null"
  }
}

filter {
  # Extract the json part of the message string into a separate field
  grok {
    match => { "message" => "^.*?(?<logged_json>{.*)" }
  }

  # Replace newlines in the json string, since the json filter below
  # cannot deal with them. It is also time to delete unwanted fields.
  mutate {
    gsub         => [ 'logged_json', '\n', '' ]
    remove_field => [ "message", "@timestamp", "host", "path", "@version", "tags" ]
  }

  # Parse the json and remove the string field upon success
  json {
    source       => "logged_json"
    remove_field => [ "logged_json" ]
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    cluster => "logstash"
    index   => "logstashjson"
  }
}
```
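To see what the filter chain does to a single event, here is a hedged sketch in Python (not the author's code) that mirrors the grok capture, the newline-stripping `gsub`, and the json parse on a hypothetical multiline-assembled log line:

```python
import json
import re

# Hypothetical event as assembled by the multiline codec: a log prefix
# followed by a json payload that spans two lines.
event = '2016-01-01 12:00:00 INFO emitted: {"user": "bob",\n "action": "login"}'

# grok equivalent: lazily skip the prefix and capture from the first '{'
# onwards into a separate field (DOTALL so '.' also matches the newline).
m = re.search(r'^.*?(?P<logged_json>{.*)', event, re.DOTALL)
logged_json = m.group('logged_json')

# mutate/gsub equivalent: strip the newlines the multiline codec folded in,
# since a strict json parser may choke on them inside the raw string.
logged_json = logged_json.replace('\n', '')

# json filter equivalent: parse the cleaned string into structured fields
parsed = json.loads(logged_json)
print(parsed)
```

If the regex fails to match (no `{` in the line), the grok filter would tag the event with `_grokparsefailure` instead; the sketch above skips that error path for brevity.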


