Contents: Preface · Elasticsearch · Logstash installation · Kibana installation · Setting user passwords · References
Preface

A while back one of our servers was attacked, and we noticed rather late, so I found myself pivoting into ops work. The fix was to add log tracing to the code and use ELK (Elasticsearch + Logstash + Kibana) to collect the Nginx logs for monitoring. I had used ELK at my previous company and found it quite handy, so now I am setting it up myself. Nginx is already deployed on Docker, and the next step is to stand up ELK with Docker as well. Elasticsearch comes up a lot in big-data analytics; Logstash collects the Nginx logs and pushes them to Elasticsearch; Kibana visualizes the data. Let's walk through how to set up all three.
Elasticsearch

Elasticsearch itself is not covered in detail here; there is plenty of material online. Let's focus on how to set it up with Docker. The Docker Hub page is https://hub.docker.com/_/elasticsearch. The steps are as follows:
- Pull the image: docker pull elasticsearch:7.5.1
- Raise the virtual memory map limit: sysctl -w vm.max_map_count=262144
- Edit elasticsearch.yml as follows:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
One thing to watch out for: in each key: value pair there must be a space between the colon and the value!
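That rule can be checked mechanically before starting the container. A small sketch (the file path and the regex are illustrative, not from the original): it writes a sample config with one deliberately broken line and flags any top-level key whose colon is not followed by a space.

```shell
# Write a sample config with one bad line (missing the space after ':')
cat > /tmp/es-sample.yml <<'EOF'
network.host:0.0.0.0
http.cors.enabled: true
EOF
# Flag top-level "key:value" lines with no space after the colon
grep -nE '^[a-z._-]+:[^ ]' /tmp/es-sample.yml
# → 1:network.host:0.0.0.0
```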
- Start the Elasticsearch service with its directories mounted from the host.
# Copy the default config out first (assumption: the elasticsearch.yml mount
# below needs the file to exist on the host; this step is implied earlier)
docker cp elasticsearch:/usr/share/elasticsearch/config /data/elk7/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/data /data/elk7/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/logs /data/elk7/elasticsearch/
chmod -R 777 /data/elk7/elasticsearch/
docker rm -f elasticsearch
docker run -d --name=elasticsearch --restart=always \
  -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -v /data/elk7/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /data/elk7/elasticsearch/data:/usr/share/elasticsearch/data \
  -v /data/elk7/elasticsearch/logs:/usr/share/elasticsearch/logs \
  elasticsearch:7.5.1
Visit port 9200 (e.g. curl http://localhost:9200); if you get the JSON cluster-info response (shown as a screenshot in the original), Elasticsearch is installed successfully.
- Start the elasticsearch-head plugin
docker run -d --name=elasticsearch-head --restart=always -p 9100:9100 docker.io/mobz/elasticsearch-head:5-alpine
Visit port 9100:
Logstash installation

- Start a temporary Logstash container
docker run -d --name=logstash logstash:7.5.1
- Copy out the directories to mount
docker cp logstash:/usr/share/logstash /data/elk7/
mkdir /data/elk7/logstash/config/conf.d
chmod -R 777 /data/elk7/logstash
- Edit the configuration file
vim /data/elk7/logstash/config/logstash.yml
Contents:
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
path.config: /usr/share/logstash/config/conf.d/*.conf
path.logs: /usr/share/logstash/logs
- Create a syslog.conf file to collect /var/log/messages
vim /data/elk7/logstash/config/conf.d/syslog.conf
Contents:
input {
  file {
    # tag for this log stream
    type => "systemlog-localhost"
    # path to collect
    path => "/var/log/messages"
    # where to start reading from
    start_position => "beginning"
    # scan interval; the default is 1s, 5s is recommended
    stat_interval => "5"
  }
}
output {
  elasticsearch {
    hosts => ["<elasticsearch-ip>:9200"]
    index => "logstash-system-localhost-%{+YYYY.MM.dd}"
  }
}
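The `%{+YYYY.MM.dd}` in the index name is a Logstash sprintf date pattern, so the pipeline writes to a fresh index each day (Logstash derives the date from each event's `@timestamp`, in UTC). A quick sketch of the name produced for today, emulating the pattern with `date` rather than running Logstash itself:

```shell
# Emulate Logstash's %{+YYYY.MM.dd} daily index rollover for today's date
index="logstash-system-localhost-$(date +%Y.%m.%d)"
echo "$index"
```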
- Grant read permission on the log file
chmod 644 /var/log/messages
Once that is done, visit port 9200:
Kibana installation

- Pull the image
docker pull kibana:7.5.1
- Edit the configuration file
mkdir -p /data/elk7/kibana/config/
vi /data/elk7/kibana/config/kibana.yml
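The original does not show the contents of kibana.yml. A minimal sketch, assuming Kibana should listen on all interfaces and reach Elasticsearch directly (the host address is a placeholder to replace with your own):

```yaml
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://<elasticsearch-ip>:9200" ]
```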
- Start the container
docker run -d --name=kibana --restart=always -p 5601:5601 -v /data/elk7/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:7.5.1
Visit port 5601:
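With all three services up, the whole stack can also be captured in a single file. A docker-compose sketch using the same images and host paths as above (a hypothetical consolidation, not from the original; note it also gives Logstash the /var/log/messages mount and a long-running container, which the step-by-step walkthrough leaves implicit):

```yaml
version: "3"
services:
  elasticsearch:
    image: elasticsearch:7.5.1
    restart: always
    environment:
      - discovery.type=single-node
    ports: ["9200:9200", "9300:9300"]
    volumes:
      - /data/elk7/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /data/elk7/elasticsearch/data:/usr/share/elasticsearch/data
      - /data/elk7/elasticsearch/logs:/usr/share/elasticsearch/logs
  logstash:
    image: logstash:7.5.1
    restart: always
    depends_on: [elasticsearch]
    volumes:
      - /data/elk7/logstash/config:/usr/share/logstash/config
      - /var/log/messages:/var/log/messages:ro
  kibana:
    image: kibana:7.5.1
    restart: always
    depends_on: [elasticsearch]
    ports: ["5601:5601"]
    volumes:
      - /data/elk7/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
```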
Setting user passwords

- Set the Elasticsearch user passwords. The configuration file becomes:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
After adding these settings, enter the container and run elasticsearch-setup-passwords interactive from the bin directory (e.g. docker exec -it elasticsearch bin/elasticsearch-setup-passwords interactive). It prompts for passwords for the built-in users:
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
Restart the Elasticsearch container after the passwords are set.
- Set the Kibana credentials. Add the following to its configuration file:
elasticsearch.username: "elastic"
elasticsearch.password: "********"
i18n.locale: "zh-CN"
Restart the container when done.
- Configure the Logstash credentials in its configuration file. (If the pipeline's elasticsearch output block also needs authentication, it likewise accepts user and password options.)
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "123123"

References
- Analyzing Nginx logs with ELK and visualizing the results
- Installing Elasticsearch + Kibana with Docker, with password configuration



