Notes on Common ELK Configurations

1. Filebeat: collecting logs from multiple directories

My configuration for collecting logs from multiple directories:

- type: log

  enabled: true
  
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/nginx-1.14.0/logs/stars/star.access.log  # path of the log file to read
  tags: ["nginx-access"]    # use tags to tell the log sources apart

- type: log

  enabled: true
  
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /srv/www/runtime/logs/app.log
  tags: ["app"]    # use tags to tell the log sources apart
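
For context, these inputs live under `filebeat.inputs` in filebeat.yml. A minimal sketch of the full file follows; the `output.logstash` endpoint is an assumption chosen to match the Logstash `beats` input on port 5401 shown in section 3 (the host is a placeholder, as elsewhere in this post):

```yaml
# filebeat.yml (sketch; output host is a placeholder)
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /opt/nginx-1.14.0/logs/stars/star.access.log
    tags: ["nginx-access"]

  - type: log
    enabled: true
    paths:
      - /srv/www/runtime/logs/app.log
    tags: ["app"]

# Ship events to Logstash rather than directly to Elasticsearch
output.logstash:
  hosts: ["xxx.xxx.xxx.xx:5401"]
```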
    

2. Filebeat: merging multi-line log entries into one event

When collecting logs from an application like mine, a single entry often spans multiple lines (e.g. a stack trace), so multi-line matching has to be configured. Filebeat provides multiline options for merging such lines into a single event.

The three main multiline parameters are:

multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
multiline.negate: true
multiline.match: after

// The configuration above means: any line that does not start with a timestamp is appended to the end of the previous line. (The regex is rough; it gets the job done.)
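
The combined effect of these three settings can be sketched in plain Python (a rough simulation with made-up sample lines; the real Filebeat implementation has more options, such as timeouts and line limits):

```python
import re

# Same regex as multiline.pattern above: an event starts with a date
pattern = re.compile(r'^[0-9]{4}-[0-9]{2}-[0-9]{2}')

def merge_multiline(lines):
    """Mimic multiline.negate: true + multiline.match: after:
    every line NOT matching the pattern is appended to the
    previous event instead of starting a new one."""
    events = []
    for line in lines:
        if pattern.match(line) or not events:
            events.append(line)           # line starts a new event
        else:
            events[-1] += "\n" + line     # continuation line
    return events

lines = [
    "2020-06-23 19:00:01 [error] request failed",
    "Traceback (most recent call last):",
    '  File "app.py", line 10, in handler',
    "2020-06-23 19:00:02 [info] next request",
]
events = merge_multiline(lines)
print(len(events))  # 2 merged events
```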

So, putting it together, the final configuration that collects logs from different directories and also merges multi-line entries is:

- type: log

  enabled: true
  
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/nginx-1.14.0/logs/stars/star.access.log
  tags: ["nginx-access"]

- type: log

  enabled: true
  
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /srv/www/runtime/logs/app.log
  tags: ["app"] 
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

The key point is that the tags field lets the downstream stages tell the different log sources apart.

Restart the Filebeat service: systemctl restart filebeat
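
Before restarting, the configuration can be verified with Filebeat's built-in checks (the config path below is an assumption; adjust to your install):

```shell
# Validate the filebeat.yml syntax
filebeat test config -c /etc/filebeat/filebeat.yml

# Verify that Filebeat can reach its configured output
filebeat test output -c /etc/filebeat/filebeat.yml
```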


3. Logstash: grok match, output, and conditional routing

The Logstash pipeline configuration applies a different filter to each kind of log:

# Logstash routes events based on their tags
input {
        beats {
                host => '0.0.0.0'
                port => 5401 
        }
}

filter {
  if "nginx-access" in [tags]{   # handle nginx access logs
    grok {  # the grok plugin parses the log line with regex-based patterns
          match => { "message" => "%{IPORHOST:remote_ip} - %{IPORHOST:host} - \[%{HTTPDATE:access_time}\] \"%{WORD:http_method} %{DATA:url} HTTP/%{NUMBER:http_version}\" - %{DATA:request_body} - %{INT:http_status} %{INT:body_bytes_sent} \"%{DATA:refer}\" \"%{DATA:user_agent}\" \"%{DATA:x_forwarded_for}\" \"%{DATA:upstream_addr}\" \"response_location:%{DATA:response_location}\"" }
       }
  }
  else if "app" in [tags]{ 
    grok {
      match => {
        "message" => "%{DATESTAMP:log_time} \[%{IP:remote_ip}\]\[%{INT:uid}\]\[%{DATA:session_id}\]\[%{WORD:log_level}\]\[%{DATA:category}\] %{GREEDYDATA:message_text}"
      }
    }
  }
}

output{
  if "nginx-access" in [tags]{
    elasticsearch {
      hosts => ["http://xxx.xxx.xxx.xx:9200"]
      index => "star_nginx_access_index_pattern-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "!@#j3C"
    }
  }
  else if "app" in [tags]{
    elasticsearch {
      hosts => ["http://xxx.xxx.xxx.xx:9200"]
      index => "star_app_index_pattern-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "!@#j3C"
    }
  }
}

The bulk of the work is getting the grok patterns right, so that each log line is parsed into structured fields that can then be explored in Kibana.
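
As a rough illustration of what the app-log grok pattern does, here is an equivalent Python regex with named groups. The sample line and the simplified `DATESTAMP` approximation are assumptions for illustration only:

```python
import re

# Simplified re equivalent of the app-log grok pattern above
# (DATESTAMP is approximated as YYYY-MM-DD HH:MM:SS)
pattern = re.compile(
    r'(?P<log_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) '
    r'\[(?P<remote_ip>\d{1,3}(?:\.\d{1,3}){3})\]'
    r'\[(?P<uid>\d+)\]'
    r'\[(?P<session_id>[^\]]*)\]'
    r'\[(?P<log_level>\w+)\]'
    r'\[(?P<category>[^\]]*)\] '
    r'(?P<message_text>.*)'
)

# A made-up sample line in the app-log format
sample = r'2020-06-23 19:00:01 [10.0.0.1][42][abc123][error][yii\db\Command] SQL error'
m = pattern.match(sample)
print(m.group('uid'), m.group('log_level'))  # 42 error
```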

Restart Logstash: systemctl restart logstash

4. Checking whether a grok pattern is correct

Validate grok patterns with the online Grok debugger: http://grokdebug.herokuapp.com/ (Kibana also ships a built-in Grok Debugger under Dev Tools).


5. Kibana: creating the index patterns

(Screenshots omitted.) In Kibana, go to Management → Index Patterns and create patterns that match the indices written by Logstash, e.g. star_nginx_access_index_pattern-* and star_app_index_pattern-*.

6. Notes

  1. Back up the original configuration before changing it.
  2. Remember to restart the service after changing the configuration.
  3. When Filebeat collects multi-line log entries, they can be merged into a single event.

繁星落眼眶