Installed Logstash with Helm 3 and configured logback, but the logs never come through correctly. How can I fix this?

Logstash configuration:

global:
  storageClass: alibabacloud-cnfs-nas
service:
  type: NodePort
  ports:
    - name: http
      port: 8080
      targetPort: http
      protocol: TCP
    - name: syslog-udp
      port: 1514
      targetPort: syslog-udp
      protocol: UDP
    - name: syslog-tcp
      port: 1514
      targetPort: syslog-tcp
      protocol: TCP
persistence:
  # Cloud disk
  #  storageClass: "alicloud-disk-ssd"
  #  size: 20Gi
  # NAS
  storageClass: alibabacloud-cnfs-nas
  size: 2Gi

input: |-
  udp {
    port => 1514
    codec => json_lines
  }
  tcp {
    port => 1514
    codec => json_lines
  }
  http { port => 8080 }

filter: |-
  json {
    source => "message"
    target => "json"
  }
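
For what it's worth: as I understand the json filter, target => "json" nests the parsed fields under a json field instead of merging them into the event root, so a top-level test like [env] only sees fields that already arrived at the root (e.g. via the json_lines codec). Dropping target merges the parsed fields at the root:

filter: |-
  json {
    source => "message"
  }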

output: |-
  if [env] != "" {
    elasticsearch {
      hosts => ["xxx.xxx.xxx.xxx:xxxx"]
      index => "logs33--success-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["xxx.xxx.xxx.xxx:xxxx"]
      index => "logs-failure-%{+YYYY.MM.dd}"
    }
  }
  stdout { codec => rubydebug }
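
As an aside: comparing a possibly missing field against an empty string is not a real existence check in Logstash conditionals (as far as I can tell, a missing [env] still satisfies [env] != ""), so every event tends to land in the first branch. The idiomatic presence test is if [env]; a sketch with the same placeholder hosts and index names:

output: |-
  if [env] {
    elasticsearch {
      hosts => ["xxx.xxx.xxx.xxx:xxxx"]
      index => "logs33--success-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["xxx.xxx.xxx.xxx:xxxx"]
      index => "logs-failure-%{+YYYY.MM.dd}"
    }
  }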

logback configuration:

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
    
    <springProfile name="dev">
        <!-- Send logs asynchronously -->
        <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
            <appender-ref ref="LOGSTASH"/>
        </appender>

        <!-- Log output level -->
        <root level="INFO">
            <!-- Attach the logstash appender -->
            <appender-ref ref="LOGSTASH"/>
        </root>
        <!-- logstash settings -->
        <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
            <!--            <param name="Encoding" value="UTF-8"/>-->
            <!-- logstash server -->
            <destination>xxx.xxx.xxx.xxx:xxx</destination>
            <!-- encoder is required -->
            <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
                <!-- index name -->
                <customFields>{"env":"dev"}</customFields>
                <providers>
                    <timestamp>
                        <timeZone>UTC</timeZone>
                    </timestamp>
                    <pattern>
                        {
                        "serviceName": "${name}",
                        "level": "%level",
                        "message": "%message",
                        "env": "test",
                        "stack_trace": "%exception{5}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}"
                        }
                    </pattern>
                </providers>
            </encoder>
            <!-- Threshold filter: drops events below the given level. When the event level is equal to or above the threshold the filter returns NEUTRAL; below it the event is rejected. OFF > ERROR > WARN > INFO > DEBUG > TRACE > ALL -->
            <!--        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">-->
            <!--            <level>INFO</level>-->
            <!--        </filter>-->
        </appender>
        <!-- 定义日志输出级别、格式等配置 -->
    </springProfile>
</configuration>
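
For reference, LogstashTcpSocketAppender and LoggingEventCompositeJsonEncoder come from the logstash-logback-encoder library, which has to be on the classpath for this configuration to load. My dependency looks roughly like this (the version is illustrative):

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <!-- illustrative version; use whatever release fits your logback -->
    <version>7.4</version>
</dependency>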

The log printed by Logstash:

[2023-09-22T02:26:50,029][INFO ][logstash.codecs.json     ][main][f3916e23ca79e9308acd3be143501936b256d568e41e841a6fd83f731839d2c0] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
{
         "event" => {
        "original" => ""
    },
          "json" => nil,
       "message" => "",
          "host" => {
        "ip" => "10.0.125.0"
    },
           "url" => {
        "path" => "/bad-request"
    },
          "http" => {
        "version" => "HTTP/1.0",
         "method" => "GET"
    },
    "@timestamp" => 2023-09-22T02:26:50.030993835Z,
      "@version" => "1"
}

Judging from this output, the format is clearly wrong: the message is empty, and the event carries http/url fields with a /bad-request path, so whatever is reaching Logstash is not being parsed as the JSON lines that logback should be sending.

1 Answer

I solved it myself. The Logstash configuration below works:

global:
  storageClass: alibabacloud-cnfs-nas
service:
  type: NodePort
  ports:
    - name: http
      port: 8080
      targetPort: http
      protocol: TCP
    - name: syslog-udp
      port: 1514
      targetPort: syslog-udp
      protocol: UDP
    - name: syslog-tcp
      port: 1514
      targetPort: syslog-tcp
      protocol: TCP
persistence:
  enabled: true
  # NAS
  storageClass: alibabacloud-cnfs-nas
  size: 2Gi
containerPorts:
  - name: http
    containerPort: 8080
    protocol: TCP
  - name: monitoring
    containerPort: 9600
    protocol: TCP
  - name: syslog-udp
    containerPort: 1514
    protocol: UDP
  - name: syslog-tcp
    containerPort: 1514
    protocol: TCP
input: |-
  udp {
   port => 1514
   type => syslog
   codec => json_lines
  }
  tcp {
   port => 1514
   type => syslog
   codec => json_lines
  }
  http { port => 8080 }
output: |-
  if [active] != "" {
    elasticsearch {
      hosts => ["xxx.xxx.xxx.xxx:xxxx"]
      index => "%{active}-logs-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["xxx.xxx.xxx.xxx:xxxx"]
      index => "ignore-logs-%{+YYYY.MM.dd}"
    }
  }
  stdout { }
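
Compared with my original values, the decisive change is the containerPorts section: the Service's targetPort entries refer to port names (http, syslog-udp, syslog-tcp), and as far as I understand Kubernetes, a named targetPort only resolves when the pod declares a container port with that exact name, so without these declarations the TCP traffic had nowhere to go. After redeploying you can double-check the wiring with:

kubectl get svc logstash -nlogstash
kubectl get endpoints logstash -nlogstash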

Here is how I worked through it:
1. At first I assumed the problem was on the Logstash side, but a test message sent with curl went through fine:

➜  ~ kubectl port-forward service/logstash 8080:8080 -nlogstash
➜  ~ curl -X POST -d '{"message": "Hello World","env": "dev"}' http://localhost:8080
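
If the pipeline is healthy, the curl call returns HTTP 200 (as far as I can tell the http input answers with a plain "ok" body by default) and the event shows up in the stdout output, which you can follow with something like:

➜  ~ kubectl logs -f service/logstash -nlogstash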

2. Since Logstash seemed fine, I looked at whether logback was the problem, but no matter how I changed its configuration nothing came through.
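
In hindsight, logback's own status output would have surfaced the failure much earlier: as far as I know, logstash-logback-encoder reports connection errors through logback's status system, which gets printed once debug is switched on:

<!-- in logback.xml: print logback's internal status messages -->
<configuration debug="true">
    ...
</configuration>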

3. So I changed tack: curl could deliver messages but logback could not, so I decided to capture the traffic logback was actually sending. While re-reading the logback configuration I noticed it uses the net.logstash.logback.appender.LogstashTcpSocketAppender class, whereas my curl test had gone over HTTP. From that I inferred that the Logstash TCP port was probably the problem, which took me back to the Logstash configuration. (A quick way to imitate the TCP path by hand is shown below.)
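
To imitate what LogstashTcpSocketAppender actually sends (a raw JSON line over TCP, not an HTTP request), a netcat one-liner is enough; host, port, and payload here are placeholders:

➜  ~ echo '{"message": "Hello World","env": "dev"}' | nc xxx.xxx.xxx.xxx 1514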

4. The problem was only really solved after I changed the Logstash configuration so that the TCP port became reachable. So the root cause was simply that the TCP port was not open. You can test connectivity with:

telnet xxx.xxx.xxx.xxx xxxx
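
If telnet is not installed, netcat gives an equivalent check (flag support varies between netcat flavors):

nc -vz xxx.xxx.xxx.xxx xxxx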

5. To sum up, this came down to my unfamiliarity with Logstash: I did not know that logback ships its logs to Logstash over TCP, and I kept trusting the fact that curl worked. Fortunately the real problem surfaced in the end.
