
This is a series of articles explaining the complete practice of microservices: from requirements to going live, from code to k8s deployment, from logging to monitoring, and more.

The whole project is built as microservices developed with go-zero, and it basically consists of go-zero plus middleware developed by the go-zero authors. The technology stack is essentially the components developed by the go-zero team themselves, in other words the whole go-zero ecosystem.

Actual project address: https://github.com/Mikaelemmmm/go-zero-looklook

Preamble

Before the introduction, let me explain the overall idea. If your business log volume is not particularly large and you are on a cloud provider, you can simply use the cloud provider's logging service. For example, with Alibaba Cloud's SLS it basically takes a few clicks of configuration to collect your logs into SLS and view them there directly, with no extra effort.

If your log volume is relatively large, you can build a log system as described below.

1. Log system

After the business logs are printed to the console or to files, the common approaches on the market follow the same basic idea as ELK and EFK. Taking ELK as an example: logstash collects and filters the logs into elasticsearch, and kibana presents them.

However, logstash itself is developed in Java and its resource consumption is very high. We use Go for the business precisely because it is fast and consumes few resources, so running logstash here would waste resources. Instead we use go-stash as a replacement for logstash. go-stash is developed by the go-zero team and has been running in production for a long time, but it does not collect logs; it is only responsible for filtering the collected data.

go-stash: https://github.com/kevwan/go-stash

2. Architecture scheme

filebeat collects our business logs and outputs them to kafka, which acts as a buffer; go-stash consumes the logs from kafka, filters fields according to its configuration, and outputs the filtered data to elasticsearch; finally kibana is responsible for presenting the logs.
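
In short, the data flow is:

 business logs -> filebeat -> kafka -> go-stash -> elasticsearch -> kibana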

3. Implementation plan

In the previous section on error handling, we saw that the error logs we want are already printed to the console, so now we only need to collect them.

3.1 kafka

 #message queue
kafka:
  image: wurstmeister/kafka
  container_name: kafka
  ports:
    - 9092:9092
  environment:
    KAFKA_ADVERTISED_HOST_NAME: kafka
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    TZ: Asia/Shanghai
  restart: always
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  networks:
    - looklook_net
  depends_on:
    - zookeeper

Configure kafka and zookeeper first
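
The kafka service above depends on zookeeper, so docker-compose-env.yml also needs a zookeeper service. Below is a minimal sketch, assuming the wurstmeister/zookeeper image; the actual service definition in the repository may differ:

 #zookeeper
zookeeper:
  image: wurstmeister/zookeeper
  container_name: zookeeper
  ports:
    - 2181:2181
  environment:
    TZ: Asia/Shanghai
  restart: always
  networks:
    - looklook_net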

Then we enter the kafka container and create the topic that filebeat will write the collected logs to.

Enter the kafka container

 $  docker exec -it kafka /bin/sh

Modify the kafka listening configuration (or you can mount the configuration file to the physical machine and modify it)

 $ vi /opt/kafka/config/server.properties
listeners=PLAINTEXT://kafka:9092 # in the original file this is listeners=PLAINTEXT://:9092, add "kafka"
advertised.listeners=PLAINTEXT://kafka:9092 # in the original file this is advertised.listeners=PLAINTEXT://:9092, add "kafka"

create topic

 $  cd /opt/kafka/bin
$ ./kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic looklook-log
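
Optionally, you can confirm that the topic was created by listing the topics from the same bin directory:

 $ ./kafka-topics.sh --list --zookeeper zookeeper:2181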

3.2 filebeat

In the docker-compose-env.yml file in the project root directory, you can see that we have configured filebeat
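
A minimal sketch of what that filebeat service might look like is shown below; the image version and mount paths here are assumptions and the actual docker-compose-env.yml may differ. The key points are mounting the filebeat.yml configuration and the host's docker container log directory:

 #log collection
filebeat:
  image: elastic/filebeat:7.13.4    # assumed version
  container_name: filebeat
  user: root
  restart: always
  volumes:
    - ./deploy/filebeat/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml
    - /var/lib/docker/containers:/var/lib/docker/containers:ro
    - /var/run/docker.sock:/var/run/docker.sock:ro
  networks:
    - looklook_net
  depends_on:
    - kafka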

We mount the configuration of filebeat to deploy/filebeat/conf/filebeat.yml

 filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/lib/docker/containers/*/*-json.log

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

processors:
  - add_cloud_metadata: ~
  - add_docker_metadata: ~

output.kafka:
  enabled: true
  hosts: ["kafka:9092"]
  # the topic must be created in advance
  topic: "looklook-log"
  partition.hash:
    reachable_only: true
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1

The configuration is relatively simple. As you can see, we collect all container logs and output them directly to the kafka we configured; the topic is the one created in kafka in the previous step.
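
To quickly verify that filebeat is actually delivering logs to kafka, you can consume the topic from inside the kafka container (just a sanity check, not part of the setup):

 $ docker exec -it kafka /bin/sh
$ cd /opt/kafka/bin
$ ./kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic looklook-log --from-beginning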

3.3 Configuring go-stash

Let's take a look at the go-stash configuration file deploy/go-stash/etc/config.yaml

 Clusters:
  - Input:
      Kafka:
        Name: gostash
        Brokers:
          - "kafka:9092"
        Topics:
          - looklook-log
        Group: pro
        Consumers: 16
    Filters:
      - Action: drop
        Conditions:
          - Key: k8s_container_name
            Value: "-rpc"
            Type: contains
          - Key: level
            Value: info
            Type: match
            Op: and
      - Action: remove_field
        Fields:
          # - message
          - _source
          - _type
          - _score
          - _id
          - "@version"
          - topic
          - index
          - beat
          - docker_container
          - offset
          - prospector
          - source
          - stream
          - "@metadata"
      - Action: transfer
        Field: message
        Target: data
    Output:
      ElasticSearch:
        Hosts:
          - "http://elasticsearch:9200"
        Index: "looklook-{{yyyy-MM-dd}}"

This configures consuming from kafka and outputting to elasticsearch, as well as the fields to filter and remove.
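
For reference, the corresponding go-stash service in docker-compose-env.yml could look roughly like the sketch below; the image name and config mount path are assumptions and may differ from the actual repository:

 #filter logs from kafka into elasticsearch
go-stash:
  image: kevinwan/go-stash            # assumed image name
  container_name: go-stash
  environment:
    TZ: Asia/Shanghai
  restart: always
  volumes:
    - ./deploy/go-stash/etc:/app/etc  # assumed config mount path
  networks:
    - looklook_net
  depends_on:
    - kafka
    - elasticsearch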

3.4 elasticsearch, kibana

Visit kibana http://127.0.0.1:5601/ to create a log index

Click the menu in the upper left corner (the three horizontal lines), find Analytics -> click Discover

Then, on the current page: Create index pattern -> enter looklook-* -> Next step -> select @timestamp -> Create index pattern

Then click the menu in the upper left corner again, find Analytics -> click Discover, and after a short wait the logs should be displayed (if nothing shows up, check filebeat and go-stash, for example with docker logs -f filebeat)
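
You can also check directly whether the daily index has been created in elasticsearch, assuming elasticsearch is exposed on port 9200 of the host:

 $ curl "http://127.0.0.1:9200/_cat/indices?v"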

Let's try adding an error log to the code, as follows

 func (l *BusinessListLogic) BusinessList(req types.BusinessListReq) (*types.BusinessListResp, error) {

    logx.Error("test log")

    ...
}

We access this business method, then go to kibana and search data.log : "test"; the error log we just added shows up in the results.

4. End

At this point, log collection is complete. Next, we need to implement distributed tracing.

project address

https://github.com/zeromicro/go-zero

You are welcome to use go-zero and star it to support us!

WeChat exchange group

Follow the official account "Microservice Practice" and click "exchange group" to get the QR code of the community group.

