
In the previous article, we made a theoretical analysis and summary of the design of a distributed log storage scheme; see that article for the details. In this article, we take one of those schemes, the MongoDB architectural pattern, and demonstrate it with actual code; the other solution will be shared in the next article. When the article was published on Zhihu, someone mentioned using OpenTelemetry + TSDB; if you are interested, you can look into that as well.

architectural pattern

Based on the analysis in the previous article, we arrived at roughly the following architecture design. The architecture diagram is as follows:

  1. Service A, Service B, Service C and Service D represent our actual interface services. When a client sends a request, these modules handle it directly, and the system logs are generated here as well.
  2. The MQ service acts as a log queue that temporarily stores log messages. This improves log processing throughput: in a high-concurrency business scenario, writing logs to MongoDB in real time would inevitably slow down business processing.
  3. The MongoDB service is where the logs finally land. In other words, the logs are stored on disk for persistence so that data is not lost.
  4. To view the system logs, we could log in to the MongoDB service and run queries directly. In practice, for reasons of efficiency and security, a management interface is usually provided to view the MongoDB logs in real time. That is the role of our web presentation interface: logs can be queried, filtered, and deleted through it.

The above is a rough flow of the architecture. The specific code is demonstrated below; if you want to view the full code, you can get it from the GitHub repository.

code demo

The code needs to work with the RabbitMQ service, the MongoDB service, the API business logic, and a few other pieces. I have designed the calling logic with the following structure.

main.go (entry file) -> api (business processing) -> rabbitmq (log producer, consumer) -> MongoDB (log persistence).
The code structure is organized as follows:
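
The layout below is an assumption reconstructed from the import paths used in the snippets that follow (gologs/api, gologs/rabbit, gologs/mongo, gologs/com); the actual repository may differ slightly:

 gologs/
 ├── main.go             // gin entry file (HTTP service)
 ├── rabbit_consumer.go  // log consumer entry
 ├── api/                // business logic such as the order API
 ├── rabbit/             // RabbitMQ producer and consumer
 ├── mongo/              // MongoDB persistence
 └── com/                // shared helpers such as FailOnError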

code description

The following lists the technology stack used and the corresponding versions. When running this code, pay attention to version compatibility between these services to prevent the code from failing to run. A possible go.mod is sketched after this list.

  1. Go version 1.16.
  2. RabbitMQ version 3.10.0.
  3. MongoDB version v5.0.7.
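
The import paths such as gologs/api imply a Go module named gologs. A go.mod roughly like the one below should work; the dependency versions shown here are only illustrative, and the repository may pin different ones:

 module gologs

go 1.16

require (
    github.com/gin-gonic/gin v1.7.7
    github.com/streadway/amqp v1.0.0
    go.mongodb.org/mongo-driver v1.9.0
)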

The following is a brief description of a few of the more important code segments; the complete code can be viewed directly in the GitHub repository.

entry file

 package main

import (
    "fmt"
    "net/http"

    "github.com/gin-gonic/gin"

    "gologs/api"
)

func main() {
    r := gin.Default()

    // Define the order API route and return the corresponding response
    r.GET("/order", func(ctx *gin.Context) {
        orderApi, err := api.OrderApi()
        if err != nil {
            ctx.JSON(http.StatusInternalServerError, gin.H{
                "code": 0,
                "msg":  orderApi,
                "data": map[string]interface{}{},
            })
            return
        }
        ctx.JSON(http.StatusOK, gin.H{
            "code": 1,
            "msg":  orderApi,
            "data": map[string]interface{}{},
        })
    })
    // Specify the service address and port
    err := r.Run(":8081")
    if err != nil {
        fmt.Println("gin server fail, fail reason is ", err)
    }
}

Order business logic

 package api

import (
    "time"

    "gologs/rabbit"
)

// Handle the order business logic and deliver the order log via the rabbit service
func OrderApi() (string, error) {
    orderMsg := make(map[string]interface{})
    orderMsg["time"] = time.Now()
    orderMsg["type"] = "order"
    err := rabbit.SendMessage(orderMsg)
    if err != nil {
        return "write rabbitmq log fail", err
    }
    return "", nil
}

Sending logs to RabbitMQ

 package rabbit

import (
    "encoding/json"

    "github.com/streadway/amqp"

    "gologs/com"
)

// SendMessage publishes a log message to the logs queue in RabbitMQ.
func SendMessage(msg map[string]interface{}) error {
    channel := Connection()
    declare, err := channel.QueueDeclare("logs", false, false, false, false, nil)
    if err != nil {
        com.FailOnError(err, "RabbitMQ declare queue fail!")
        return err
    }

    marshal, err := json.Marshal(msg)
    if err != nil {
        return err
    }
    err = channel.Publish(
        "",
        declare.Name,
        false,
        false,
        amqp.Publishing{
            ContentType:  "text/plain", // message type
            Body:         marshal,      // message body
            DeliveryMode: amqp.Persistent,
        })
    if err != nil {
        com.FailOnError(err, "rabbitmq send message fail!")
        return err
    }
    return nil
}
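
SendMessage (and the consumer below) rely on a Connection() helper that is not listed in this article. A minimal sketch, assuming a local RabbitMQ instance with the default guest account, could look like this; the real code may reuse a single connection and handle failures differently:

 package rabbit

import "github.com/streadway/amqp"

// Connection dials RabbitMQ and returns a channel used for declaring queues,
// publishing and consuming. A production version would reuse one connection
// instead of dialing on every call.
func Connection() *amqp.Channel {
    conn, err := amqp.Dial("amqp://guest:guest@127.0.0.1:5672/")
    if err != nil {
        panic(err)
    }
    channel, err := conn.Channel()
    if err != nil {
        panic(err)
    }
    return channel
}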

Consuming messages

 package rabbit

import (
    "encoding/json"
    "fmt"
    "time"

    "gologs/com"
    "gologs/mongo"
)

// ConsumerMessage consumes log messages from the logs queue and persists them to MongoDB.
func ConsumerMessage() {
    channel := Connection()

    declare, err := channel.QueueDeclare("logs", false, false, false, false, nil)
    if err != nil {
        com.FailOnError(err, "queue declare fail")
    }

    consume, err := channel.Consume(
        declare.Name,
        "",
        true,
        false,
        false,
        false,
        nil,
    )
    if err != nil {
        com.FailOnError(err, "message consumer failt")
    }

    for d := range consume {
        msg := make(map[string]interface{})
        if err := json.Unmarshal(d.Body, &msg); err != nil {
            com.FailOnError(err, "json parse error")
            continue
        }
        fmt.Println(msg)
        one, err := mongo.InsertOne(msg["type"].(string), msg)
        if err != nil {
            com.FailOnError(err, "mongodb insert fail")
        }
        fmt.Println(one)
        // slow down consumption (10s per message) so queued messages can be observed in RabbitMQ
        time.Sleep(time.Second * 10)
    }
}
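
The com.FailOnError helper used throughout is also not listed. Since the callers in SendMessage still return the error after calling it, it presumably only records the error; a minimal sketch under that assumption:

 package com

import "log"

// FailOnError logs the error together with a short description and does
// nothing when err is nil.
func FailOnError(err error, msg string) {
    if err != nil {
        log.Printf("%s: %s", msg, err)
    }
}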

Persisting logs to MongoDB

 package mongo

import (
    "context"
    "errors"

    "gologs/com"
)

// InsertOne writes a single log document into the collection named after the log type.
func InsertOne(collectionName string, logs map[string]interface{}) (interface{}, error) {
    collection := Connection().Database("logs").Collection(collectionName)
    one, err := collection.InsertOne(context.TODO(), logs)

    if err != nil {
        com.FailOnError(err, "write mongodb log fail")
        return "", errors.New(err.Error())
    }

    return one.InsertedID, nil
}
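
Here, Connection() is again a helper that is not listed; it has to return a client from the official Go driver. A minimal sketch, assuming a local MongoDB instance without authentication (the driver package is aliased because this local package is also named mongo):

 package mongo

import (
    "context"

    mongodrv "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

// Connection dials a local MongoDB instance and returns the client.
// A production version would create the client once and reuse it.
func Connection() *mongodrv.Client {
    client, err := mongodrv.Connect(context.TODO(), options.Client().ApplyURI("mongodb://127.0.0.1:27017"))
    if err != nil {
        panic(err)
    }
    return client
}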

Actual demonstration

The code logic has been outlined above; next, let's see the code running in practice.

start the service

To start the service, enter the logs directory; main.go is the actual entry file.
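
Assuming the entry file is started directly with go run, the command is:

 go run main.go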

Start the log consumer

Start the log consumer so that as soon as a log message arrives, the consumer can store it in MongoDB in real time. As before, this command needs to be executed in the logs directory.

 go run rabbit_consumer.go
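
rabbit_consumer.go itself is not listed above; a minimal version, assuming it only starts the blocking consumer loop, would be:

 package main

import "gologs/rabbit"

// Start the blocking consumer loop that writes queued logs to MongoDB.
func main() {
    rabbit.ConsumerMessage()
}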

Call API service

For the demonstration, we access the order interface directly in the browser at http://127.0.0.1:8081/order . The interface returns the following information:
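
Judging from the handler in main.go, a successful response should look roughly like this (msg is empty because OrderApi returns an empty string on success):

 {"code":1,"msg":"","data":{}}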

If code is 1, the call succeeded; otherwise it failed, which callers need to watch out for.

You can call the interface several times and then check the queue information in RabbitMQ. Since the consumer consumes slowly, you should see something like the following:

consumer monitoring

Since we started the consumer separately, it keeps running in the background under normal circumstances. We can see roughly what it has consumed, as shown below:

Viewing data in MongoDB

The RabbitMQ consumer stores the log information in MongoDB, so we can then query it directly in MongoDB:

 db.order.find();
[
  {
    "_id": {"$oid": "627675df5f796f95ddb9bbf4"},
    "time": "2022-05-07T21:36:02.374928+08:00",
    "type": "order"
  },
  {
    "_id": {"$oid": "627675e95f796f95ddb9bbf6"},
    "time": "2022-05-07T21:36:02.576065+08:00",
    "type": "order"
  }
  ................
]

Conclusion

This concludes the overall demonstration of this architecture. Of course, many details still need to be improved; this article mainly shares the general process. In the next article, we will share how to use the ELK environment on Linux so that actual code demonstrations can follow.

