
This series explains the complete practice of building microservices, from requirements to production, from code to k8s deployment, from logging to monitoring, and so on.

The whole project is built as microservices developed with go-zero, and it covers go-zero itself as well as middleware developed by the go-zero authors. The technology stack consists mostly of the go-zero team's own components, essentially the whole go-zero ecosystem.

Actual project address: https://github.com/Mikaelemmmm/go-zero-looklook

1. Overview

There are many kinds of message queues, such as rabbitmq, rocketmq, kafka, etc. Among them, go-queue ( https://github.com/zeromicro/go-queue ) is the message queue component officially developed by go-zero. It comes in two flavors: kq, a message queue based on kafka, and dq, a delay queue based on beanstalkd. However, go-queue does not support scheduled tasks. If you want to know more about go-queue, I have written a tutorial before, and you can go and see it here.

This project uses go-queue as the message queue, and asynq as the delay queue and scheduled-task queue.

Several reasons for using asynq:

  • It is based directly on redis. Most projects already have redis, and since asynq itself only needs redis, we can maintain one less piece of middleware.
  • It supports message queues, delay queues, and scheduled task scheduling. Since we want the project to support scheduled tasks, asynq supports them directly.
  • It has a webui where each task can be paused, archived, and inspected, with success, failure, and monitoring views.

Since asynq also supports message queues, why still use go-queue?

  • Kafka is famous for its throughput. If the volume is small in the early stage, asynq can be used directly for messages as well.
  • No particular reason, I just want to show you go-queue.

When we use go-zero, goctl brings us a lot of convenience, but at present goctl only generates api and rpc services. Many students in the group ask how to build scheduled tasks, delay queues, and message queues, and what the directory structure should look like. In fact, go-zero has already designed this for us: it is serviceGroup. Use serviceGroup to manage your services.

2. How to use

We have already used queues in previous chapters, in scenarios such as orders and messages; here we explain it separately.

We still take order-mq as an example. Obviously, using goctl to generate api and rpc services is not what we want here, so we use serviceGroup to build it ourselves. The directory structure is basically the same as that of an api service, except that handler is renamed to listen and logic is replaced with mqs.

2.1 The code in main is as follows

 package main

import (
    "flag"

    "looklook/app/order/cmd/mq/internal/config"
    "looklook/app/order/cmd/mq/internal/listen"

    "github.com/zeromicro/go-zero/core/conf"
    "github.com/zeromicro/go-zero/core/service"
)

var configFile = flag.String("f", "etc/order.yaml", "Specify the config file")

func main() {
    flag.Parse()
    var c config.Config

    conf.MustLoad(*configFile, &c)
    // log, prometheus, trace, metricsUrl
    if err := c.SetUp(); err != nil {
        panic(err)
    }

    serviceGroup := service.NewServiceGroup()
    defer serviceGroup.Stop()

    for _, mq := range listen.Mqs(c) {
        serviceGroup.Add(mq)
    }

    serviceGroup.Start()
}
  • First, we define the configuration and parse it.
  • Second, why do we need to call SetUp here when api and rpc services do not? Because api and rpc wire this up inside MustNewServer, while here we manage the services with serviceGroup ourselves. You can click into SetUp and take a look: this one method sets up logging, prometheus, tracing, and metricsUrl, which saves a lot of work, and lets us turn on logging, monitoring, and link tracing just by editing the configuration file.
  • Next is go-zero's serviceGroup, which manages a group of services. A service is actually just an interface. The code is as follows

    Service (code in go-zero/core/service/servicegroup.go)

     // Service is the interface that groups Start and Stop methods.
    Service interface {
        Starter // Start
        Stopper // Stop
    }

    So as long as your service implements the Starter and Stopper interfaces, it can join the serviceGroup for unified management.

    Then you can see that we implement this interface for all of our mqs, put them all into listen.Mqs, and start the services in main.
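To make this concrete, here is a minimal runnable sketch of a custom service managed by serviceGroup. MyMq is a made-up placeholder, not part of go-zero-looklook; it only shows that implementing Start and Stop is all serviceGroup needs:

 package main

import (
    "fmt"

    "github.com/zeromicro/go-zero/core/service"
)

// MyMq is a hypothetical consumer; anything with Start/Stop qualifies.
type MyMq struct{}

// Start should block and do the actual consuming.
func (m *MyMq) Start() { fmt.Println("MyMq consuming...") }

// Stop releases resources when the group shuts down.
func (m *MyMq) Stop() { fmt.Println("MyMq stopped") }

func main() {
    group := service.NewServiceGroup()
    defer group.Stop()

    group.Add(&MyMq{})
    group.Start() // blocks until the group is stopped
}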

2.2 MQ classification management

Code in go-zero-looklook/app/order/cmd/mq/internal/listen directory

The code in this directory manages the different kinds of mq in a unified way. Because we may have to manage kq, asynq, and possibly rabbitmq, rocketmq, etc., they are classified here to make maintenance easier.

Unified management happens in go-zero-looklook/app/order/cmd/mq/internal/listen/listen.go; main then calls listen.Mqs to get all mqs and start them together.

 // Mqs returns all consumers
func Mqs(c config.Config) []service.Service {
    svcContext := svc.NewServiceContext(c)
    ctx := context.Background()

    var services []service.Service

    // kq: message queue
    services = append(services, KqMqs(c, ctx, svcContext)...)
    // asynq: delay queue, scheduled tasks
    services = append(services, AsynqMqs(c, ctx, svcContext)...)
    // other mq ....

    return services
}

go-zero-looklook/app/order/cmd/mq/internal/listen/asynqMqs.go defines the asynq services

 // asynq
// scheduled tasks, delayed tasks
func AsynqMqs(c config.Config, ctx context.Context, svcContext *svc.ServiceContext) []service.Service {
   return []service.Service{
      // listen to the delay queue
      deferMq.NewAsynqTask(ctx, svcContext),

      // listen to scheduled tasks
   }
}

go-zero-looklook/app/order/cmd/mq/internal/listen/kqMqs.go defines the kq consumers (go-queue's kafka queue)

 // kq
// message queue
func KqMqs(c config.Config, ctx context.Context, svcContext *svc.ServiceContext) []service.Service {
    return []service.Service{
        // listen for payment flow status changes
        kq.MustNewQueue(c.PaymentUpdateStatusConf, kqMq.NewPaymentUpdateStatusMq(ctx, svcContext)),
        // .....
    }
}
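For reference, here is a sketch of what this mq service's config.Config might look like, pieced together from the snippets above: service.ServiceConf supplies the SetUp method mentioned earlier, PaymentUpdateStatusConf is the kq conf referenced in kqMqs.go, and the Redis fields are read in asynqTask.go below. The real file in the project may differ slightly.

 package config

import (
    "github.com/zeromicro/go-queue/kq"
    "github.com/zeromicro/go-zero/core/service"
    "github.com/zeromicro/go-zero/core/stores/redis"
)

type Config struct {
    service.ServiceConf // provides SetUp(): log, prometheus, trace, metricsUrl

    // kafka consumer conf for the payment status topic (used in kqMqs.go)
    PaymentUpdateStatusConf kq.KqConf

    // redis used by asynq (Host/Pass are read in asynqTask.go)
    Redis redis.RedisConf
}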

2.3 Actual business

To write the actual business code, we work under go-zero-looklook/app/order/cmd/mq/internal/mqs. For ease of maintenance, it is also classified here:

  • deferMq : delay queue
  • kq: message queue

2.3.1 Delay Queue

 // listen for close-order tasks
type AsynqTask struct {
   ctx    context.Context
   svcCtx *svc.ServiceContext
}

func NewAsynqTask(ctx context.Context, svcCtx *svc.ServiceContext) *AsynqTask {
   return &AsynqTask{
      ctx:    ctx,
      svcCtx: svcCtx,
   }
}

func (l *AsynqTask) Start() {
   fmt.Println("AsynqTask start ")

   srv := asynq.NewServer(
      asynq.RedisClientOpt{Addr: l.svcCtx.Config.Redis.Host, Password: l.svcCtx.Config.Redis.Pass},
      asynq.Config{
         Concurrency: 10,
         Queues: map[string]int{
            "critical": 6,
            "default":  3,
            "low":      1,
         },
      },
   )

   mux := asynq.NewServeMux()

   // close homestay order task
   mux.HandleFunc(asynqmq.TypeHomestayOrderCloseDelivery, l.closeHomestayOrderStateMqHandler)

   if err := srv.Run(mux); err != nil {
      log.Fatalf("could not run server: %v", err)
   }
}

func (l *AsynqTask) Stop() {
   fmt.Println("AsynqTask stop")
}

Because asynq must be started first and the task routes defined on it, we do unified route management in asynqTask.go, and then define one file per business under the deferMq folder (such as "delayed order closing: closeHomestayOrderState.go"). This way each business has its own file, just like the logic files of go-zero's api and rpc services, which makes maintenance very convenient.

closeHomestayOrderState.go, the close-order logic:

 package deferMq

import (
    "context"
    "encoding/json"
    "looklook/app/order/cmd/rpc/order"
    "looklook/app/order/model"
    "looklook/common/asynqmq"
    "looklook/common/xerr"

    "github.com/hibiken/asynq"
    "github.com/pkg/errors"
)

func (l *AsynqTask) closeHomestayOrderStateMqHandler(ctx context.Context, t *asynq.Task) error {
    var p asynqmq.HomestayOrderCloseTaskPayload
    if err := json.Unmarshal(t.Payload(), &p); err != nil {
        return errors.Wrapf(xerr.NewErrMsg("解析asynq task payload err"), "closeHomestayOrderStateMqHandler payload err:%v, payLoad:%+v", err, t.Payload())
    }

    resp, err := l.svcCtx.OrderRpc.HomestayOrderDetail(ctx, &order.HomestayOrderDetailReq{
        Sn: p.Sn,
    })
    if err != nil || resp.HomestayOrder == nil {
        return errors.Wrapf(xerr.NewErrMsg("获取订单失败"), "closeHomestayOrderStateMqHandler 获取订单失败 or 订单不存在 err:%v, sn:%s ,HomestayOrder : %+v", err, p.Sn, resp.HomestayOrder)
    }

    if resp.HomestayOrder.TradeState == model.HomestayOrderTradeStateWaitPay {
        _, err := l.svcCtx.OrderRpc.UpdateHomestayOrderTradeState(ctx, &order.UpdateHomestayOrderTradeStateReq{
            Sn:         p.Sn,
            TradeState: model.HomestayOrderTradeStateCancel,
        })
        if err != nil {
            return errors.Wrapf(xerr.NewErrMsg("关闭订单失败"), "closeHomestayOrderStateMqHandler 关闭订单失败  err:%v, sn:%s ", err, p.Sn)
        }
    }

    return nil
}
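The producer side of this delayed task is not shown in the excerpt above. As a hedged sketch, this is roughly how the order service could enqueue the close-order task when an order is created; the constant's value and the NewTask/Enqueue wiring are assumptions, and only the type name and payload struct come from the handler above:

 package main

import (
    "encoding/json"
    "log"
    "time"

    "github.com/hibiken/asynq"
)

// mirrors asynqmq.HomestayOrderCloseTaskPayload used by the handler above
type HomestayOrderCloseTaskPayload struct {
    Sn string
}

// assumed value; the real constant lives in looklook/common/asynqmq
const TypeHomestayOrderCloseDelivery = "schedule:homestayOrder:close"

func main() {
    client := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
    defer client.Close()

    payload, err := json.Marshal(HomestayOrderCloseTaskPayload{Sn: "order-sn-xxx"})
    if err != nil {
        log.Fatal(err)
    }

    // deliver the task 30 minutes later; the AsynqTask server above routes it
    // to closeHomestayOrderStateMqHandler
    info, err := client.Enqueue(asynq.NewTask(TypeHomestayOrderCloseDelivery, payload),
        asynq.ProcessIn(30*time.Minute))
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("enqueued task: id=%s queue=%s", info.ID, info.Queue)
}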

2.3.2 kq message queue

Look under the go-zero-looklook/app/order/cmd/mq/internal/mqs/kq folder. kq is different from asynq: it is already managed as a go-zero Service and implements the Starter and Stopper interfaces, so we simply define the go-queue consumer in go-zero-looklook/app/order/cmd/mq/internal/listen/kqMqs.go, hand it to the serviceGroup, and leave it to main to start. Our business code only needs to implement go-queue's Consumer to write the business directly.

1) go-zero-looklook/app/order/cmd/mq/internal/listen/kqMqs.go

 func KqMqs(c config.Config, ctx context.Context, svcContext *svc.ServiceContext) []service.Service {
    return []service.Service{
        // listen for payment flow status changes
        kq.MustNewQueue(c.PaymentUpdateStatusConf, kqMq.NewPaymentUpdateStatusMq(ctx, svcContext)),
        // .....
    }
}

As you can see, kq.MustNewQueue itself returns a queue.MessageQueue, and queue.MessageQueue implements Start and Stop.
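Incidentally, the producer that feeds this topic (in go-zero-looklook, presumably the payment service) pushes messages with go-queue's Pusher. A minimal sketch, assuming a go-queue version whose Push takes just the message string (matching the Consume(_, val string) signature used in this project); the brokers, topic, and message fields are illustrative:

 package main

import (
    "encoding/json"
    "log"

    "github.com/zeromicro/go-queue/kq"
)

// illustrative stand-in for kqueue.ThirdPaymentUpdatePayStatusNotifyMessage
type ThirdPaymentUpdatePayStatusNotifyMessage struct {
    OrderSn   string
    PayStatus int64
}

func main() {
    pusher := kq.NewPusher([]string{"127.0.0.1:9092"}, "payment-update-paystatus-topic")

    msg, err := json.Marshal(ThirdPaymentUpdatePayStatusNotifyMessage{OrderSn: "order-sn-xxx", PayStatus: 1})
    if err != nil {
        log.Fatal(err)
    }
    if err := pusher.Push(string(msg)); err != nil {
        log.Fatal(err)
    }
}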

2) The business code

go-zero-looklook/app/order/cmd/mq/internal/mqs/kq/paymentUpdateStatus.go

 func (l *PaymentUpdateStatusMq) Consume(_, val string) error {
    fmt.Printf(" PaymentUpdateStatusMq Consume val : %s \n", val)
    // parse the message
    var message kqueue.ThirdPaymentUpdatePayStatusNotifyMessage
    if err := json.Unmarshal([]byte(val), &message); err != nil {
        logx.WithContext(l.ctx).Errorf("PaymentUpdateStatusMq->Consume Unmarshal err : %v , val : %s", err, val)
        return err
    }

    // execute the business logic..
    if err := l.execService(message); err != nil {
        logx.WithContext(l.ctx).Errorf("PaymentUpdateStatusMq->execService  err : %v , val : %s , message:%+v", err, val, message)
        return err
    }

    return nil
}

We only need to implement the Consume interface in paymentUpdateStatus.go to receive the kafka messages that kq delivers; we simply handle our business inside Consume.
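For completeness, the struct and constructor that this Consume method hangs off follow the same pattern as AsynqTask above. A sketch consistent with NewPaymentUpdateStatusMq(ctx, svcContext) in kqMqs.go; the import path follows the project layout, and execService (called in Consume) would be defined alongside:

 package kq

import (
    "context"

    "looklook/app/order/cmd/mq/internal/svc"
)

type PaymentUpdateStatusMq struct {
    ctx    context.Context
    svcCtx *svc.ServiceContext
}

func NewPaymentUpdateStatusMq(ctx context.Context, svcCtx *svc.ServiceContext) *PaymentUpdateStatusMq {
    return &PaymentUpdateStatusMq{
        ctx:    ctx,
        svcCtx: svcCtx,
    }
}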

3. Scheduled tasks

go-zero-looklook does not currently use scheduled tasks, but I will cover them here as well. There are several options:

  • If it is simple, you can use cron directly (on bare metal or in k8s).
  • If it is a little more complicated, you can use the https://github.com/robfig/cron package and define the schedule in code (see the sketch after this list).
  • Use xxl-job or gocron to access a distributed scheduled task system.
  • asynq's scheduler.
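For the second option, a minimal sketch with github.com/robfig/cron/v3, defining the schedule in code:

 package main

import (
    "fmt"

    "github.com/robfig/cron/v3"
)

func main() {
    c := cron.New()
    // standard 5-field spec: run once every minute
    c.AddFunc("*/1 * * * *", func() {
        fmt.Println("do the scheduled work...")
    })
    c.Start()
    select {} // block so the cron goroutines keep running
}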

Since the project already uses asynq, I will demonstrate asynq's scheduler here.

It is divided into a client and a server: the client defines the schedule, while the server is triggered by the client's messages to execute the business we write. The actual business is written in the server; the client defines when it is scheduled.

asynqtest/docker-compose.yml

 version: '3'

services:
  # asynqmon: webui for asynq delay queues and scheduled queues
  asynqmon:
    image: hibiken/asynqmon:latest
    container_name: asynqmon_asynq
    ports:
      - 8980:8080
    command:
      - '--redis-addr=redis:6379'
      - '--redis-password=G62m50oigInC30sf'
    restart: always
    networks:
      - asynqtest_net
    depends_on:
      - redis
  
  # redis container
  redis:
    image: redis:6.2.5
    container_name: redis_asynq
    ports:
      - 63779:6379
    environment:
      # timezone: Shanghai
      TZ: Asia/Shanghai
    volumes:
      # data files
      - ./data/redis/data:/data:rw
    command: "redis-server --requirepass G62m50oigInC30sf  --appendonly yes"
    privileged: true
    restart: always
    networks:
      - asynqtest_net

networks:
  asynqtest_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.22.0.0/16

asynqtest/shedule/client/client.go

 package main

import (
    "asynqtest/tpl"
    "encoding/json"
    "log"

    "github.com/hibiken/asynq"
)

const redisAddr = "127.0.0.1:63779"
const redisPwd = "G62m50oigInC30sf"

func main() {
    // periodic task
    scheduler := asynq.NewScheduler(
        asynq.RedisClientOpt{
            Addr:     redisAddr,
            Password: redisPwd,
        }, nil)

    payload, err := json.Marshal(tpl.EmailPayload{Email: "546630576@qq.com", Content: "send an email"})
    if err != nil {
        log.Fatal(err)
    }

    task := asynq.NewTask(tpl.EMAIL_TPL, payload)
    // run once every minute
    entryID, err := scheduler.Register("*/1 * * * *", task)

    if err != nil {
        log.Fatal(err)
    }
    log.Printf("registered an entry: %q\n", entryID)

    if err := scheduler.Run(); err != nil {
        log.Fatal(err)
    }
}

asynqtest/shedule/server/server.go

 package main

import (
    "context"
    "encoding/json"
    "fmt"
    "log"

    "asynqtest/tpl"

    "github.com/hibiken/asynq"
)

func main() {
    srv := asynq.NewServer(
        asynq.RedisClientOpt{Addr: "127.0.0.1:63779", Password: "G62m50oigInC30sf"},
        asynq.Config{
            Concurrency: 10,
            Queues: map[string]int{
                "critical": 6,
                "default":  3,
                "low":      1,
            },
        },
    )

    mux := asynq.NewServeMux()

    // email task handler
    mux.HandleFunc(tpl.EMAIL_TPL, emailMqHandler)

    if err := srv.Run(mux); err != nil {
        log.Fatalf("could not run server: %v", err)
    }
}

func emailMqHandler(ctx context.Context, t *asynq.Task) error {
    var p tpl.EmailPayload
    if err := json.Unmarshal(t.Payload(), &p); err != nil {
        return fmt.Errorf("emailMqHandler err:%+v", err)
    }

    fmt.Printf("p : %+v \n", p)

    return nil
}

asynqtest/tpl/tpl.go

 package tpl

const EMAIL_TPL = "schedule:email"

type EmailPayload struct {
    Email   string
    Content string
}

Start server.go and client.go.

Open http://127.0.0.1:8980/schedulers in the browser to see all the tasks defined by the client.

Open http://127.0.0.1:8980/ in the browser to see our server's consumption.

(Screenshot: console output showing the consumed tasks)

Finally, a word on integrating asynq's scheduler into the project: you can start a separate service as the scheduling client, which defines all of the system's scheduled task management, and define the server together with each business's own asynq mq.
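A sketch of that idea: wrap asynq.Scheduler so it satisfies go-zero's service.Service, and the scheduling client can then be managed by a serviceGroup just like the mqs earlier (the names here are illustrative, not from the project; the redis address reuses the docker-compose example above):

 package main

import (
    "log"

    "github.com/hibiken/asynq"
    "github.com/zeromicro/go-zero/core/service"
)

type SchedulerService struct {
    scheduler *asynq.Scheduler
}

func NewSchedulerService(redisAddr, redisPwd string) *SchedulerService {
    scheduler := asynq.NewScheduler(
        asynq.RedisClientOpt{Addr: redisAddr, Password: redisPwd}, nil)

    // register periodic tasks here, e.g.
    // scheduler.Register("*/1 * * * *", asynq.NewTask(tpl.EMAIL_TPL, payload))

    return &SchedulerService{scheduler: scheduler}
}

// Start blocks, like the other services managed by serviceGroup.
func (s *SchedulerService) Start() {
    if err := s.scheduler.Run(); err != nil {
        log.Fatalf("scheduler: %v", err)
    }
}

func (s *SchedulerService) Stop() {
    s.scheduler.Shutdown()
}

func main() {
    group := service.NewServiceGroup()
    defer group.Stop()

    group.Add(NewSchedulerService("127.0.0.1:63779", "G62m50oigInC30sf"))
    group.Start()
}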

4. End

In this section we learned how to use message queues and delay queues. Kafka can be inspected through its management tools. As for asynq's webui, we have already started asynqmon in go-zero-looklook/docker-compose-env.yml, and you can view it directly at http://127.0.0.1:8980.

Project address

https://github.com/zeromicro/go-zero

You are welcome to use go-zero, and star it to support us!

WeChat exchange group

Follow the official account "Microservice Practice" and click on "exchange group" to get the QR code of the community group.

