
I recently came across the microservice framework go-zero. After browsing the framework's code, I found its structure clear and its code concise, so I decided to study the source to learn from it. This article walks through core/syncx/singleflight.go.

In go-zero, SingleFlight's role is to merge concurrent requests into a single request, reducing the pressure on the underlying services.

Application scenarios

  1. Merging requests when querying a cache, to improve service performance.
    Suppose there is an IP query service: for each user request, the service returns the result directly if the IP is in the cache, and performs an IP resolution if it is not.

In this scenario, n user requests querying the same IP (8.8.8.8) turn into n Redis queries. Under high concurrency, merging those n Redis queries into a single one is a significant performance win, and SingleFlight is what implements this request merging.

  2. Preventing cache breakdown.
The cache breakdown problem refers to a high-concurrency scenario in which a large number of requests query the same key at the same moment. If that key has just expired, all of those requests hit the database, driving up the number of database connections and the load.

With SingleFlight, concurrent requests for the same key are merged: only one of them actually queries the database, and the others share its result, which greatly improves concurrency.

Usage

Let's go straight to the code:

 package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"

	"github.com/zeromicro/go-zero/core/syncx"
)

func main() {
	round := 10
	var wg sync.WaitGroup
	barrier := syncx.NewSingleFlight()
	wg.Add(round)
	for i := 0; i < round; i++ {
		go func() {
			defer wg.Done()
			// start 10 goroutines to simulate fetching from a cache
			val, err := barrier.Do("get_rand_int", func() (interface{}, error) {
				time.Sleep(time.Second)
				return rand.Int(), nil
			})
			if err != nil {
				fmt.Println(err)
			} else {
				fmt.Println(val)
			}
		}()
	}
	wg.Wait()
}

The code above simulates 10 goroutines requesting the content of one key from Redis. Usage is simple: just call the Do() method, which takes two parameters. The first is an identifier for the resource being fetched, such as the key cached in Redis; the second is an anonymous function that encapsulates the business logic. The output looks like this:

 5577006791947779410
5577006791947779410
5577006791947779410
5577006791947779410
5577006791947779410
5577006791947779410
5577006791947779410
5577006791947779410
5577006791947779410
5577006791947779410

As shown above, all 10 goroutines obtained the same result: only one goroutine actually executed rand.Int() to produce a random number, and the other goroutines shared that result.

Source code analysis

First look at the code structure:

 type (
	// SingleFlight defines the interface with two methods, Do and DoEx.
	// Their logic is the same; DoEx just returns one extra flag, so it is
	// enough to follow the logic of Do.
	SingleFlight interface {
		Do(key string, fn func() (interface{}, error)) (interface{}, error)
		DoEx(key string, fn func() (interface{}, error)) (interface{}, bool, error)
	}
	// call represents a single in-flight request
	call struct {
		wg  sync.WaitGroup // lets one call execute while the other calls block
		val interface{}    // the result returned by the call
		err error          // the error, if any, produced by the call
	}
	// flightGroup is the controlling structure implementing SingleFlight
	flightGroup struct {
		calls map[string]*call // one call per distinct key
		lock  sync.Mutex       // protects the calls map
	}
)

Instances are created with NewSingleFlight (used in the example above), which initializes the calls map. Now let's look at what the core Do method does:

 func (g *flightGroup) Do(key string, fn func() (interface{}, error)) (interface{}, error) {
  c, done := g.createCall(key)
  if done {
    return c.val, c.err
  }

  g.makeCall(c, key, fn)
  return c.val, c.err
}

The code is straightforward. g.createCall(key) registers a call for the key (all callers are effectively doing one thing). If another goroutine has already initiated a call for the same key, the current one blocks until that call finishes, done is true, and the shared result is returned directly. If done is false, the current goroutine is the first one for this key, so it executes g.makeCall(c, key, fn) to actually perform the call, while later goroutines block inside g.createCall(key).

As can be seen from the code above, there are two key steps:

  1. Determine whether the current goroutine is the first one to request this key (using the map)
  2. Block all other goroutines until the result is ready (using sync.WaitGroup)

Let's see how g.createCall(key) is implemented:

 func (g *flightGroup) createCall(key string) (c *call, done bool) {
  g.lock.Lock()
  if c, ok := g.calls[key]; ok {
    g.lock.Unlock()
    c.wg.Wait()
    return c, true
  }

  c = new(call)
  c.wg.Add(1)
  g.calls[key] = c
  g.lock.Unlock()

  return c, false
}

First, step one: determining whether the current goroutine is the first request (using the map).

 g.lock.Lock()
if c, ok := g.calls[key]; ok {
  g.lock.Unlock()
  c.wg.Wait()
  return c, true
}

Here the code checks whether the key already exists in the map. If it does, another goroutine is already executing the request, so the current one only needs to wait; this is implemented with sync.WaitGroup's Wait() method, which is quite a clever use of it. Note that maps in Go are not concurrency-safe, so access must be protected by the lock.

Now the second step: blocking all other goroutines (using sync.WaitGroup).

 c = new(call)
c.wg.Add(1)
g.calls[key] = c

Because this is the first goroutine to initiate the call, it creates a new call, then invokes wg.Add(1), which pairs with the wg.Wait() above to block the remaining goroutines, and finally puts the new call into the map. Note that at this point only initialization has happened; the request itself has not been executed yet. The real processing logic lives in g.makeCall(c, key, fn).

 func (g *flightGroup) makeCall(c *call, key string, fn func() (interface{}, error)) {
  defer func() {
    g.lock.Lock()
    delete(g.calls, key)
    g.lock.Unlock()
    c.wg.Done()
  }()

  c.val, c.err = fn()
}

What this method does is simple: it executes the anonymous function fn() that was passed in (that is, the real work of the request), and then finishes up (via defer) in two steps:

  1. Delete the key from the map, so that the next request will fetch a fresh value.
  2. Call wg.Done() so that all previously blocked goroutines receive the result and return.
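A consequence of step 1 is that merging only applies to requests that overlap in time: once the key is deleted, the next Do computes a fresh value. A small sketch illustrating this (again my own compact reimplementation of the map-plus-WaitGroup idea, not go-zero's code, with errors omitted for brevity):

```go
package main

import (
	"fmt"
	"sync"
)

// call pairs a per-key WaitGroup with a result slot.
type call struct {
	wg  sync.WaitGroup
	val interface{}
}

type group struct {
	mu    sync.Mutex
	calls map[string]*call
}

func (g *group) Do(key string, fn func() interface{}) interface{} {
	g.mu.Lock()
	if c, ok := g.calls[key]; ok {
		g.mu.Unlock()
		c.wg.Wait() // overlapping request: share the in-flight result
		return c.val
	}
	c := new(call)
	c.wg.Add(1)
	g.calls[key] = c
	g.mu.Unlock()

	c.val = fn()

	g.mu.Lock()
	delete(g.calls, key) // after this, the key is "fresh" again
	g.mu.Unlock()
	c.wg.Done()
	return c.val
}

// demo performs two sequential (non-overlapping) calls for the same key.
func demo() (int, int) {
	g := &group{calls: make(map[string]*call)}
	n := 0
	fn := func() interface{} { n++; return n }
	a := g.Do("k", fn).(int) // first call executes fn
	b := g.Do("k", fn).(int) // key was already deleted, so fn executes again
	return a, b
}

func main() {
	a, b := demo()
	fmt.Println(a, b) // sequential calls do not share: prints 1 2
}
```

This also means SingleFlight is not a cache: it deduplicates only concurrent work, and the shared result is discarded the moment the call completes.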

That covers the core code of SingleFlight. The code is not long, but the idea is elegant and worth borrowing in day-to-day work.

Summary

  • Maps in Go are not concurrency-safe; remember to lock.
  • sync.WaitGroup is a clever fit for scenarios where a group of goroutines must block until one of them finishes.
  • The concrete business logic is encapsulated in the anonymous function fn, while the shared control logic is handled uniformly by the caller of fn.
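The last point is the classic callback pattern: the caller owns the cross-cutting logic (locking and result-sharing in SingleFlight's case), and the business logic travels in as a closure. A generic, hypothetical illustration of the same shape (withTiming is my own name, not a go-zero API):

```go
package main

import (
	"fmt"
	"time"
)

// withTiming wraps an arbitrary piece of business logic with shared
// cross-cutting logic, here simply measuring how long fn takes.
func withTiming(name string, fn func() (interface{}, error)) (interface{}, error) {
	start := time.Now()
	defer func() { fmt.Printf("%s took %v\n", name, time.Since(start)) }()
	return fn() // the business logic is opaque to the wrapper
}

func main() {
	val, err := withTiming("query", func() (interface{}, error) {
		return "result", nil
	})
	fmt.Println(val, err)
}
```

The wrapper never needs to know what fn does; SingleFlight applies exactly the same separation, with Do as the wrapper.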

Project address

https://github.com/zeromicro/go-zero

If you find go-zero useful, please give it a star to support us!

WeChat exchange group

Follow the "Microservice Practice" WeChat official account and click "exchange group" to get the community group QR code.

If you have articles about your experience with go-zero, or source-code reading notes, feel free to submit them through the official account!

