
Introduction

Following the previous Go Daily Library article introducing ants, in this article we take a look at the ants source code.

Pool

From the previous article, we know that there are two ways to create an ants pool:

  • p, _ := ants.NewPool(cap): an object created this way calls p.Submit(task) to submit tasks, where the task is a function with no parameters and no return value;
  • p, _ := ants.NewPoolWithFunc(cap, func(interface{})): an object created this way specifies a pool function at creation time, and p.Invoke(arg) is used to call it, where arg is the argument passed to the pool function func(interface{}). (A usage sketch of the first style follows this list; the second style is demonstrated later in the PoolWithFunc section.)
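
For reference, here is a minimal, self-contained sketch of the first style (the pool size and tasks are illustrative):

package main

import (
  "fmt"
  "sync"

  "github.com/panjf2000/ants"
)

func main() {
  // A pool of at most 10 goroutines; tasks are plain func() values.
  p, _ := ants.NewPool(10)
  defer p.Release()

  var wg sync.WaitGroup
  for i := 0; i < 5; i++ {
    i := i
    wg.Add(1)
    _ = p.Submit(func() {
      defer wg.Done()
      fmt.Println("task", i, "done")
    })
  }
  wg.Wait()
}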

In ants, these two pools are represented by different structures: ants.Pool and ants.PoolWithFunc. Let's first introduce Pool; the PoolWithFunc structure is similar, and after introducing Pool we will compare them briefly.

The Pool structure is defined in the file pool.go:

// src/github.com/panjf2000/ants/pool.go
type Pool struct {
  capacity int32
  running int32
  workers workerArray
  state int32
  lock sync.Locker
  cond *sync.Cond
  workerCache sync.Pool
  blockingNum int
  options *Options
}

The meaning of each field is as follows:

  • capacity : pool capacity, the maximum number of goroutines ants will create. A negative value means the capacity is unlimited;
  • running : the number of worker goroutines that have been created;
  • workers : stores a set of worker objects. workerArray is just an interface representing a worker container, described in detail later;
  • state : records the current state of the pool, i.e. whether it has been closed ( CLOSED );
  • lock : a lock. ants implements its own spin lock, used to synchronize concurrent operations;
  • cond : a condition variable, used for task waiting and wake-up;
  • workerCache : a sync.Pool object pool used to manage and create worker objects, improving performance;
  • blockingNum : the number of tasks blocked waiting;
  • options : the options, which the previous article introduced in detail.

One concept to be clear about: in ants, each task is processed by a worker object, and each worker object creates a corresponding goroutine to process tasks. ants uses goWorker to represent a worker:

// src/github.com/panjf2000/ants/worker.go
type goWorker struct {
  pool *Pool
  task chan func()
  recycleTime time.Time
}

This will be described in detail later. For now we only need to know that the Pool.workers field is the container for goWorker objects.

Creating a Pool

To create a Pool object, call the ants.NewPool(size, options) function. With some option-handling code omitted, the final code is as follows:

// src/github.com/panjf2000/ants/pool.go
func NewPool(size int, options ...Option) (*Pool, error) {
  // ...
  p := &Pool{
    capacity: int32(size),
    lock:     internal.NewSpinLock(),
    options:  opts,
  }
  p.workerCache.New = func() interface{} {
    return &goWorker{
      pool: p,
      task: make(chan func(), workerChanCap),
    }
  }
  if p.options.PreAlloc {
    if size == -1 {
      return nil, ErrInvalidPreAllocSize
    }
    p.workers = newWorkerArray(loopQueueType, size)
  } else {
    p.workers = newWorkerArray(stackType, 0)
  }

  p.cond = sync.NewCond(p.lock)

  go p.purgePeriodically()
  return p, nil
}

The code is not difficult to understand:

  • Create a Pool object, set the capacity, create a spin lock to initialize the lock field, and set options;
  • Set the New method of the workerCache sync.Pool object: when Get() is called on the sync.Pool and no cached worker object is available, this method is called to create one;
  • Depending on whether the pre-allocation option is set, a different type of worker container is created;
  • Use the p.lock lock to create a condition variable;
  • Finally, start a goroutine to periodically clean up expired workers.

The Pool.workers field is of type workerArray, which is actually an interface representing a worker container:

type workerArray interface {
  len() int
  isEmpty() bool
  insert(worker *goWorker) error
  detach() *goWorker
  retrieveExpiry(duration time.Duration) []*goWorker
  reset()
}

The meaning of each method is easy to understand from its name:

  • len() int : the number of workers;
  • isEmpty() bool : Whether the number of workers is 0;
  • insert(worker *goWorker) error : after a goroutine finishes its task, put the corresponding worker back into the workerArray;
  • detach() *goWorker : take a worker out of the workerArray;
  • retrieveExpiry(duration time.Duration) []*goWorker : Remove all expired workers;
  • reset() : Reset the container.

workerArray has two implementations in ants: workerStack and loopQueue.

workerStack

Let's first introduce workerStack , which is located in the file worker_stack.go :

// src/github.com/panjf2000/ants/worker_stack.go
type workerStack struct {
  items  []*goWorker
  expiry []*goWorker
  size   int
}

func newWorkerStack(size int) *workerStack {
  return &workerStack{
    items: make([]*goWorker, 0, size),
    size:  size,
  }
}

  • items : idle workers;
  • expiry : expired workers.

After a goroutine completes its task, the Pool puts the corresponding worker back into the workerStack; workerStack.insert() simply appends it to items:

func (wq *workerStack) insert(worker *goWorker) error {
  wq.items = append(wq.items, worker)
  return nil
}

When a new task arrives, it will call workerStack.detach() to retrieve an idle worker from the container:

func (wq *workerStack) detach() *goWorker {
  l := wq.len()
  if l == 0 {
    return nil
  }

  w := wq.items[l-1]
  wq.items[l-1] = nil // avoid memory leaks
  wq.items = wq.items[:l-1]

  return w
}

The last worker is always returned, and insert() always appends to the end, which matches the behavior of a stack, hence the name workerStack.

There is a detail here. Since a slice's underlying structure is an array, as long as there is a pointer to that array, its elements will not be released. So after the last element of the slice is taken out, the corresponding array slot is set to nil, actively releasing the reference.
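
The same pattern in a small standalone snippet (illustrative, not ants code): pop the last element and clear the vacated slot so the backing array no longer keeps the object alive:

package main

import "fmt"

type item struct{ id int }

// pop removes and returns the last element, clearing the slot it occupied.
func pop(items []*item) (*item, []*item) {
  l := len(items)
  if l == 0 {
    return nil, items
  }
  it := items[l-1]
  items[l-1] = nil // actively release the reference held by the array
  return it, items[:l-1]
}

func main() {
  stack := []*item{{1}, {2}, {3}}
  top, rest := pop(stack)
  fmt.Println(top.id, len(rest)) // 3 2
}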

As mentioned above, when the Pool object is created a goroutine is started to periodically check for and clean up expired workers. The list of expired workers is obtained by calling workerArray.retrieveExpiry(). The workerStack implementation is as follows:

func (wq *workerStack) retrieveExpiry(duration time.Duration) []*goWorker {
  n := wq.len()
  if n == 0 {
    return nil
  }

  expiryTime := time.Now().Add(-duration)
  index := wq.binarySearch(0, n-1, expiryTime)

  wq.expiry = wq.expiry[:0]
  if index != -1 {
    wq.expiry = append(wq.expiry, wq.items[:index+1]...)
    m := copy(wq.items, wq.items[index+1:])
    for i := m; i < n; i++ {
      wq.items[i] = nil
    }
    wq.items = wq.items[:m]
  }
  return wq.expiry
}

The implementation uses binary search to find the most recently expired worker. Since the expiration time is computed from the moment a goroutine becomes idle after finishing a task, and workerStack.insert() fixes the enqueue order, the expiration times go from earliest to latest, so binary search can be used:

func (wq *workerStack) binarySearch(l, r int, expiryTime time.Time) int {
  var mid int
  for l <= r {
    mid = (l + r) / 2
    if expiryTime.Before(wq.items[mid].recycleTime) {
      r = mid - 1
    } else {
      l = mid + 1
    }
  }
  return r
}

The binary search finds the most recently expired worker, i.e. the one just before the first worker that has not yet expired. It and all workers before it have expired.

If an index is found, all workers from the beginning of items up to and including index are copied into the expiry field. Then the workers after index are copied to the head of the slice with the copy function. copy returns the number of elements actually copied, that is, the number of unexpired workers m. All elements of items from position m onward are then set to nil to avoid memory leaks, since they have been copied to the head. Finally the items slice is truncated to length m, and the slice of expired workers is returned.
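
A tiny standalone example (illustrative values, not ants code) makes the index arithmetic concrete. Suppose binarySearch returned index = 1, so items[0] and items[1] have expired:

package main

import "fmt"

func main() {
  items := []string{"w0", "w1", "w2", "w3", "w4"} // idle since earliest ... latest
  index := 1                                      // w0 and w1 have expired

  expiry := append([]string(nil), items[:index+1]...) // copy out the expired workers
  m := copy(items, items[index+1:])                   // move unexpired workers to the front; m == 3
  for i := m; i < len(items); i++ {
    items[i] = "" // nil in ants: drop the stale references left behind
  }
  items = items[:m]

  fmt.Println(expiry) // [w0 w1]
  fmt.Println(items)  // [w2 w3 w4]
}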

loopQueue

loopQueue is based on a circular queue; the structure is defined in the file worker_loop_queue.go:

type loopQueue struct {
  items  []*goWorker
  expiry []*goWorker
  head   int
  tail   int
  size   int
  isFull bool
}

func newWorkerLoopQueue(size int) *loopQueue {
  return &loopQueue{
    items: make([]*goWorker, size),
    size:  size,
  }
}

Since it is a circular queue, the slice is created with length size up front. The circular queue has a head pointer head, which points to the first slot that holds an element, and a tail pointer tail, which points to the next slot where an element can be stored. The initial state is as follows:

An element is added at tail, after which the tail pointer moves forward. An element is taken out at head, after which the head pointer also moves forward. After running for a while, the queue looks like this:

Eventually head or tail reaches the end of the slice and needs to wrap around to the beginning, so this situation can occur:

When the tail pointer catches up with the head pointer, the queue is full:

When the head pointer catches up with the tail pointer, the queue is empty again:
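
A small standalone simulation (illustrative, not ants code) shows why head == tail alone is ambiguous on a ring of size 3, which is exactly what the isFull field discussed below resolves:

package main

import "fmt"

func main() {
  const size = 3
  head, tail := 0, 0

  push := func() { tail = (tail + 1) % size } // store, then advance tail
  pop := func() { head = (head + 1) % size }  // take, then advance head

  for i := 0; i < size; i++ {
    push()
  }
  fmt.Println(head, tail) // 0 0 -> head == tail, but the queue is FULL

  for i := 0; i < size; i++ {
    pop()
  }
  fmt.Println(head, tail) // 0 0 -> head == tail, and the queue is EMPTY
}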

With these states in mind, loopQueue's operations are easy to follow.

Since head and tail being equal can mean either that the queue is empty or that it is full, loopQueue adds an isFull field to distinguish the two cases. After a goroutine completes its task, the corresponding worker object is put back into the loopQueue via the insert() method:

func (wq *loopQueue) insert(worker *goWorker) error {
  if wq.size == 0 {
    return errQueueIsReleased
  }

  if wq.isFull {
    return errQueueIsFull
  }
  wq.items[wq.tail] = worker
  wq.tail++

  if wq.tail == wq.size {
    wq.tail = 0
  }
  if wq.tail == wq.head {
    wq.isFull = true
  }

  return nil
}

This method performs the enqueue operation of the circular queue. Note that if tail == head after insertion, the queue is full and the isFull field is set.

When a new task arrives, the loopQueue.detach() method is called to obtain an idle worker:

func (wq *loopQueue) detach() *goWorker {
  if wq.isEmpty() {
    return nil
  }

  w := wq.items[wq.head]
  wq.items[wq.head] = nil
  wq.head++
  if wq.head == wq.size {
    wq.head = 0
  }
  wq.isFull = false

  return w
}

This method performs the dequeue operation of the circular queue. Note that after each dequeue the queue is definitely not full, so isFull is reset to false.

As with the workerStack structure, worker objects that entered earlier expire earlier and those that entered later expire later. The method for obtaining expired workers is similar to workerStack's, except that binary search is not used, so I won't go into details here.

Back to the creation of Pool

Having introduced the two workerArray implementations, let's go back to the Pool creation function and look at how the workers field is set:

if p.options.PreAlloc {
  if size == -1 {
    return nil, ErrInvalidPreAllocSize
  }
  p.workers = newWorkerArray(loopQueueType, size)
} else {
  p.workers = newWorkerArray(stackType, 0)
}

newWorkerArray() is defined in the file worker_array.go:

type arrayType int

const (
  stackType arrayType = 1 << iota
  loopQueueType
)

func newWorkerArray(aType arrayType, size int) workerArray {
  switch aType {
  case stackType:
    return newWorkerStack(size)
  case loopQueueType:
    return newWorkerLoopQueue(size)
  default:
    return newWorkerStack(size)
  }
}

That is, if the pre-allocation option is set, the loopQueue structure is used; otherwise the stack structure is used.

worker structure

After introducing Pool , let's take a look at the structure of the worker. In ants , the worker is represented by the structure goWorker , which is defined in the file worker.go . Its structure is very simple:

// src/github.com/panjf2000/ants/worker.go
type goWorker struct {
  pool *Pool
  task chan func()
  recycleTime time.Time
}

The meaning of the specific fields is obvious:

  • pool : holds a reference to the goroutine pool;
  • task : the task channel; functions of type func() are sent to the goWorker over this channel as tasks;
  • recycleTime : records when the goWorker is put back into the pool (that is, when it becomes idle). It is set when the worker is returned to the pool after completing a task.

After a goWorker is created, its run() method is called to start processing tasks. The main flow of run() is very simple:

func (w *goWorker) run() {
  go func() {
    for f := range w.task {
      if f == nil {
        return
      }
      f()
      if ok := w.pool.revertWorker(w); !ok {
        return
      }
    }
  }()
}

This method starts a new goroutine that continuously reads from the task channel and executes each task. After a task is executed, the pool's revertWorker() method is called to put the goWorker object back into the pool so it can be taken out again for the next task. The revertWorker() method is analyzed in detail later.

Note that the for f := range w.task loop runs until the task channel is closed or a nil task is received. So this goroutine keeps running, which is the key to ants' high performance: each goWorker starts its goroutine only once and reuses it afterwards, and the worker is put back into the pool after each task it executes.

One detail: if putting the worker back fails, run() returns, ending the goroutine and preventing a goroutine leak.

The fact that run() returns when f == nil is also worth noting; we will come back to it later when discussing how the pool is closed.

Let's take a look at the exception handling in run():

defer func() {
  w.pool.workerCache.Put(w)
  if p := recover(); p != nil {
    if ph := w.pool.options.PanicHandler; ph != nil {
      ph(p)
    } else {
      w.pool.options.Logger.Printf("worker exits from a panic: %v\n", p)
      var buf [4096]byte
      n := runtime.Stack(buf[:], false)
      w.pool.options.Logger.Printf("worker exits from panic: %s\n", string(buf[:n]))
    }
  }
  w.pool.cond.Signal()
}()

To put it simply, in the deferred function, recover() is used to catch any panic thrown during task execution. At that point the task has failed and the goroutine is ending, but the goWorker object can still be reused, so the deferred function first calls w.pool.workerCache.Put(w) to put the goWorker object back into the sync.Pool.

The next step is to handle the panic. If a panic handler is specified in the options, it is called directly. Otherwise ants uses the Logger set in the options to record some information, such as the panic value and the stack trace.

Finally, w.pool.cond.Signal() is called to indicate that there is now room for another goWorker: the panic has reduced the number of running goWorkers by one, and the pool may have other tasks waiting to be processed.
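
For example, a pool can be given its own panic handler when it is created (a minimal sketch using the WithPanicHandler option mentioned in the previous article; the handler body and timing are illustrative):

package main

import (
  "log"
  "time"

  "github.com/panjf2000/ants"
)

func main() {
  // If a task panics, the handler below is called instead of the default logging.
  p, _ := ants.NewPool(10, ants.WithPanicHandler(func(v interface{}) {
    log.Printf("task panicked: %v", v)
  }))
  defer p.Release()

  _ = p.Submit(func() {
    panic("boom") // recovered in goWorker.run()'s deferred function
  })
  time.Sleep(100 * time.Millisecond) // give the worker time to run
}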

Submit task

Next, the entire process can be chained up by submitting tasks. From the previous article, we know that you can call the Submit() method of the pool object to submit a task:

func (p *Pool) Submit(task func()) error {
  if p.IsClosed() {
    return ErrPoolClosed
  }
  var w *goWorker
  if w = p.retrieveWorker(); w == nil {
    return ErrPoolOverload
  }
  w.task <- task
  return nil
}

First it checks whether the pool is closed, then calls the retrieveWorker() method to obtain an idle worker, and finally sends the task into the worker's task channel. The following is the implementation of retrieveWorker():

func (p *Pool) retrieveWorker() (w *goWorker) {
  p.lock.Lock()

  w = p.workers.detach()
  if w != nil {
    p.lock.Unlock()
  } else if capacity := p.Cap(); capacity == -1 || capacity > p.Running() {
    p.lock.Unlock()
    spawnWorker()
  } else {
    if p.options.Nonblocking {
      p.lock.Unlock()
      return
    }
  Reentry:
    if p.options.MaxBlockingTasks != 0 && p.blockingNum >= p.options.MaxBlockingTasks {
      p.lock.Unlock()
      return
    }
    p.blockingNum++
    p.cond.Wait()
    p.blockingNum--
    var nw int
    if nw = p.Running(); nw == 0 {
      p.lock.Unlock()
      if !p.IsClosed() {
        spawnWorker()
      }
      return
    }
    if w = p.workers.detach(); w == nil {
      if nw < capacity {
        p.lock.Unlock()
        spawnWorker()
        return
      }
      goto Reentry
    }

    p.lock.Unlock()
  }
  return
}

This method is a bit more involved, so let's go through it piece by piece. First p.workers.detach() is called to obtain a goWorker object. p.workers is a loopQueue or workerStack object; both implement the detach() method introduced above.

If a goWorker object is returned, indicating that there is an idle goroutine, return directly.

Otherwise, if the pool capacity has not been used up (that is, the capacity is greater than the number of running goWorkers), spawnWorker() is called to create a new goWorker and run its run() method:

spawnWorker := func() {
  w = p.workerCache.Get().(*goWorker)
  w.run()
}

Otherwise, the pool capacity has been used up. If the non-blocking option is set, it returns directly. Otherwise, if an upper limit on the blocking queue length is set and the number of currently blocked tasks has reached that limit, it also returns directly. Otherwise blockingNum is incremented and p.cond.Wait() is called to wait.

Then, after goWorker.run() completes a task, it calls the pool's revertWorker() method to put the goWorker back:

func (p *Pool) revertWorker(worker *goWorker) bool {
  if capacity := p.Cap(); (capacity > 0 && p.Running() > capacity) || p.IsClosed() {
    return false
  }
  worker.recycleTime = time.Now()
  p.lock.Lock()

  if p.IsClosed() {
    p.lock.Unlock()
    return false
  }

  err := p.workers.insert(worker)
  if err != nil {
    p.lock.Unlock()
    return false
  }

  p.cond.Signal()
  p.lock.Unlock()
  return true
}

Here the goWorker's recycleTime field is set, which is later used to decide whether it has expired. The goWorker is then put back into the pool; the workers.insert() method was analyzed above.

Then p.cond.Signal() is called to wake up a goroutine waiting in retrieveWorker(). The retrieveWorker() method continues executing and decrements the number of blocked waiters. It then checks the current number of goWorkers (that is, the number of goroutines). If the number is 0, it is very likely that the pool has just been shut down with Release(), so it checks whether the pool is in the closed state and, if so, returns directly. Otherwise spawnWorker() is called to create a new goWorker and run its run() method.

If the current number of goWorkers is not 0, p.workers.detach() is called to take out an idle goWorker and return it. This may fail, because multiple goroutines may be waiting at the same time and only some of them will get a goWorker when woken up. If it fails and the capacity has not been used up, a new goWorker is created directly; otherwise the blocking-wait logic is executed again.

There is a lot of locking and unlocking logic here, and mixed with the condition variable it can be hard to follow. The key point is that p.cond.Wait() internally suspends the current goroutine and releases the lock it holds, which calls p.lock.Unlock(). That is why revertWorker() can successfully acquire the lock with p.lock.Lock(). Then p.cond.Signal() or p.cond.Broadcast() wakes up goroutines blocked in p.cond.Wait(), but the goroutine that called Signal()/Broadcast() still needs to call the unlock method itself. When a goroutine blocked in p.cond.Wait() is awakened, the lock is re-acquired internally (that is, p.lock.Lock() is called again), so the code after p.cond.Wait() still executes with the lock held.
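
The Wait/Signal contract itself can be seen in a small self-contained example that uses only the standard library (unrelated to ants, but it mirrors the revertWorker pattern of signaling before unlocking):

package main

import (
  "fmt"
  "sync"
)

func main() {
  mu := &sync.Mutex{}
  cond := sync.NewCond(mu)
  ready := false

  go func() {
    mu.Lock()
    ready = true
    cond.Signal() // wake one waiter; it cannot return from Wait until we unlock
    mu.Unlock()
  }()

  mu.Lock()
  for !ready {
    cond.Wait() // releases mu while sleeping, re-acquires it before returning
  }
  fmt.Println("woken up, still holding the lock")
  mu.Unlock()
}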

Finally, here is the overall flow chart:

Clean up expired goWorker

In the NewPool() function, a goroutine will be started to periodically clean up the expired goWorker :

func (p *Pool) purgePeriodically() {
  heartbeat := time.NewTicker(p.options.ExpiryDuration)
  defer heartbeat.Stop()

  for range heartbeat.C {
    if p.IsClosed() {
      break
    }

    p.lock.Lock()
    expiredWorkers := p.workers.retrieveExpiry(p.options.ExpiryDuration)
    p.lock.Unlock()

    for i := range expiredWorkers {
      expiredWorkers[i].task <- nil
      expiredWorkers[i] = nil
    }

    if p.Running() == 0 {
      p.cond.Broadcast()
    }
  }
}

If the pool is closed, the goroutine exits directly. The cleanup interval is set by the ExpiryDuration option; if it is not set, the default value of 1s is used:

// src/github.com/panjf2000/ants/pool.go
func NewPool(size int, options ...Option) (*Pool, error) {
  if expiry := opts.ExpiryDuration; expiry < 0 {
    return nil, ErrInvalidPoolExpiry
  } else if expiry == 0 {
    opts.ExpiryDuration = DefaultCleanIntervalTime
  }
}

// src/github.com/panjf2000/ants/pool.go
const (
  DefaultCleanIntervalTime = time.Second
)

In each cleanup cycle, the p.workers.retrieveExpiry() method is called to take out the expired goWorkers. Because the goroutines started by these goWorkers are still blocked on the task channel waiting for tasks, a nil value is sent to each channel; when goWorker.run() receives a nil task it returns, ending the goroutine and avoiding a goroutine leak.

If all goWorkers have been cleaned up, some goroutines may still be blocked in retrieveWorker() on p.cond.Wait(), so p.cond.Broadcast() is called to wake them up.

Dynamic capacity modification

During operation, the capacity of the pool can be dynamically modified. Call the p.Tune(size int) method:

func (p *Pool) Tune(size int) {
  if capacity := p.Cap(); capacity == -1 || size <= 0 || size == capacity || p.options.PreAlloc {
    return
  }
  atomic.StoreInt32(&p.capacity, int32(size))
}

This simply stores the new capacity; it does not affect goWorkers that are currently executing, and if the pre-allocation option is set, the capacity cannot be changed.

The next time revertWorker() is executed, the new capacity is used to decide whether the goWorker can be put back. The same applies to retrieveWorker().
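
A short illustrative example:

package main

import (
  "fmt"

  "github.com/panjf2000/ants"
)

func main() {
  p, _ := ants.NewPool(100)
  defer p.Release()

  p.Tune(200)          // allow up to 200 goroutines from now on
  p.Tune(50)           // shrink later; workers already running are not interrupted
  fmt.Println(p.Cap()) // 50
}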

Shut down and restart Pool

After use, the Pool needs to be closed to avoid goroutine leaks. Call the pool object's Release() method to close it:

func (p *Pool) Release() {
  atomic.StoreInt32(&p.state, CLOSED)
  p.lock.Lock()
  p.workers.reset()
  p.lock.Unlock()
  p.cond.Broadcast()
}

p.workers.reset() is called to terminate the loopQueue or workerStack and do some cleanup. At the same time, to prevent goroutines from staying blocked on p.cond.Wait(), p.cond.Broadcast() is executed once.

The reset() methods of workerStack and loopQueue are similar: send nil into each worker's task channel to end its goroutine, then reset the fields:

// loopQueue version
func (wq *loopQueue) reset() {
  if wq.isEmpty() {
    return
  }

Releasing:
  if w := wq.detach(); w != nil {
    w.task <- nil
    goto Releasing
  }
  wq.items = wq.items[:0]
  wq.size = 0
  wq.head = 0
  wq.tail = 0
}

// stack version
func (wq *workerStack) reset() {
  for i := 0; i < wq.len(); i++ {
    wq.items[i].task <- nil
    wq.items[i] = nil
  }
  wq.items = wq.items[:0]
}

After the pool is closed, Reboot() can be called to restart it:

func (p *Pool) Reboot() {
  if atomic.CompareAndSwapInt32(&p.state, CLOSED, OPENED) {
    go p.purgePeriodically()
  }
}

Since the p.purgePeriodically() goroutine exits after p.Release() is called, a new goroutine has to be started here to resume the periodic cleanup.
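
A minimal sketch of the close/restart cycle (the error check and sleeps are illustrative):

package main

import (
  "fmt"
  "time"

  "github.com/panjf2000/ants"
)

func main() {
  p, _ := ants.NewPool(10)

  _ = p.Submit(func() { fmt.Println("before release") })
  time.Sleep(10 * time.Millisecond)

  p.Release()
  err := p.Submit(func() {})
  fmt.Println(err == ants.ErrPoolClosed) // true: the pool rejects new tasks

  p.Reboot() // reopen the pool and restart the periodic cleaner
  _ = p.Submit(func() { fmt.Println("after reboot") })
  time.Sleep(10 * time.Millisecond)
}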

PoolWithFunc and goWorkerWithFunc

The previous article also introduced another way to create a pool: NewPoolWithFunc(), which specifies a function; tasks are then submitted by calling p.Invoke() with the arguments for that function. The Pool and Worker structures used in this mode are as follows:

type PoolWithFunc struct {
  // ... fields shared with Pool omitted
  workers  []*goWorkerWithFunc
  poolFunc func(interface{})
}

type goWorkerWithFunc struct {
  pool        *PoolWithFunc
  args        chan interface{}
  recycleTime time.Time
}

They are similar to the Pool and goWorker introduced earlier, except that PoolWithFunc stores the function object passed in and uses a slice to hold its workers. goWorkerWithFunc uses interface{} as the type of its args channel, which is easy to understand: the function is already there, and only the data to run it with needs to be passed in:

func (w *goWorkerWithFunc) run() {
  go func() {
    for args := range w.args {
      if args == nil {
        return
      }
      w.pool.poolFunc(args)
      if ok := w.pool.revertWorker(w); !ok {
        return
      }
    }
  }()
}

It receives function arguments from the channel and executes the function object stored in the pool.
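
A minimal usage sketch of this mode (the pool function and arguments are illustrative):

package main

import (
  "fmt"
  "sync"

  "github.com/panjf2000/ants"
)

func main() {
  var wg sync.WaitGroup

  // The function is fixed when the pool is created...
  p, _ := ants.NewPoolWithFunc(10, func(arg interface{}) {
    defer wg.Done()
    fmt.Println("processing", arg.(int))
  })
  defer p.Release()

  // ...and Invoke only passes the data, which is sent over the worker's args channel.
  for i := 0; i < 5; i++ {
    wg.Add(1)
    _ = p.Invoke(i)
  }
  wg.Wait()
}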

Other details

task buffer channel

Recall the code that creates the p.workerCache sync.Pool object:

p.workerCache.New = func() interface{} {
  return &goWorker{
    pool: p,
    task: make(chan func(), workerChanCap),
  }
}

When there is no goWorker object cached in the sync.Pool, the New() method is called to create one. Note that the task channel is created with workerChanCap as its capacity. This variable is defined in the file ants.go:

var (
  // workerChanCap determines whether the channel of a worker should be a buffered channel
  // to get the best performance. Inspired by fasthttp at
  // https://github.com/valyala/fasthttp/blob/master/workerpool.go#L139
  workerChanCap = func() int {
    // Use blocking channel if GOMAXPROCS=1.
    // This switches context from sender to receiver immediately,
    // which results in higher performance (under go1.5 at least).
    if runtime.GOMAXPROCS(0) == 1 {
      return 0
    }

    // Use non-blocking workerChan if GOMAXPROCS>1,
    // since otherwise the sender might be dragged down if the receiver is CPU-bound.
    return 1
  }()
)

I kept the comments to make the comparison easier. ants borrows this from the implementation of the well-known web framework fasthttp. When GOMAXPROCS is 1 (that is, at most one operating system thread executes Go code at a time), sending on an unbuffered task channel suspends the sending goroutine and switches execution to the receiving goroutine immediately, which improves processing performance. If GOMAXPROCS is greater than 1, ants uses a buffered channel so that a CPU-bound receiving goroutine does not drag down the sending goroutine. Here is the corresponding code in fasthttp:

// src/github.com/valyala/fasthttp/workerpool.go
var workerChanCap = func() int {
  // Use blocking workerChan if GOMAXPROCS=1.
  // This immediately switches Serve to WorkerFunc, which results
  // in higher performance (under go1.5 at least).
  if runtime.GOMAXPROCS(0) == 1 {
    return 0
  }

  // Use non-blocking workerChan if GOMAXPROCS>1,
  // since otherwise the Serve caller (Acceptor) may lag accepting
  // new connections if WorkerFunc is CPU-bound.
  return 1
}()

Spin lock

ants uses atomic.CompareAndSwapUint32() to implement a spin lock. Unlike other kinds of locks, a spin lock does not go to sleep immediately after failing to acquire the lock; it keeps retrying. For code that can acquire the lock quickly this can greatly improve performance, because it avoids the thread switching caused by locking and unlocking:

type spinLock uint32

func (sl *spinLock) Lock() {
  backoff := 1
  for !atomic.CompareAndSwapUint32((*uint32)(sl), 0, 1) {
    for i := 0; i < backoff; i++ {
      runtime.Gosched()
    }
    backoff <<= 1
  }
}

func (sl *spinLock) Unlock() {
  atomic.StoreUint32((*uint32)(sl), 0)
}

// NewSpinLock instantiates a spin-lock.
func NewSpinLock() sync.Locker {
  return new(spinLock)
}

Exponential backoff is used here: first wait 1 iteration, yielding the processor to other goroutines with runtime.Gosched(). If the lock still cannot be acquired, wait 2 iterations, then 4, 8, 16, and so on. This avoids wasting CPU time when the lock cannot be acquired quickly.
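
A quick usage sketch, assuming the spinLock definitions above are in the same package (the counter and goroutine counts are illustrative):

package main

import (
  "fmt"
  "sync"
)

// The spinLock type and NewSpinLock from the listing above are assumed to be
// defined in this package.

func main() {
  var (
    lock    = NewSpinLock()
    wg      sync.WaitGroup
    counter int
  )
  for i := 0; i < 8; i++ {
    wg.Add(1)
    go func() {
      defer wg.Done()
      for j := 0; j < 1000; j++ {
        lock.Lock()
        counter++
        lock.Unlock()
      }
    }()
  }
  wg.Wait()
  fmt.Println(counter) // 8000
}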

Summary

The ants source code is short and concise and does not depend on any third-party libraries. Its attention to detail and its many performance optimizations are well worth savoring. I strongly recommend reading the source code; reading excellent source code can greatly improve your coding skills.

If you find a fun or useful Go library, feel free to open an issue on the Go Daily Library GitHub 😄

References

  1. ants GitHub: https://github.com/panjf2000/ants
  2. Go Daily Library GitHub: https://github.com/darjun/go-daily-lib

About me

My blog: https://darjun.github.io

Welcome to follow my WeChat public account [GoUpUp]; let's learn and improve together~

