Preface

Go's sync package provides two kinds of locks: sync.RWMutex and sync.Mutex. This article explains how sync.RWMutex is implemented, which scenarios it suits, and how it avoids reader/writer starvation. Let's take these questions to the source code.

Example

package main

import (
   "fmt"
   "math/rand"
   "sync"
)

type Content struct {
   rw  sync.RWMutex
   val int
}

func (c *Content) Read() int {
   c.rw.RLock()
   defer c.rw.RUnlock()
   return c.val
}
func (c *Content) Write(v int) {
   c.rw.Lock()
   defer c.rw.Unlock()
   c.val = v
}

func main() {
   const (
      readerNum = 100
      writerNum = 3
   )
   content := new(Content)
   var wg sync.WaitGroup
   for i := 0; i < writerNum; i++ {
      wg.Add(1)
      go func() {
         defer wg.Done()
         content.Write(rand.Intn(10))
      }()
   }
   for i := 0; i < readerNum; i++ {
      wg.Add(1)
      go func() {
         defer wg.Done()
         fmt.Println(content.Read())
      }()
   }

   wg.Wait() // without this, main may exit before the goroutines run
}

Mutual Exclusion

  • Read-read: multiple readers can hold the lock at the same time (demonstrated in the sketch below)
  • Read-write: a reader and a writer are mutually exclusive
  • Write-write: writers are mutually exclusive
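
These properties are easy to observe directly. Below is a minimal sketch (our own demo; the sleep durations are only illustrative) in which two readers hold the lock simultaneously while a writer has to wait for both:

package main

import (
   "fmt"
   "sync"
   "time"
)

func main() {
   var rw sync.RWMutex
   var wg sync.WaitGroup

   for i := 0; i < 2; i++ {
      wg.Add(1)
      go func(id int) {
         defer wg.Done()
         rw.RLock()
         defer rw.RUnlock()
         fmt.Printf("reader %d in\n", id)
         time.Sleep(100 * time.Millisecond) // both readers sleep here at once: read-read is not exclusive
      }(i)
   }

   wg.Add(1)
   go func() {
      defer wg.Done()
      time.Sleep(10 * time.Millisecond) // give the readers a head start
      rw.Lock()                         // blocks until both readers release: read-write is exclusive
      fmt.Println("writer in")
      rw.Unlock()
   }()
   wg.Wait()
}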

Source Code

type RWMutex struct {
    w           Mutex  // held if there are pending writers; a writer must lock w before taking the write lock
    writerSem   uint32 // semaphore for writers to wait on until active readers finish reading
    readerSem   uint32 // semaphore for readers to wait on until pending writers finish writing
    readerCount int32  // number of pending readers, i.e. readers that have successfully taken the read lock
    readerWait  int32  // number of departing readers: copied from readerCount when a writer arrives; counts read locks not yet released
}

Acquiring the Read Lock

func (rw *RWMutex) RLock() {
   ...
   if atomic.AddInt32(&rw.readerCount, 1) < 0 { 
      // A writer is pending, wait for it.
      runtime_SemacquireMutex(&rw.readerSem, false, 0)
   }
   ...
}

When atomic.AddInt32(&rw.readerCount, 1) returns a non-negative value, no writer is pending, so the reader has acquired the read lock and RLock returns immediately. A negative result means a writer is already queued: the writer has set readerCount to a very large negative number (explained in the Lock source below), so any reader arriving afterwards must queue on readerSem instead of grabbing the lock. This write preference is what prevents writer starvation.
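
This write preference can be observed from the outside. In the sketch below (our own demo, driven by illustrative sleeps), reader B arrives after a writer has queued, so even though reader A still holds the read lock, B only runs after the writer:

package main

import (
   "fmt"
   "sync"
   "time"
)

func main() {
   var rw sync.RWMutex

   rw.RLock() // reader A holds the read lock
   go func() {
      rw.Lock() // writer queues behind reader A and makes readerCount negative
      fmt.Println("writer acquired")
      rw.Unlock()
   }()
   time.Sleep(100 * time.Millisecond) // let the writer queue

   done := make(chan struct{})
   go func() {
      rw.RLock() // reader B arrives after the writer, so it queues on readerSem
      fmt.Println("reader B acquired")
      rw.RUnlock()
      close(done)
   }()
   time.Sleep(100 * time.Millisecond)

   rw.RUnlock() // reader A releases: the writer runs first, then reader B
   <-done
}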

Acquiring the Write Lock

func (rw *RWMutex) Lock() {
   ...
   // First, resolve competition with other writers.
   rw.w.Lock()
   // Announce to readers there is a pending writer.
   r := atomic.AddInt32(&rw.readerCount, -rwmutexMaxReaders) + rwmutexMaxReaders  // note: rwmutexMaxReaders = 1 << 30
   // Wait for active readers.
   if r != 0 && atomic.AddInt32(&rw.readerWait, r) != 0 {
      runtime_SemacquireMutex(&rw.writerSem, false, 0)
   }
   ...
}

When a writer acquires the write lock, it first locks the embedded mutex w, which keeps any other writer from entering at the same time.

atomic.AddInt32(&rw.readerCount, -rwmutexMaxReaders) turns readerCount into a very large negative number, which stops new readers from taking the read lock directly and forces them to queue on readerSem.

Having blocked later writers (via w) and later readers (via the negative readerCount), the writer only needs to wait for the readers that already hold the read lock. The snapshot r of active readers is added to readerWait; if any remain, the writer sleeps on writerSem until the last departing reader releases its read lock and wakes it.
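
To make the counter arithmetic concrete, here is a small sketch of our own (not library code) assuming 5 readers hold the lock when the writer arrives:

package main

import "fmt"

const rwmutexMaxReaders = 1 << 30

func main() {
   readerCount := int32(5)                  // 5 readers currently hold the read lock
   after := readerCount - rwmutexMaxReaders // what Lock's AddInt32 leaves behind: a large negative number
   r := after + rwmutexMaxReaders           // adding the constant back recovers 5, the readers to wait for
   fmt.Println(after < 0, r)                // prints: true 5
}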

Releasing the Read Lock

func (rw *RWMutex) RUnlock() {
   ...
   if r := atomic.AddInt32(&rw.readerCount, -1); r < 0 {
      // Outlined slow-path to allow the fast-path to be inlined
      rw.rUnlockSlow(r)
   }
   ...
}

func (rw *RWMutex) rUnlockSlow(r int32) {
    if r+1 == 0 || r+1 == -rwmutexMaxReaders {
        throw("sync: RUnlock of unlocked RWMutex")
    }
    // A writer is pending.
    if atomic.AddInt32(&rw.readerWait, -1) == 0 {
        // The last reader unblocks the writer.
        runtime_Semrelease(&rw.writerSem, false, 1)
    }
}

As we saw in RLock, every read-lock acquisition increments readerCount, so RUnlock decrements it by one. If the result is negative, a writer is waiting for the lock, and the slow path rUnlockSlow handles the rest:

  1. Decrement readerWait, which tracks how many readers that predate the writer still hold the read lock (see the trace after this list).
  2. If the result is 0, every reader that held the read lock when the writer arrived has now released it, so the last departing reader wakes the writer blocked on writerSem.
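
A hypothetical trace with two active readers makes the hand-off explicit:

writer:  Lock()      // readerCount = 2 - 2^30, readerWait = 2, writer sleeps on writerSem
reader1: RUnlock()   // readerCount = 1 - 2^30 (< 0, slow path), readerWait = 1
reader2: RUnlock()   // readerCount = 0 - 2^30 (< 0, slow path), readerWait = 0
                     // the last reader releases writerSem and the writer wakes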

Releasing the Write Lock

func (rw *RWMutex) Unlock() {
   ...
   // Announce to readers there is no active writer.
   r := atomic.AddInt32(&rw.readerCount, rwmutexMaxReaders)
   if r >= rwmutexMaxReaders {
      throw("sync: Unlock of unlocked RWMutex")
   }
   // Unblock blocked readers, if any.
   for i := 0; i < int(r); i++ {
      runtime_Semrelease(&rw.readerSem, false, 0)
   }
   // Allow other writers to proceed.
   rw.w.Unlock()
   ...
}

Unlock restores readerCount with atomic.AddInt32(&rw.readerCount, rwmutexMaxReaders). The restored value r is exactly the number of readers that queued up while the writer held the lock, so runtime_Semrelease(&rw.readerSem, false, 0) is called r times to wake them all, letting each acquire the read lock. Finally, rw.w.Unlock() allows other writers to proceed.
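
Continuing the earlier trace, suppose 3 readers called RLock while the writer held the lock (a hypothetical count for illustration):

3 blocked readers:  each AddInt32(&rw.readerCount, 1) still ran, so readerCount = 3 - 2^30,
                    and all 3 sleep on readerSem
writer: Unlock()    // readerCount = (3 - 2^30) + 2^30 = 3 = r
                    // runtime_Semrelease(&rw.readerSem, ...) runs r = 3 times, waking every queued reader
                    // rw.w.Unlock() then admits the next writer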

Performance Comparison

The figures below come from the benchmark in reference [1], which compares sync.Mutex and sync.RWMutex. writeRatio is the reader:writer ratio, and the time reduction is measured relative to sync.Mutex. In read-heavy, write-light workloads, the read-write lock yields a substantial speedup.

writeRatio        3      10      20      50      100     1000
time reduction    24%    71.3%   83.7%   90.9%   93.5%   95.7%
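
To reproduce this kind of comparison locally, a minimal benchmark sketch along these lines can be run with go test -bench=. (the names and the fixed 9:1 read:write mix are our own assumptions, not the cited author's code):

package bench

import (
   "sync"
   "testing"
)

var (
   mu  sync.Mutex
   rw  sync.RWMutex
   val int
)

func BenchmarkMutex(b *testing.B) {
   b.RunParallel(func(pb *testing.PB) {
      i := 0
      for pb.Next() {
         if i%10 == 0 { // 1 write for every 9 reads
            mu.Lock()
            val++
            mu.Unlock()
         } else {
            mu.Lock()
            _ = val
            mu.Unlock()
         }
         i++
      }
   })
}

func BenchmarkRWMutex(b *testing.B) {
   b.RunParallel(func(pb *testing.PB) {
      i := 0
      for pb.Next() {
         if i%10 == 0 {
            rw.Lock()
            val++
            rw.Unlock()
         } else {
            rw.RLock()
            _ = val
            rw.RUnlock()
         }
         i++
      }
   })
}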

References

  1. https://segmentfault.com/a/11...
