Redisson's distributed lock is built on Redis, and Curator's distributed lock is built on ZooKeeper.
Curator's distributed lock was already covered in the earlier article on ZooKeeper distributed locks. This article does not walk through the source code in detail; instead, it describes the process by which each of them implements a distributed lock.
Redisson's distributed lock
Suppose a thread A in some service applies to Redis for a lock. It must provide the lock's name (say, lock), its own identity (say, Thread-A), and an expiration time for the key (say, 30s). If no expiration time were set and thread A exited abnormally, the key would remain in Redis forever; other threads applying for the lock would then block indefinitely, and the lock could never be acquired again.
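As a rough illustration, this is what that acquisition looks like through Redisson's Java API; the Redis address, lock name, and 30s lease below are just the assumptions from this example:

```java
import java.util.concurrent.TimeUnit;

import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonLockExample {
    public static void main(String[] args) {
        // Assumed local Redis instance; adjust the address for your environment.
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        RLock lock = redisson.getLock("lock");   // the lock name from the example
        lock.lock(30, TimeUnit.SECONDS);         // explicit 30s expiration (lease time)
        try {
            // critical section
        } finally {
            lock.unlock();
        }
        redisson.shutdown();
    }
}
```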
When applying for the lock, thread A sends a Lua script to Redis.
To support reentrancy, the lock is stored as a hash: the Redis key is the lock name, the hash field identifies the lock holder, and the hash value is the reentrancy count.
After Redis receives the request, it checks whether the key already exists. Since this is the first time thread A has applied for this key, it obviously does not, so Redis creates the hash, sets the holder's count to 1, and sets the expiration time.
If thread A applies for the lock again, the key already exists, so Redis checks whether its holder is the current applicant. The holder is indeed thread A, so Redis increments the count by 1, from 1 to 2, and resets the expiration time.
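The check-then-set logic that the Lua script performs atomically can be sketched in plain Java like this; the map merely stands in for the Redis hash, and none of these names are Redisson's actual internals:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the acquire logic only; the outer map stands in for Redis
// and the inner map is the hash stored under the lock name.
class ReentrantAcquireSketch {
    private final Map<String, Map<String, Integer>> store = new HashMap<>();

    /** Returns true if the lock was acquired (first time or reentrant), false otherwise. */
    synchronized boolean tryAcquire(String lockName, String holder) {
        Map<String, Integer> hash = store.get(lockName);
        if (hash == null) {                        // key missing: first acquisition
            hash = new HashMap<>();
            hash.put(holder, 1);                   // field = holder, value = count 1
            store.put(lockName, hash);
            // ...set the 30s expiration here (PEXPIRE in Redis)
            return true;
        }
        if (hash.containsKey(holder)) {            // same holder: reentrant acquisition
            hash.merge(holder, 1, Integer::sum);   // count 1 -> 2 (HINCRBY in Redis)
            // ...reset the expiration here
            return true;
        }
        return false;                              // held by another thread
    }
}
```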
We set a 30s expiration above. If the work takes longer than 30s, the key expires before the work finishes, the lock becomes invalid, and another thread can acquire it, breaking mutual exclusion. To prevent this, Redisson has a watchdog mechanism.
With a 30s expiration, the watchdog (in fact, a background task) sends a request to Redis every 30s/3 = 10s; if the key's holder is still the current thread, it resets the expiration back to 30s.
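Note that the watchdog only runs for locks acquired without an explicit lease time. A minimal sketch, assuming a local Redis instance; setLockWatchdogTimeout controls the 30s default:

```java
import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class WatchdogExample {
    public static void main(String[] args) {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379"); // assumed local Redis
        config.setLockWatchdogTimeout(30_000);  // watchdog timeout in ms; 30s is the default
        RedissonClient redisson = Redisson.create(config);

        RLock lock = redisson.getLock("lock");
        lock.lock();            // no lease time given, so the watchdog keeps renewing the key
        try {
            // long-running work; the expiration is reset to 30s roughly every 10s
        } finally {
            lock.unlock();      // releasing the lock also stops the renewal
        }
        redisson.shutdown();
    }
}
```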
While thread A holds the lock, thread B also applies for it.
Since the key exists and is not held by thread B, thread B fails to acquire the lock.
After failing, thread B subscribes to the lock's unlock channel and creates a semaphore to wait on.
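The subscribe-and-wait pattern can be sketched with plain java.util.concurrent types; this is only an illustration of the idea, not Redisson's internal classes:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// The failed acquirer parks on a semaphore; the pub/sub listener releases it
// when the unlock message for this lock arrives.
class UnlockWaiter {
    private final Semaphore permit = new Semaphore(0);

    // Called from the subscription callback when the lock's unlock message is received.
    void onUnlockMessage() {
        permit.release();
    }

    // Called by the thread that failed to acquire the lock; waits at most maxWaitMs
    // before giving up and retrying anyway.
    boolean awaitUnlock(long maxWaitMs) throws InterruptedException {
        return permit.tryAcquire(maxWaitMs, TimeUnit.MILLISECONDS);
    }
}
```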
When thread A releases the lock, Redis checks whether it is the current holder. It is, so the count is decremented from 2 to 1, meaning thread A still holds the lock.
When thread A releases the lock again, the count drops to 0, meaning thread A no longer needs the lock. The key is deleted and Redis publishes the lock's unlock message.
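The release side mirrors the acquire sketch shown earlier; again, this is an illustrative rendering of the logic, not Redisson's actual code:

```java
import java.util.Map;

// Companion to the acquire sketch above; the same in-memory map stands in for Redis.
class ReentrantReleaseSketch {
    /** Returns true once the lock is fully released (when the unlock message would be published). */
    static boolean release(Map<String, Map<String, Integer>> store, String lockName, String holder) {
        Map<String, Integer> hash = store.get(lockName);
        if (hash == null || !hash.containsKey(holder)) {
            throw new IllegalMonitorStateException("not the lock holder");
        }
        int count = hash.get(holder) - 1;        // HINCRBY lock holder -1
        if (count > 0) {
            hash.put(holder, count);             // still re-entered: keep holding, reset expiration
            return false;
        }
        store.remove(lockName);                  // count reached 0: delete the key
        // ...publish the unlock message on the lock's channel here
        return true;
    }
}
```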
Thread B has been subscribed to this lock's unlock channel. When A unlocks, thread B receives the message and releases the semaphore.
Once the semaphore is released, thread B reapplies for the lock, following the same process as above. If it still cannot acquire the lock, it keeps waiting on the subscription; of course, it does not wait indefinitely to acquire the lock.
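From the caller's point of view, this bounded wait is what tryLock exposes; a minimal sketch, where the 10s wait time is an assumed value:

```java
import java.util.concurrent.TimeUnit;

import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class BoundedWaitExample {
    public static void main(String[] args) throws InterruptedException {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379"); // assumed local Redis
        RedissonClient redisson = Redisson.create(config);

        RLock lock = redisson.getLock("lock");
        // Wait at most 10s for the lock; if acquired, hold it with a 30s lease.
        boolean acquired = lock.tryLock(10, 30, TimeUnit.SECONDS);
        if (acquired) {
            try {
                // critical section
            } finally {
                lock.unlock();
            }
        } else {
            // gave up after 10s without the lock
        }
        redisson.shutdown();
    }
}
```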
Curator's distributed lock
Again, suppose a thread A in some service applies to ZooKeeper for a lock. It must provide the lock's path and ask ZooKeeper to create an ephemeral sequential node. Since no node has been created under this lock yet, assume the ephemeral sequential node is /lock/0001. After creating it, ZooKeeper returns this path to thread A.
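Under the hood this is an ephemeral sequential znode. A sketch of the create call with the raw ZooKeeper client (the connection string and node-name prefix are assumptions; Curator's real node names differ):

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class CreateLockNodeExample {
    public static void main(String[] args) throws Exception {
        // Assumed local ZooKeeper; the parent node /lock is assumed to exist already.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30_000, event -> { });

        // EPHEMERAL_SEQUENTIAL: ZooKeeper appends an increasing suffix to the name
        // and deletes the node automatically if this session dies.
        String path = zk.create("/lock/node-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        System.out.println("created " + path);   // e.g. /lock/node-0000000001

        zk.close();
    }
}
```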
After getting the path, thread A fetches all the child nodes under /lock (here only /lock/0001) and sorts them.
After sorting, it checks whether its own node is the first. Since it is, thread A has acquired the lock; it stores the thread information and the node path in memory, with a default lockCount of 1.
If thread A applies for the lock again, it first checks whether memory already holds an entry for the current thread and this path. If so, it increments lockCount by 1, making lockCount 2.
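This reentrant behavior is what Curator's InterProcessMutex recipe exposes; a minimal usage sketch, assuming a local ZooKeeper and an arbitrary retry policy:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorLockExample {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "127.0.0.1:2181", new ExponentialBackoffRetry(1000, 3)); // assumed local ZooKeeper
        client.start();

        InterProcessMutex mutex = new InterProcessMutex(client, "/lock");
        mutex.acquire();        // creates the ephemeral sequential node, lockCount = 1
        mutex.acquire();        // reentrant: same thread, lockCount = 2
        try {
            // critical section
        } finally {
            mutex.release();    // lockCount = 1, node kept
            mutex.release();    // lockCount = 0, node deleted
        }
        client.close();
    }
}
```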
While thread A holds the lock, thread B also applies for it.
Since the lock already has an ephemeral sequential node 0001, ZooKeeper creates /lock/0002 and returns it to thread B.
After getting the path, thread B also fetches all the child nodes under /lock (now /lock/0001 and /lock/0002) and sorts them.
Thread B's node is obviously not the first, so it cannot get the lock; it therefore sets a watch on /lock/0001 and waits.
When thread A releases the lock, it decrements the lockCount in memory by 1 and then checks the result. If lockCount is greater than 0, the lock was re-entered earlier and must still be held. If it is less than 0, the lock has already been released. If it equals 0, the lock is no longer needed, so the watcher is removed and the ephemeral sequential node is deleted.
When ZooKeeper deletes /lock/0001, its watch mechanism notifies thread B, which wakes up. Thread B fetches the child node list from ZooKeeper again and sorts it. Now /lock/0002 is first, so thread B acquires the lock, stores the thread information and node path in memory, and sets the default lockCount to 1.
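The whole acquire loop can be sketched against the raw ZooKeeper client as follows; this is a simplification of what the Curator recipe does and leaves out session handling, reentrancy bookkeeping, and error recovery:

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Simplified sketch of the acquire loop described above.
public class ZkLockSketch {
    public static void acquire(ZooKeeper zk) throws Exception {
        String self = zk.create("/lock/node-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        String selfName = self.substring("/lock/".length());

        while (true) {
            List<String> children = zk.getChildren("/lock", false);
            Collections.sort(children);
            int index = children.indexOf(selfName);
            if (index == 0) {
                return;                               // lowest node: lock acquired
            }
            String previous = "/lock/" + children.get(index - 1);
            CountDownLatch latch = new CountDownLatch(1);
            // Watch only the immediately preceding node, not the whole list.
            if (zk.exists(previous, event -> latch.countDown()) == null) {
                continue;                             // predecessor already gone: re-check
            }
            latch.await();                            // wake up when the predecessor changes
        }
    }
}
```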
That is roughly the process of a ZooKeeper distributed lock.
So why an ephemeral node? If thread A exits abnormally, the ephemeral node is deleted automatically, so any service watching that node can be notified through the watch mechanism.
Why sequential nodes? A single ephemeral node could also guarantee mutual exclusion, but then many services would compete to create that one node; sequential nodes let each waiter watch only its predecessor.
Therefore, ZooKeeper's ephemeral nodes do not need an expiration time to avoid deadlock the way Redis does, and its watch mechanism does not need a watchdog to keep the lock alive the way Redisson does.
There is also a difference in reentrancy: with Curator the reentrancy count is kept on the client side in memory, while with Redisson it is kept in Redis itself.