Background
Before JDK 1.5, synchronized was the primary tool for solving Java concurrency problems, and it comes in three forms:
- Ordinary synchronized method: locks the current instance object
- Static synchronized method: locks the Class object of the current class
- Synchronized block: locks the object given in the parentheses
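The three forms can be sketched in one small, self-contained class (the class and field names here are illustrative):

```java
public class SyncForms {
    private int count = 0;
    private static int total = 0;
    private final Object lock = new Object();

    // 1. Instance method: implicitly locks the current instance (this)
    public synchronized void increment() {
        count++;
    }

    // 2. Static method: implicitly locks the Class object (SyncForms.class)
    public static synchronized void incrementTotal() {
        total++;
    }

    // 3. Synchronized block: locks the object in the parentheses
    public void incrementWithBlock() {
        synchronized (lock) {
            count++;
        }
    }

    public int getCount() { return count; }
    public static int getTotal() { return total; }

    public static void main(String[] args) {
        SyncForms s = new SyncForms();
        s.increment();
        s.incrementWithBlock();
        SyncForms.incrementTotal();
        System.out.println(s.getCount() + " " + SyncForms.getTotal());
    }
}
```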
Take the synchronized block as an example:

```java
public void test() {
    synchronized (object) {
        i++;
    }
}
```
Decompiling with javap -v yields instructions like the following:
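A representative, abridged listing for a method like test() looks as follows (exact offsets and constant-pool indexes vary with the compiler):

```
public void test();
  Code:
     0: aload_0
     1: getfield      #2    // Field object:Ljava/lang/Object;
     4: dup
     5: astore_1
     6: monitorenter          // acquire the monitor
     7: aload_0
     8: dup
     9: getfield      #3    // Field i:I
    12: iconst_1
    13: iadd
    14: putfield      #3    // Field i:I
    17: aload_1
    18: monitorexit           // release on the normal path
    19: goto          27
    22: astore_2
    23: aload_1
    24: monitorexit           // release on the exception path (implicit try-finally)
    25: aload_2
    26: athrow
    27: return
```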
After compilation, a monitorenter instruction is inserted at the start of the synchronized block, and monitorexit is inserted both at the normal end of the block and on the exception path (an implicit try-finally). Every object has a monitor associated with it. When a thread executes monitorenter, it attempts to take ownership of the object's monitor, which is what it means to acquire the object's lock.

When another thread reaches the synchronized block and cannot obtain the monitor, it is blocked. Control then has to be handed to the operating system, which means switching from user mode to kernel mode: the OS is responsible for scheduling threads and changing their states, and frequent switching between these two modes (context switches) is expensive. Going to the kernel at the slightest hint of contention causes a lot of overhead, which is why this is called a heavyweight lock and why it is slow. It also left many developers with a deep-rooted impression: the synchronized keyword performs poorly compared to other synchronization mechanisms.
The evolution of locks
Coming to JDK 1.6: how can the lock be optimized to make it lighter? The answer:

Lightweight lock: CPU CAS

If the CPU can acquire and release the lock with a simple CAS (compare-and-swap), there is no context switch, which is naturally much lighter than a heavyweight lock. However, when contention is fierce, spinning on CAS just wastes CPU; at that point it is better to upgrade to a heavyweight lock and let the competing threads block and queue. So there is a process of upgrading a lightweight lock to a heavyweight lock.
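The idea can be sketched with a user-level spin lock built on a single CAS. This is a simplified model using AtomicReference, not HotSpot's actual lightweight-lock code (which CASes a pointer to a stack lock record into the mark word):

```java
import java.util.concurrent.atomic.AtomicReference;

// A minimal spin lock: acquire/release via a single CAS, no kernel involvement.
public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Spin until the CAS succeeds: cheap under low contention,
        // but wasted CPU when contention is fierce (hence the upgrade).
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait();
        }
    }

    public void unlock() {
        // Only the owner can release the lock.
        owner.compareAndSet(Thread.currentThread(), null);
    }

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();
                try { counter[0]++; } finally { lock.unlock(); }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter[0]);
    }
}
```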
Programmers never stop pursuing the ultimate. The HotSpot authors found that in most cases a lock is not only uncontended but is acquired many times by the same thread. If that thread still pays for a CAS every time it re-acquires the lightweight lock, there is a measurable cost. How can this cost be made smaller?
Biased lock
A biased lock means the lock object "leans toward" a single thread: the lock object records that thread's ID, and when the same thread acquires the lock again it just compares the ID and enters directly. This is a simple load-and-test, naturally lighter than a CAS.

In a multi-threaded environment, however, the same thread cannot hold the lock forever; other threads need to work too. When multiple threads compete, the biased lock has to be upgraded.
Let's think about it first: can a biased lock skip the lightweight stage and upgrade directly to a heavyweight lock?

It is the same lock object throughout, but it has multiple lock states, and the purpose is obvious:

The fewer resources occupied, the faster the program runs.

Neither the biased lock nor the lightweight lock calls the operating system mutex (Mutex Lock); they exist purely to improve performance. With these two extra lock states, the most appropriate strategy can be adopted in each scenario. In summary:
- Biased lock: no competition; only one thread ever enters the critical section
- Lightweight lock: multiple threads enter the critical section alternately
- Heavyweight lock: multiple threads enter the critical section at the same time; hand the contention to the operating system mutex
At this point you should see the big picture, but many questions remain:
- Where does the lock object store the thread ID used to identify the biased thread?
- How exactly does the whole upgrade process transition?
To answer these questions, you need to know the structure of the Java object header.
Understanding the Java object header
Conventionally, identifying a thread by ID would require maintaining a mapping, and maintaining that mapping separately would itself have to be thread-safe. Following Occam's razor: everything in Java is an object, and any object can be used as a lock, so instead of maintaining a separate mapping it is better to keep the lock information on the Java object itself.
The Java object header consists of at most three parts:
- Mark Word
- Class Metadata Address
- Array Length (present only if the object is an array)
Among them, the Mark Word is where the lock state is kept. The object's lock can be upgraded from biased to lightweight to heavyweight; together with the initial lock-free state, that makes 4 states to express. Packing that much information into an object header naturally requires bit-level storage. On a 64-bit JVM it is stored as shown below (pay attention to the color marks). For the exact comments, see the HotSpot (1.8) source file path/hotspot/src/share/vm/oops/markOop.hpp, line 30.
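For reference, the 64-bit Mark Word layout described in the markOop.hpp comment can be paraphrased as follows (field widths in bits; the two low-order lock bits are on the right):

```
 unused:25 | identity_hashcode:31 | unused:1 | age:4 | biased:0 | lock:01   no lock
 thread:54 |       epoch:2        | unused:1 | age:4 | biased:1 | lock:01   biased
 ptr_to_lock_record:62                                          | lock:00   lightweight
 ptr_to_heavyweight_monitor:62                                  | lock:10   heavyweight
                                                                | lock:11   marked for GC
```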
With this basic information in hand, we only need to figure out how the lock bits in the Mark Word change.
Understanding biased locks
The picture above still seems abstract. As programmers, we prefer to let code speak. Helpfully, the OpenJDK project provides a tool for viewing the memory layout of objects: JOL (Java Object Layout).
Maven:

```xml
<dependency>
    <groupId>org.openjdk.jol</groupId>
    <artifactId>jol-core</artifactId>
    <version>0.14</version>
</dependency>
```

Gradle:

```groovy
implementation 'org.openjdk.jol:jol-core:0.14'
```
Next, let's use code to look at the biased lock in detail.

Note: the diagrams above read (left to right) from the high bits to the low bits, while the JOL output reads (left to right) from the low bits to the high bits.
Look at the test code:

Scenario 1

```java
public static void main(String[] args) {
    Object o = new Object();
    log.info("Before entering the synchronized block, MarkWord is:");
    log.info(ClassLayout.parseInstance(o).toPrintable());
    synchronized (o) {
        log.info("Inside the synchronized block, MarkWord is:");
        log.info(ClassLayout.parseInstance(o).toPrintable());
    }
}
```
Look at the output:

The JOL version used above is 0.14, which lets us quickly see the specific bit values. Next we will switch to version 0.16, because it prints a friendlier description. Same code; look at the result:
Seeing this result you may be puzzled: biased locking has been enabled by default since JDK 1.6, so why is the freshly created object in a lock-free state, and why does entering the synchronized block skip the biased lock and go straight to a lightweight lock?

Although biased locking is enabled by default, it is enabled with a delay of about 4 seconds. The reason is that the JVM itself uses synchronized in many places internally; if biasing were active immediately, those locks would be upgraded as soon as competition occurred, bringing extra performance loss, hence the delay strategy.

We can use the parameter -XX:BiasedLockingStartupDelay=0 to change the delay to 0, but that is not recommended. The current situation can be summed up in one picture:
Scenario 2

Now delay object creation by 5 seconds and see whether the bias takes effect:

```java
public static void main(String[] args) throws InterruptedException {
    // sleep 5s
    Thread.sleep(5000);
    Object o = new Object();
    log.info("Before entering the synchronized block, MarkWord is:");
    log.info(ClassLayout.parseInstance(o).toPrintable());
    synchronized (o) {
        log.info("Inside the synchronized block, MarkWord is:");
        log.info(ClassLayout.parseInstance(o).toPrintable());
    }
}
```
Review the results:

This matches our expectation, but the biasable state in the output does not appear in the Mark Word table. It is in fact an anonymously biased state: the object is biasable but not yet biased to any thread. The JVM sets this up for us at object initialization.
So when a thread enters the synchronized block:
- Biasable state: CAS the thread ID into the Mark Word; if it succeeds, the biased lock is acquired
- Non-biasable state: a lightweight lock is used instead
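The decision above can be sketched as a toy model in plain Java. The class name and the AtomicLong standing in for the mark word are purely illustrative; this is not what the JVM actually executes:

```java
import java.util.concurrent.atomic.AtomicLong;

public class BiasedEntry {
    static final long ANONYMOUS = 0L;          // anonymously biased: no owner yet
    final AtomicLong biasedThreadId = new AtomicLong(ANONYMOUS);

    /** Returns the kind of lock the thread ends up with. */
    String enter(long threadId) {
        long owner = biasedThreadId.get();
        if (owner == threadId) {
            // Same thread re-entering: a plain load-and-test, no CAS needed.
            return "biased (already owner, plain load-and-test)";
        }
        if (owner == ANONYMOUS && biasedThreadId.compareAndSet(ANONYMOUS, threadId)) {
            // Anonymously biased: install this thread's ID with one CAS.
            return "biased (CAS installed thread id)";
        }
        // Bias held by another thread: revoke and fall back to a lightweight lock.
        return "lightweight";
    }

    public static void main(String[] args) {
        BiasedEntry e = new BiasedEntry();
        System.out.println(e.enter(1)); // first thread: CAS wins
        System.out.println(e.enter(1)); // same thread: cheap re-entry
        System.out.println(e.enter(2)); // another thread: upgrade
    }
}
```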
A new question arises: the lock object is now biased to a specific thread. If a new thread executes the synchronized block, will the lock re-bias to the new thread?
Scenario 3

```java
public static void main(String[] args) throws InterruptedException {
    // sleep 5s
    Thread.sleep(5000);
    Object o = new Object();
    log.info("Before entering the synchronized block, MarkWord is:");
    log.info(ClassLayout.parseInstance(o).toPrintable());
    synchronized (o) {
        log.info("Inside the synchronized block, MarkWord is:");
        log.info(ClassLayout.parseInstance(o).toPrintable());
    }
    Thread t2 = new Thread(() -> {
        synchronized (o) {
            log.info("New thread acquired the lock, MarkWord is:");
            log.info(ClassLayout.parseInstance(o).toPrintable());
        }
    });
    t2.start();
    t2.join();
    log.info("Main thread checks the lock object again, MarkWord is:");
    log.info(ClassLayout.parseInstance(o).toPrintable());
    synchronized (o) {
        log.info("Main thread re-enters the synchronized block, MarkWord is:");
        log.info(ClassLayout.parseInstance(o).toPrintable());
    }
}
```
Looking at the results, something strange happens:

Mark 1: the initial biasable state
Mark 2: after biasing to the main thread, the main thread exits the synchronized block
Mark 3: a new thread enters the synchronized block; the lock is upgraded to a lightweight lock
Mark 4: the new thread's lightweight lock exits the synchronized block; the main thread then sees the object has become non-biasable
Mark 5: since the object can no longer be biased, just as in Scenario 1, the main thread naturally uses a lightweight lock when it re-enters the synchronized block

So far, Scenarios 1, 2 and 3 can be summarized in one picture:
Judging from this run, the biased lock seems very limited: once it has biased toward some thread, any other thread that tries to acquire the lock turns it into a lightweight lock. It is actually not like this. If you look closely at Mark 2 (the biased state), there is an epoch field we have not mentioned; this value is the key to breaking that limitation. Before explaining epoch, though, we need one more concept: bias revocation.
Bias revocation
Before really explaining bias revocation, we need to distinguish two things: revoking a biased lock and releasing a biased lock are different.
- Revocation: when competition from multiple threads means the biased mode can no longer be used, the lock object is told to stop using biased mode
- Release: what you would normally expect; it corresponds to exiting the synchronized method or reaching the end of the synchronized block
What is bias revocation?

It means withdrawing from the biased state back to the ordinary state, i.e. changing the biased flag in the Mark Word (the third low-order bit) from 1 to 0.
If only one thread ever acquires the lock, then thanks to the "leaning" mechanism there is no reason to revoke the bias; revocation can therefore only happen under competition.
Revoking a biased lock must not disturb the thread that holds it, so the JVM waits until all threads reach a safepoint (a safepoint is a state the JVM uses, for example during garbage collection, in which reference relationships cannot change and all threads are suspended). At this safepoint, the thread holding the biased lock is suspended.
At the safepoint, the holding thread may be in different situations. First the conclusions (stated this way because that is how the source code is written; possible confusion is explained later):
- If the thread is no longer alive, or is alive but has already exited the synchronized block: simply revoke the bias
- If the thread is alive and still inside the synchronized block: the biased lock must be upgraded to a lightweight lock
This still seems to have nothing to do with epoch, because it is not the whole story. A biased lock improves performance only in specific scenarios, but programs do not always match those scenarios. For example (with biased locking enabled):
- One thread creates a large number of objects and performs initial synchronized operations on them, and then another thread uses those objects as locks. This leads to a large number of bias revocations
- Biased locking is used even though multi-threaded competition is known to exist (for example a producer/consumer queue), which also leads to repeated revocations
Clearly both scenarios lead to bias revocations. The cost of a single revocation is negligible, but the cost of a large number of them cannot be ignored. What to do? We want neither to disable biased locking nor to endure the cost of mass revocation. The solution is a tiered fallback design.
Bulk rebias

This is the quick fix for the first scenario. At the granularity of a class, the JVM maintains a bias revocation counter for each class; every time an object of that class has its bias revoked, the counter is incremented. When the counter reaches the rebias threshold (default 20):

BiasedLockingBulkRebiasThreshold = 20

the JVM decides that the biased locking of this class is problematic and performs a bulk rebias. The mechanism it uses is the epoch.
Epoch

Epoch, as the word suggests, is like a timestamp. Each class has an epoch value, and the mark word of every biased-state object of that class also carries an epoch field, whose initial value is the class's epoch at the time the object was created (at that moment the two are equal). Each time a bulk rebias occurs, the class's epoch is incremented by 1, and the stacks of all threads in the JVM are traversed at the same time:
- Objects of this class that are biased and currently locked have their epoch field updated to the new value
- Objects of this class that are biased but not currently locked (not held by any thread now, though a thread held them before, so the mark word is still biased) keep their old epoch value

This way, the next time the lock is acquired, if the object's epoch differs from its class's epoch, the object's bias is treated as expired: even though it is currently biased to another thread, no revocation is performed. Instead, a CAS simply replaces the thread ID in the mark word with the current thread's ID. This is a real optimization; after all, no lock upgrade happens.

If the epoch values are the same, no bulk rebias has occurred; if the mark word already holds a thread ID and another thread competes for the lock, the lock is naturally upgraded (as in the earlier example, where epoch=0).
Bulk rebias is the first tier of the fallback; the second tier is:

Bulk revoke

Suppose the class's counter keeps growing past the rebias threshold and reaches the bulk revoke threshold (default 40):

BiasedLockingBulkRevokeThreshold = 40

Then the JVM decides this class's usage pattern is inherently multi-threaded and marks the class itself as non-biasable. From then on, locks on objects of this class go directly to the lightweight-lock logic.
This is the second tier, but in the transition from the first tier to the second, that is, before biased locking is disabled for the class entirely, there is one more chance to recover, governed by another timer:

BiasedLockingDecayTime = 25000
- If within 25 seconds of the last bulk rebias the cumulative revocation count reaches 40, a bulk revoke occurs (biased locking for the class is completely over)
- If more than 25 seconds have passed since the last bulk rebias, the counter (then in the range [20, 40)) is reset, giving the class another chance

If you are interested, you can write code around these thresholds and observe the mark word changes.
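The tiered thresholds above can be sketched as a toy bookkeeping model in plain Java. The class name and the injected clock are illustrative; the real logic lives in HotSpot's biasedLocking.cpp:

```java
public class RevocationCounter {
    static final int REBIAS_THRESHOLD = 20;   // BiasedLockingBulkRebiasThreshold
    static final int REVOKE_THRESHOLD = 40;   // BiasedLockingBulkRevokeThreshold
    static final long DECAY_MS = 25_000;      // BiasedLockingDecayTime

    int revocations = 0;
    long lastBulkRebiasAt = Long.MIN_VALUE;
    boolean classBiasable = true;

    /** Record one bias revocation at time nowMs and return the action taken. */
    String onRevocation(long nowMs) {
        if (!classBiasable) {
            return "none (class already non-biasable)";
        }
        // Decay: if more than 25s have passed since the last bulk rebias,
        // reset the counter and give biasing another chance.
        if (lastBulkRebiasAt != Long.MIN_VALUE
                && nowMs - lastBulkRebiasAt > DECAY_MS
                && revocations >= REBIAS_THRESHOLD) {
            revocations = 0;
        }
        revocations++;
        if (revocations == REBIAS_THRESHOLD) {
            lastBulkRebiasAt = nowMs;          // bulk rebias: class epoch++
            return "bulk rebias (epoch++)";
        }
        if (revocations >= REVOKE_THRESHOLD) {
            classBiasable = false;             // class marked non-biasable
            return "bulk revoke (class marked non-biasable)";
        }
        return "single revocation";
    }

    public static void main(String[] args) {
        // Case 1: 20 revocations in a burst -> bulk rebias;
        // 20 more shortly after -> bulk revoke.
        RevocationCounter c = new RevocationCounter();
        String last = "";
        for (int i = 0; i < 20; i++) last = c.onRevocation(1_000);
        System.out.println(last);
        for (int i = 0; i < 20; i++) last = c.onRevocation(2_000);
        System.out.println(last);

        // Case 2: 20 revocations, then quiet for over 25s -> counter decays.
        RevocationCounter d = new RevocationCounter();
        for (int i = 0; i < 20; i++) d.onRevocation(0);
        System.out.println(d.onRevocation(30_000));
    }
}
```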
At this point, the entire workflow of the biased lock can be represented in one picture:

By now you should have a basic understanding of the biased lock, but several questions remain unanswered. Let's continue:
Where is the hashcode?

In Scenario 1 above, the lock-free object header contains no hashcode, and in the biased state the header still has no hashcode. So where is our hashcode?

First, the hashcode is not written into the object header when the object is created; it is only stored there after the first call to Object::hashCode() or System::identityHashCode(Object). Once generated, the identity hashcode must never change, yet the biased lock keeps rewriting the lock object's mark word back and forth, which clearly conflicts with where the hashcode lives. So what happens? Let's verify with code:
Scenario 1

```java
public static void main(String[] args) throws InterruptedException {
    // sleep 5s
    Thread.sleep(5000);
    Object o = new Object();
    log.info("Before the hashcode is generated, MarkWord is:");
    log.info(ClassLayout.parseInstance(o).toPrintable());
    o.hashCode();
    log.info("After the hashcode is generated, MarkWord is:");
    log.info(ClassLayout.parseInstance(o).toPrintable());
    synchronized (o) {
        log.info("Inside the synchronized block, MarkWord is:");
        log.info(ClassLayout.parseInstance(o).toPrintable());
    }
}
```
Look at the results:

Conclusion: even if the object was initialized as biasable, once Object::hashCode() or System::identityHashCode(Object) is called, entering the synchronized block uses a lightweight lock directly.
Scenario 2

What happens if the object first biases to a thread, then a hashcode is generated, and then the same thread enters the synchronized block again? Look at the code:

```java
public static void main(String[] args) throws InterruptedException {
    // sleep 5s
    Thread.sleep(5000);
    Object o = new Object();
    log.info("Before the hashcode is generated, MarkWord is:");
    log.info(ClassLayout.parseInstance(o).toPrintable());
    synchronized (o) {
        log.info("Inside the synchronized block, MarkWord is:");
        log.info(ClassLayout.parseInstance(o).toPrintable());
    }
    o.hashCode();
    log.info("Hashcode generated");
    synchronized (o) {
        log.info("Same thread re-enters the synchronized block, MarkWord is:");
        log.info(ClassLayout.parseInstance(o).toPrintable());
    }
}
```
View the results:

Conclusion: in this scenario too, a lightweight lock is used directly.
Scenario 3

And what happens if one of the two methods is called inside the synchronized block while the object is in the biased state? Continue the code verification:

```java
public static void main(String[] args) throws InterruptedException {
    // sleep 5s
    Thread.sleep(5000);
    Object o = new Object();
    log.info("Before the hashcode is generated, MarkWord is:");
    log.info(ClassLayout.parseInstance(o).toPrintable());
    synchronized (o) {
        log.info("Inside the synchronized block, MarkWord is:");
        log.info(ClassLayout.parseInstance(o).toPrintable());
        o.hashCode();
        log.info("Hashcode generated while biased, MarkWord is:");
        log.info(ClassLayout.parseInstance(o).toPrintable());
    }
}
```
Look at the running results:

Conclusion: if the object is in the biased state and a hashcode is generated, the lock is upgraded directly to a heavyweight lock.
Finally, a passage from the book sums up the relationship between locks and the hashcode:
What happens when the Object.wait() method is called?

Besides the hashcode methods above, Object also provides wait(), which is commonly used inside synchronized blocks. What effect does it have on the lock? Look at the code:
```java
public static void main(String[] args) throws InterruptedException {
    // sleep 5s
    Thread.sleep(5000);
    Object o = new Object();
    log.info("Before the hashcode is generated, MarkWord is:");
    log.info(ClassLayout.parseInstance(o).toPrintable());
    synchronized (o) {
        log.info("Inside the synchronized block, MarkWord is:");
        log.info(ClassLayout.parseInstance(o).toPrintable());
        log.info("wait 2s");
        o.wait(2000);
        log.info("After calling wait, MarkWord is:");
        log.info(ClassLayout.parseInstance(o).toPrintable());
    }
}
```
View the results:

Conclusion: the wait method depends on the monitor of the heavyweight lock, so once it is called, the lock is upgraded directly to a heavyweight lock (a frequent interview highlight).
Finally, let's complete the state-change diagram of the lock object:
Say goodbye to biased locks

You may be a little shocked by this title. Why say goodbye to biased locks? Because the maintenance cost is too high. See the official OpenJDK statement, JEP 374: Deprecate and Disable Biased Locking; after reading the discussion above, you should be able to appreciate it deeply.

This change is recent and took effect starting with JDK 15.

In one sentence: the maintenance cost is too high.

So, before JDK 15, biased locking is enabled by default; starting with 15 it is disabled by default, unless explicitly enabled with -XX:+UseBiasedLocking.
Among the cited reasons, an article from the Quarkus team puts it this way: biased locking adds enormous complexity to the JVM; only a few very experienced engineers understand the whole mechanism; its maintenance cost is very high, and it seriously hinders the development of new features. (How few experienced engineers? Ha.)
Summary

Biased locking may end its life this way. Some readers may ask: it is already deprecated and JDK 17 is out, so why talk about it at such length?
- Java 8 is still the mainstream; at least the version you are using has not deprecated it
- It is still frequently asked in interviews
- If a better design appears someday and "biased locking" returns in a new form, only by understanding these changes can we appreciate the design behind it
- Occam's razor applies to our own optimization work too: do not multiply entities unnecessarily; if a feature's cost outweighs its benefit, it is better to boldly remove it and accept a small loss

Previously my understanding of biased locking was purely theoretical; to write this article I read a lot of material and re-checked the HotSpot source code. Even so, this article cannot cover every detail of the whole process; you will need to trace it yourself in practice. Here are a few key source entry points to help with that:
- Biased locking entry: http://hg.openjdk.java.net/jdk8u/jdk8u/hotspot/file/9ce27f0a4683/src/share/vm/interpreter/bytecodeInterpreter.cpp#l1816
- Bias revocation entry: http://hg.openjdk.java.net/jdk8u/jdk8u/hotspot/file/9ce27f0a4683/src/share/vm/interpreter/interpreterRuntime.cpp#l608
- Lock release entry: http://hg.openjdk.java.net/jdk8u/jdk8u/hotspot/file/9ce27f0a4683/src/share/vm/interpreter/bytecodeInterpreter.cpp#l1923
If you have any questions about the article, please leave a comment for discussion; if there are errors, please help me correct them.
Food for thought
- In the lightweight and heavyweight lock states, where is the hashcode stored?
Reference
Thanks to the excellent summaries of predecessors from all walks of life, which I consulted while writing:
- https://www.oracle.com/technetwork/java/javase/tech/biasedlocking-oopsla2006-preso-150106.pdf
- https://www.oracle.com/technetwork/java/biasedlocking-oopsla2006-wp-149958.pdf
- https://wiki.openjdk.java.net/display/HotSpot/Synchronization#Synchronization-Russel06
- https://github.com/farmerjohngit/myblog/issues/12
- https://zhuanlan.zhihu.com/p/440994983
- https://mp.weixin.qq.com/s/G4z08HfiqJ4qm3th0KtovA
- https://www.jianshu.com/p/884eb51266e4
Sun Gong Yibing| Original