- Lock overview
- Internal lock: synchronized
- Explicit lock: Lock
- Memory barrier
- Lightweight synchronization mechanism: volatile keyword
- Singleton pattern thread safety issues
- CAS
- static and final
Lock overview
- A thread must acquire the corresponding lock (think of it as a license) before accessing shared data: only after obtaining the lock may the thread access the data, and a lock can be held by only one thread at a time. When the access is finished, the thread releases the lock (returns the license) so that other threads can acquire it. The code between acquiring the lock and releasing it is called the critical section.
- Internal lock: synchronized; explicit lock: ReentrantLock
- Visibility is guaranteed by two actions: the writing thread flushing the processor cache and the reading thread refreshing the processor cache. With a lock, the refresh of the processor cache is performed when the lock is acquired, and the flush of the processor cache is performed when the lock is released.
- Although locks guarantee ordering, operations inside a critical section may still be reordered relative to one another: the intermediate states of the critical section are invisible to other threads, so even if its operations are reordered internally, no ordering problem can be observed from outside.
Reentrancy: whether a thread can acquire a lock again while it already holds that lock. If it can, the lock is called a reentrant lock.
```java
void methodA() {
    acquireLock(lock); // acquire lock
    // ... other code omitted ...
    methodB();         // methodA calls methodB while still holding lock
    releaseLock(lock); // release lock
}

void methodB() {
    acquireLock(lock); // acquire the same lock again -- only possible if the lock is reentrant
    // ... other code omitted ...
    releaseLock(lock); // release lock
}
```
- Lock leak: a lock is acquired but never released.
Internal lock: synchronized
How to use the internal lock
```java
// Synchronized block
synchronized (lock) {
    // ...
}

// Synchronized method
public synchronized void method() {
    // ...
}
// is equivalent to
public void method() {
    synchronized (this) {
        // ...
    }
}

// Synchronized static method
class Example {
    public synchronized static void method() {
        // ...
    }
}
// is equivalent to
class Example {
    public static void method() {
        synchronized (Example.class) {
            // ...
        }
    }
}
```
- Internal locks do not cause lock leaks: when the Java compiler (javac) compiles a synchronized block into bytecode, it emits special handling for exceptions that may be thrown in the critical section but are not caught by the program code, so the internal lock is still released normally even if the critical section throws an exception.
- The Java virtual machine maintains an entry set for each internal lock, holding the threads waiting to acquire that lock. When several threads apply for the lock at the same time, only one succeeds; the others are not given an exception but are suspended (entering the BLOCKED state) and placed in the entry set. When the lock holder releases the lock, the virtual machine wakes an arbitrary thread in the entry set to apply for the lock again; that thread may still fail to acquire it, because it may be competing with newly arrived active (RUNNABLE) threads.
Explicit lock: Lock
- Internal locks support only unfair locking; explicit locks support both fair and unfair locking (unfair by default).
- Fair locks usually bring extra overhead: to preserve fairness the virtual machine suspends and wakes threads more often, causing more context switches than an unfair lock. Fair locks are therefore suited to cases where threads hold the lock for a long time, to prevent some threads from starving.
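For reference, a minimal sketch of how the fairness choice is expressed with ReentrantLock (its constructor takes a fairness flag):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessExample {
    // Unfair by default -- equivalent to new ReentrantLock(false)
    private final ReentrantLock unfairLock = new ReentrantLock();

    // Passing true requests a fair lock: waiting threads acquire it
    // roughly in FIFO order, at the cost of more context switches
    private final ReentrantLock fairLock = new ReentrantLock(true);
}
```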
How to use Lock
```java
lock.lock(); // acquire the lock
try {
    // ... critical section ...
} catch (Exception e) {
    // ... handle the exception ...
} finally {
    lock.unlock(); // always release the lock in finally to avoid a lock leak
}
```
The difference between synchronized and Lock
- synchronized is a built-in Java keyword handled at the JVM level; Lock is a Java interface (java.util.concurrent.locks.Lock)
- Lock.tryLock() can attempt to acquire the lock without blocking indefinitely; synchronized cannot
- synchronized releases the lock automatically; a Lock must be unlocked manually
- synchronized is an unfair lock; a Lock can be configured as fair or unfair
- Lock is suitable for large amounts of synchronized code; synchronized is suitable for small synchronized sections
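As an illustration of the tryLock() difference, here is a minimal sketch (the method updateIfAvailable and the 100 ms timeout are illustrative choices, not from the original text):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockExample {
    private final Lock lock = new ReentrantLock();

    public boolean updateIfAvailable() throws InterruptedException {
        // Try to acquire the lock, giving up after 100 ms instead of blocking
        // forever -- something synchronized cannot express
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                // ... update shared state ...
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false; // lock was busy; the caller can retry or do something else
    }
}
```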
- Read-write lock: while a reader thread holds the read lock, other reader threads may also acquire the read lock, but writer threads may not acquire the write lock; while a writer thread holds the write lock, no other thread may acquire either the read or the write lock.
Use of read-write locks
```java
class Apple {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Lock writeLock = lock.writeLock();
    private final Lock readLock = lock.readLock();
    private BigDecimal price;

    public double getPrice() {
        readLock.lock(); // multiple readers may hold this at once
        try {
            return price.divide(new BigDecimal(100)).doubleValue();
        } finally {
            readLock.unlock();
        }
    }

    public void setPrice(double p) {
        writeLock.lock(); // exclusive: blocks both readers and writers
        try {
            price = new BigDecimal(p);
        } finally {
            writeLock.unlock();
        }
    }
}
```
Read-write locks are suitable for the following scenarios:
- Read operations are more frequent than write operations
- Read operations hold the lock for a relatively long time
Lock downgrading: a thread that holds the write lock can acquire the read lock and then release the write lock, downgrading the write lock to a read lock.
```java
public class ReadWriteLockDowngrade {
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final Lock readLock = rwLock.readLock();
    private final Lock writeLock = rwLock.writeLock();

    public void operationWithLockDowngrade() {
        boolean readLockAcquired = false;
        writeLock.lock(); // acquire the write lock
        try {
            // update the shared data
            // ...
            // acquire the read lock while still holding the write lock
            readLock.lock();
            readLockAcquired = true;
        } finally {
            writeLock.unlock(); // release the write lock
        }
        if (readLockAcquired) {
            try {
                // read the shared data and act on it
                // ...
            } finally {
                readLock.unlock(); // release the read lock
            }
        } else {
            // ...
        }
    }
}
```
Why lock upgrading is not supported: because multiple threads can hold the read lock at the same time, upgrading a read lock to a write lock can deadlock.
Suppose reader threads A and B both hold the same read lock and thread A wants to upgrade to the write lock: A can only succeed after B releases its read lock. But if B wants to upgrade at the same time, A and B each wait for the other to release the read lock first, a standoff that is a classic deadlock. A sketch follows.
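A minimal sketch of the problem (with ReentrantReadWriteLock the situation is even stricter: a single thread that tries to upgrade blocks itself forever, since the write lock cannot be granted while any read lock, including its own, is held):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockUpgradeDeadlock {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public void attemptUpgrade() {
        rwLock.readLock().lock(); // hold the read lock, like threads A and B above
        try {
            // "Upgrade" attempt: this blocks until all read locks are released,
            // but we hold one ourselves -- the call never returns
            rwLock.writeLock().lock();
        } finally {
            rwLock.readLock().unlock(); // never reached
        }
    }
}
```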
Memory barrier
- A memory barrier is an instruction inserted on either side of a block of instructions to act as a "fence" that restricts reordering by the compiler and the processor.
- The bytecode instructions corresponding to acquiring and releasing an internal lock are MonitorEnter and MonitorExit respectively.
By visibility, memory barriers can be divided into load barriers (Load Barrier) and store barriers (Store Barrier).
The Java virtual machine inserts a load barrier after the MonitorEnter instruction, at the start of the critical section, to ensure that other threads' updates to shared variables are synchronized into the cache of the processor the current thread runs on. It also inserts a store barrier after the MonitorExit instruction to ensure that the critical section's writes to shared variables are synchronized to other threads in time.
By ordering, memory barriers can be divided into acquire barriers (Acquire Barrier) and release barriers (Release Barrier). An acquire barrier forbids reordering between the critical section's instructions and the instructions before the critical section; a release barrier forbids reordering between the critical section's instructions and the instructions after the critical section.
The Java virtual machine inserts an acquire barrier after the MonitorEnter instruction and a release barrier before the MonitorExit instruction, as sketched below.
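Putting the four barriers together, the conceptual layout around an internal lock looks like this (a sketch of barrier placement, not actual bytecode):

```
MonitorEnter
  [load barrier]     // visibility: refresh the processor cache
  [acquire barrier]  // ordering: critical-section instructions cannot float above
  ... critical section ...
  [release barrier]  // ordering: critical-section instructions cannot sink below
MonitorExit
  [store barrier]    // visibility: flush the processor cache
```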
(Figure: reordering rules under memory barriers; solid lines indicate reordering is allowed, dashed lines indicate it is forbidden.)
Lightweight synchronization mechanism: volatile keyword
The volatile keyword guarantees visibility, guarantees ordering, and guarantees the atomicity of reads and writes of long/double variables.
Writes to long and double are otherwise non-atomic because, on a 32-bit Java virtual machine, a 64-bit write is split into two 32-bit operations, so a single write to a long or double variable may execute as two separate steps. A sketch of the fix follows.
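A minimal sketch of the guarantee (the class name is illustrative):

```java
public class Counter64 {
    // Without volatile, a 64-bit write on a 32-bit JVM may be performed as two
    // 32-bit writes, so a concurrent reader could observe a half-written value.
    // Declaring the field volatile makes its reads and writes atomic.
    private volatile long value;

    public long get() {
        return value;
    }

    public void set(long v) {
        value = v;
    }
}
```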
- The compiler will not allocate a volatile variable to a register; every read and write of a volatile variable is a memory access.
The volatile keyword only guarantees atomicity of reads and writes of the modified variable itself. For an assignment to a volatile variable to be atomic, the expression being assigned must not involve any shared variables; otherwise the operation as a whole is not atomic.
A = B + 1
If A is a volatile shared variable, this assignment is actually a read-modify-write sequence. If B is also a shared variable, B may be modified by another thread while the assignment is in progress, so a thread-safety problem can occur. If B is a local variable, the assignment is atomic.
- The principle by which volatile guarantees ordering of reads and writes is essentially the same as synchronized: related memory barriers are inserted before and after the write (and read) operations (detailed in the articles on hardware foundations and the memory model).
If an array variable is declared volatile, volatile only affects operations on the array reference itself, not operations on the array's elements.
```java
// nums is declared volatile (newNums is another volatile array field)
int num = nums[0];              // operation 1
nums[1] = 2;                    // operation 2
volatile int[] newNums = nums;  // operation 3
```
Operation 1 actually has two sub-steps: ① read the array reference, which is a volatile read and therefore sees a relatively up-to-date nums reference; ② compute an offset from ① and read nums[0], which is not a volatile operation, so it is not guaranteed to read a relatively new value.
Operation 2 likewise splits into ① a read of the array reference and ② a write to an array element. Again ① is a volatile read, but the element write in ② is a plain write and may cause the corresponding problems.
Operation 3 updates one volatile array variable with the reference held by another; every operation involved acts on the array references themselves, so there is no concurrency problem.
volatile overhead
Reads and writes of volatile variables do not cause context switches, so volatile costs less than a lock. Writing a volatile variable makes that write, and any writes before it, synchronizable to other processors, so the cost of a volatile write lies between that of a plain write and that of a write inside a critical section. Reading a volatile variable is also cheaper than reading inside a critical section (no lock acquisition/release or context-switch overhead), but may be more expensive than reading a plain variable, because the value must be read from the cache or main memory on every access and can never be cached in a register.
Singleton pattern thread safety issues
The following is a classic double-checked locking implementation of a singleton:
```java
public class Singleton {
    // holds the unique instance of this class
    private static Singleton instance = null;

    /**
     * The private constructor prevents other classes from creating
     * instances directly via new.
     */
    private Singleton() {
        // do nothing
    }

    /**
     * The main method for obtaining the singleton.
     */
    public static Singleton getInstance() {
        if (null == instance) { // operation 1: 1st check
            synchronized (Singleton.class) { // operation 2
                if (null == instance) { // operation 3: 2nd check
                    instance = new Singleton(); // operation 4
                }
            }
        }
        return instance;
    }
}
```
First, let's analyze what operations 1 and 2 are for.
Without operations 1 and 2: suppose thread 1 calls getInstance() and is executing operation 4 when thread 2 also calls the method. Since operation 4 has not yet completed, thread 2 passes the check at operation 3 as well, so new Singleton() is executed twice, violating the intent of the singleton pattern.
Because of this, placing operation 2 before operation 3 ensures that only one thread executes operation 4 at a time. But the lock would then be acquired and released on every call to getInstance(), which is a large performance cost, so operation 1 is added before operation 2 to avoid it.
In addition, declaring the variable static ensures the class holds only a single copy of it, initialized at most once.
So is this double-checked lock perfect?
Operation 4 above can be divided into the following three sub-operations:
```java
objRef = allocate(Singleton.class); // sub-operation 1: allocate storage for the object
invokeConstructor(objRef);          // sub-operation 2: initialize the object objRef refers to
instance = objRef;                  // sub-operation 3: write the object reference to the shared variable
```
Reordering is allowed inside a synchronized critical section, and the JIT compiler may reorder these into sub-operation 1 → sub-operation 3 → sub-operation 2. Then the following can happen: a thread executing the reordered operation 4 has just finished sub-operation 3 (sub-operation 2 not yet executed) when another thread performs operation 1; that thread sees instance != null and returns the instance directly, but the instance has not been initialized yet, which causes problems.
If instance is declared with the volatile keyword, this cannot happen: volatile forbids the reordering of sub-operation 2 and sub-operation 3.
volatile also prevents the shared variable from being cached in a register, avoiding visibility problems.
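Putting it together, a corrected sketch of the double-checked lock only needs to change the field declaration:

```java
public class Singleton {
    // volatile forbids reordering sub-operations 2 and 3 and guarantees visibility
    private static volatile Singleton instance = null;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (null == instance) { // 1st check, without the lock
            synchronized (Singleton.class) {
                if (null == instance) { // 2nd check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```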
In addition, a static inner class or an enum can also implement the singleton pattern safely.
```java
public class Singleton {
    // private constructor
    private Singleton() {
    }

    private static class InstanceHolder {
        // holds the unique instance of the outer class
        final static Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return InstanceHolder.INSTANCE;
    }
}
```
The above is the static inner class implementation: InstanceHolder is loaded only when getInstance() is first called, so this is also a lazy singleton.
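The enum variant mentioned above can be sketched as follows (the class and method names are illustrative):

```java
public enum EnumSingleton {
    INSTANCE;

    // The JVM guarantees the enum constant is created exactly once,
    // and enum singletons also resist reflection and serialization attacks.
    public void doSomething() {
        // ...
    }
}

// Usage: EnumSingleton.INSTANCE.doSomething();
```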
CAS
CAS (compare-and-swap) is a lighter-weight alternative to a lock. Its core is a comparison before assignment: taking i = i + 1 as an example, before assigning the result of i + 1 to i, the thread compares the current value of i with the old value it recorded before computing i + 1. If they are equal, it assumes i was not modified by another thread in the meantime and performs the write; otherwise it discards the i + 1 result and starts over.
This retry mechanism relies on the CAS operation itself being atomic, which is guaranteed directly by the processor. However, CAS only guarantees the atomicity of the update, not its visibility (and with no visibility guarantee, naturally no ordering guarantee either).
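A minimal sketch of the retry loop described above, using AtomicInteger.compareAndSet (the atomic classes pair CAS with a volatile field internally, which is how they also obtain visibility):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger i = new AtomicInteger(0);

    public int increment() {
        int oldValue;
        int newValue;
        do {
            oldValue = i.get();      // record the old value of i
            newValue = oldValue + 1; // compute i + 1
            // write back only if i still equals the recorded old value;
            // otherwise discard the result and retry
        } while (!i.compareAndSet(oldValue, newValue));
        return newValue;
    }
}
```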
CAS is subject to the ABA problem. Suppose i starts at 0 and a thread is performing i + 1; meanwhile a second thread changes i to 10, and a third thread changes it back to 0. When the first thread compares, i still equals the recorded initial value, so it assigns the result of i + 1 to i, which may not be what we want. The solution is to attach a version number to the variable and increment it on every modification, so we can always tell whether the variable was changed by other threads in between.
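The version-number fix is what AtomicStampedReference (listed in the table below) provides; a minimal sketch:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaSafeCounter {
    // A (reference, stamp) pair; the stamp acts as the version number
    private final AtomicStampedReference<Integer> ref =
            new AtomicStampedReference<>(0, 0);

    public boolean incrementOnce() {
        int[] stampHolder = new int[1];
        Integer oldValue = ref.get(stampHolder); // read value and stamp together
        int oldStamp = stampHolder[0];
        // Succeeds only if BOTH the value and the stamp are unchanged,
        // so an A -> B -> A sequence (which bumps the stamp twice) is detected
        return ref.compareAndSet(oldValue, oldValue + 1, oldStamp, oldStamp + 1);
    }
}
```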
Commonly used atomic classes are all implemented on top of CAS:
| Group | Classes |
|---|---|
| Basic data types | AtomicInteger, AtomicLong, AtomicBoolean |
| Arrays | AtomicIntegerArray, AtomicLongArray, AtomicReferenceArray |
| Field updaters | AtomicIntegerFieldUpdater, AtomicLongFieldUpdater, AtomicReferenceFieldUpdater |
| Reference types | AtomicReference, AtomicStampedReference, AtomicMarkableReference |
static and final
After a class is loaded by the virtual machine, its static variables still hold their default values (null for reference types, false for boolean, and so on); they are not initialized until a static variable of the class is accessed for the first time.
```java
package io.github.viscent.mtia.ch3;

public class InitStaticExample {
    static class InitStatic {
        static String s = "hello world";
        static {
            System.out.println("init.....");
            Integer a = 100;
        }
    }

    public static void main(String[] args) {
        System.out.println(InitStaticExample.InitStatic.class.getName());
        System.out.println(InitStatic.s);
    }
}
```

Output:

```
io.github.viscent.mtia.ch3.InitStaticExample$InitStatic
init.....
hello world
```
- For a static reference variable, any thread that reads the variable sees a fully initialized object (this differs from the double-checked locking bug: although instance there is also a static variable, the singleton object is not created during class initialization, so new Singleton() still risks being seen uninitialized). However, this visibility and ordering guarantee of static applies only to a thread's first read of the variable.
- When an object is published to other threads, its final fields are always seen as initialized (and if a final field is a reference, the object it refers to is also seen as initialized), which ensures other threads never read default values through them. final only solves the ordering problem, i.e. it guarantees that the fields are seen as initialized; it does not guarantee visibility. A sketch follows.
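A minimal sketch of the final guarantee (the class is illustrative; it mirrors the standard JLS example):

```java
public class FinalFieldExample {
    static FinalFieldExample shared; // published without synchronization

    final int x;
    int y;

    public FinalFieldExample() {
        x = 42;
        y = 42;
    }

    static void writer() {
        shared = new FinalFieldExample();
    }

    static void reader() {
        FinalFieldExample f = shared;
        if (f != null) {
            int a = f.x; // guaranteed to see 42: final fields are frozen before publication
            int b = f.y; // may still see the default 0: no such guarantee for non-final fields
        }
    }
}
```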