First of all, let me share a GitHub repository with more than 200 classic computer books, covering C, C++, Java, Python, front-end development, databases, operating systems, computer networks, data structures and algorithms, machine learning, the programmer's life, and more. You can star it and simply search there the next time you need a book; the repository is continuously updated~

github address: https://github.com/Tyson0314/java-books

If github is not accessible, you can visit the gitee repository.

gitee address: https://gitee.com/tysondai/java-books

Thread Pool

Advantages of using a thread pool:

  • Reduces resource consumption. Reusing already-created threads reduces the cost of thread creation and destruction.
  • Improves response speed. When a task arrives, it can be executed immediately without waiting for a thread to be created.
  • Improves thread manageability. Threads are managed in a unified way, which prevents the system from creating a large number of threads and exhausting memory.

Principle of Thread Pool

Creating a new thread requires acquiring a global lock. The thread pool is designed to avoid acquiring this global lock as much as possible: once the ThreadPoolExecutor has finished warming up (the number of running threads is greater than or equal to corePoolSize), most submitted tasks are simply placed in the BlockingQueue.

The general constructor of ThreadPoolExecutor:

public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory, RejectedExecutionHandler handler);
  • corePoolSize: When a new task arrives, if the number of threads in the pool has not reached corePoolSize, a new thread is created to run it; otherwise the task is placed in the blocking queue. If the number of live threads in the pool stays above corePoolSize, consider increasing corePoolSize.
  • maximumPoolSize: When the blocking queue is full, a new thread is created to run the task as long as the number of threads does not exceed maximumPoolSize; otherwise the new task is handled according to the rejection policy. Non-core threads are like temporarily borrowed resources: they should exit once their idle time exceeds keepAliveTime, to avoid wasting resources.
  • BlockingQueue: Holds the tasks waiting to be run.
  • keepAliveTime: How long an idle non-core thread is kept alive; this parameter only applies to non-core threads. Setting it to 0 means redundant idle threads are terminated immediately.
  • TimeUnit: The time unit, one of:

    TimeUnit.DAYS
    TimeUnit.HOURS
    TimeUnit.MINUTES
    TimeUnit.SECONDS
    TimeUnit.MILLISECONDS
    TimeUnit.MICROSECONDS
    TimeUnit.NANOSECONDS
  • ThreadFactory: Every new thread in the pool is created through the thread factory. ThreadFactory defines a single method, newThread, which is called whenever the pool needs a new thread.

    public class MyThreadFactory implements ThreadFactory {
        private final String poolName;

        public MyThreadFactory(String poolName) {
            this.poolName = poolName;
        }

        public Thread newThread(Runnable runnable) {
            // Pass the pool name to the custom MyAppThread constructor so that
            // threads from different pools can be told apart.
            return new MyAppThread(runnable, poolName);
        }
    }
  • RejectedExecutionHandler: When the queue and thread pool are full, new tasks are processed according to the rejection policy.

    AbortPolicy: the default policy; throws RejectedExecutionException directly
    DiscardPolicy: discards the task without processing it
    DiscardOldestPolicy: discards the task at the head of the waiting queue, then runs the current task
    CallerRunsPolicy: the calling thread runs the task itself
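
As a sketch of the last point, rejection behavior can also be customized. The handler below (an illustrative class name, not from the original text) logs the rejected task instead of throwing, the way AbortPolicy would:

import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

public class LoggingRejectedHandler implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        // called when both the work queue and the pool are full
        System.err.println("Task " + r + " rejected by " + executor);
    }
}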

Thread pool size

If the thread pool has too few threads, a large number of pending requests will be processed slowly, hurting responsiveness and the user experience, and tasks piling up in the task queue may even cause an OOM.

If the thread pool has too many threads, many threads may fight over CPU resources at the same time, causing a large number of context switches (the CPU allocates time slices to threads and saves a thread's state when its time slice runs out so that it can resume later), which increases task execution time and lowers overall throughput.

CPU-intensive tasks (N+1): these tasks mainly consume CPU resources. The thread count can be set to N (the number of CPU cores) + 1. The one extra thread covers the case where a task is paused for some reason (thread blocking, such as an I/O operation, waiting for a lock, or thread sleep): once a thread blocks, it releases the CPU, and the extra thread can make full use of the idle CPU time.

I/O-intensive tasks (2N): the system spends most of its time handling I/O, and threads waiting on I/O are blocked and release the CPU, which other threads can then use. Such applications can therefore be configured with more threads. The calculation: optimal thread count = number of CPU cores * (1 / CPU utilization) = number of CPU cores * (1 + I/O time / CPU time), which is usually approximated as 2N.
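
A minimal sketch of these sizing rules; the 90/10 I/O-to-CPU time split is an assumed workload profile used purely for illustration:

public class PoolSizing {
    public static void main(String[] args) {
        int n = Runtime.getRuntime().availableProcessors();
        int cpuBound = n + 1;                              // CPU-intensive: N + 1
        double ioTime = 90, cpuTime = 10;                  // assumed per-task time split
        int ioBound = (int) (n * (1 + ioTime / cpuTime));  // N * (1 + IO time / CPU time)
        System.out.println("cpuBound=" + cpuBound + ", ioBound=" + ioBound);
    }
}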

Closing the thread pool

shutdown():

Sets the thread pool state to SHUTDOWN; the pool does not stop immediately:

  • It stops accepting new externally submitted tasks
  • Tasks already running and tasks waiting in the queue are still executed
  • The pool really stops only after the previous step completes

shutdownNow():

Sets the thread pool state to STOP. It attempts to stop immediately, but this is not guaranteed:

  • Like shutdown(), it first stops accepting new externally submitted tasks
  • Tasks waiting in the queue are ignored
  • It tries to interrupt the running tasks (the interrupt may or may not succeed, depending on how each task responds to interruption)
  • It returns the list of tasks that were never executed
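
A common pattern that combines the two (a sketch, not prescribed above): try a graceful shutdown() first and fall back to shutdownNow() if the pool does not terminate in time:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class ShutdownUtil {
    public static void shutdownGracefully(ExecutorService pool) {
        pool.shutdown();                        // stop accepting new tasks
        try {
            if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
                pool.shutdownNow();             // drop queued tasks, interrupt workers
            }
        } catch (InterruptedException e) {
            pool.shutdownNow();
            Thread.currentThread().interrupt(); // preserve the interrupt status
        }
    }
}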

Executor framework

The Executor framework, introduced in Java 1.5 under the java.util.concurrent package, has one big advantage: it decouples task submission from task execution. When a Callable is submitted to an ExecutorService, a Future object is returned, and calling the Future's get method waits for the execution result. The framework uses a thread pool internally and lets you control thread startup, execution, and shutdown, which simplifies concurrent programming.

Introduction

The Executor framework consists of three parts: tasks, task execution, and asynchronous results.

  • Tasks. The interfaces a task implements: Runnable and Callable.
  • Task execution. ExecutorService is the interface that defines a thread pool; call its execute(Runnable) or submit(Runnable/Callable) to run tasks. ExecutorService extends Executor and has two implementation classes, ThreadPoolExecutor and ScheduledThreadPoolExecutor.
  • Asynchronous results. This covers the Future interface and FutureTask, which implements it. Calling future.get() blocks the current thread until the task completes, and future.cancel() cancels the task.

ThreadPoolExecutor instance

Use the ThreadPoolExecutor constructor with custom parameters to create a thread pool.

import java.util.concurrent.*;

public class ThreadPoolExecutorDemo {
    private static final int CORE_POOL_SIZE = 5;
    private static final int MAX_POOL_SIZE = 10;
    private static final int QUEUE_CAPACITY = 100;
    private static final long KEEP_ALIVE_TIME = 1L;

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                CORE_POOL_SIZE,
                MAX_POOL_SIZE,
                KEEP_ALIVE_TIME,
                TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(QUEUE_CAPACITY),
                new ThreadPoolExecutor.CallerRunsPolicy()
        );

        for (int i = 0; i < 10; i++) {
            Callable<String> worker = () -> {
                System.out.println(Thread.currentThread().getName());
                return "ok";
            };
            Future<String> f = executor.submit(worker);
            f.get(); // wait for each task's result
        }
        executor.shutdown();
        while (!executor.isTerminated()) {
            // busy-wait until all tasks have finished
        }
        System.out.println("Finished all threads");
    }
}

The difference between Runnable and Callable

A Runnable task cannot return a value or throw a checked exception after execution; a Callable task can return a value and throw exceptions.

Executors.callable(Runnable task); // adapts a Runnable into a Callable
ExecutorService.execute(Runnable);
ExecutorService.submit(Runnable/Callable); // submitting a Callable returns a value

// the return value has the generic type V
public interface Callable<V> {
    V call() throws Exception;
}

Future and FutureTask

A Future can obtain the result of a task and cancel it. Calling future.get() blocks the current thread until the task returns a result.

public interface Future<V> {
    boolean cancel(boolean mayInterruptIfRunning);

    boolean isCancelled();

    boolean isDone();

    V get() throws InterruptedException, ExecutionException;

    V get(long timeout, TimeUnit unit)
        throws InterruptedException, ExecutionException, TimeoutException;
}

FutureTask implements the RunnableFuture interface, and RunnableFuture extends both the Runnable and Future<V> interfaces.

execute() and submit()

The execute() method is used to submit tasks that do not need a return value, so there is no way to tell whether the thread pool executed the task successfully.

The submit() method is used to submit tasks that need a return value. The thread pool returns a Future object, through which you can tell whether the task succeeded and obtain the return value with Future's get() method. get() blocks the current thread until the task completes, while get(long timeout, TimeUnit unit) blocks the current thread for at most the given time and then returns, whether or not the task has completed.
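
A tiny sketch of the contrast (the Executors factory is used here only to keep the example short; the coding standards section below explains why pools should be built manually in production):

import java.util.concurrent.*;

public class ExecuteVsSubmit {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // execute(): fire-and-forget, no handle to the result
        pool.execute(() -> System.out.println("via execute"));
        // submit(): returns a Future; get() blocks until the task completes
        Future<Integer> f = pool.submit(() -> 1 + 1);
        System.out.println("via submit: " + f.get());
        pool.shutdown();
    }
}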

Common thread pools

The common thread pools are FixedThreadPool, SingleThreadExecutor, CachedThreadPool, and ScheduledThreadPool, all of which are ExecutorService (thread pool) instances.

FixedThreadPool

A thread pool with a fixed number of threads. At any point in time, at most nThreads threads are active to perform tasks.

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
}

It uses the unbounded queue LinkedBlockingQueue (capacity Integer.MAX_VALUE), so a running FixedThreadPool never rejects tasks, i.e. it never calls RejectedExecutionHandler.rejectedExecution().

maximumPoolSize is a meaningless parameter here, so it is set to the same value as corePoolSize.

keepAliveTime is also meaningless and is set to 0L, because all threads in this pool are core threads, and core threads are never reclaimed (unless executor.allowCoreThreadTimeOut(true) is set).

Not recommended: FixedThreadPool never rejects tasks, so it can cause OOM when tasks pile up.

SingleThreadExecutor

A thread pool with only one thread.

public static ExecutorService newSingleThreadExecutor() {
    return new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
}

It uses the unbounded queue LinkedBlockingQueue. The pool has a single running thread; new tasks are put into the work queue, and the thread loops, taking tasks from the queue and executing them. This guarantees that tasks are executed sequentially.

Not recommended: like FixedThreadPool, it can cause OOM when tasks pile up.

CachedThreadPool

A thread pool that creates new threads as needed.

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
}

If the main thread submits tasks faster than the threads can process them, CachedThreadPool keeps creating new threads; in extreme cases this exhausts CPU and memory.

It uses SynchronousQueue, a queue with no capacity, as the work queue. When the pool has an idle thread, SynchronousQueue.offer(Runnable task) hands the task to that idle thread; otherwise a new thread is created to process it.

Not recommended: CachedThreadPool allows up to Integer.MAX_VALUE threads to be created, which may produce a huge number of threads and cause OOM.

ScheduledThreadPoolExecutor

Runs a task after a given delay, or executes tasks periodically. It is rarely used in real projects because there are better options such as Quartz.

Its task queue, DelayQueue, wraps a PriorityQueue that orders the tasks: the ScheduledFutureTask with the smaller time value (the earlier deadline) runs first, and when two tasks have the same time, the one submitted first (the smaller sequenceNumber) runs first.

Steps to perform periodic tasks:

  1. A worker thread takes a due ScheduledFutureTask from the DelayQueue (DelayQueue.take()). A task is due when the current system time has reached the task's time value;
  2. It executes this ScheduledFutureTask;
  3. It updates the ScheduledFutureTask's time to the next execution time;
  4. It puts the task with the updated time back into the DelayQueue (DelayQueue.add()).
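
A minimal usage sketch: scheduleAtFixedRate() below re-runs a task every 2 seconds, internally following exactly the take/run/reset/re-add cycle described above (the pool is never shut down, so the demo runs until the process is killed):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduleDemo {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        // initial delay 1 second, then run every 2 seconds
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("tick " + System.currentTimeMillis()),
                1, 2, TimeUnit.SECONDS);
    }
}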

Coding Standards

The Alibaba coding guidelines do not allow creating thread pools with Executors; instead, pools must be created manually with ThreadPoolExecutor, which forces developers to understand how the pool works and avoids the risk of resource exhaustion.

The drawbacks of Executors creating thread pool objects:

FixedThreadPool and SingleThreadExecutor allow a request queue of length Integer.MAX_VALUE, which may accumulate a huge number of requests and cause OOM.

CachedThreadPool allows up to Integer.MAX_VALUE threads to be created. When tasks are submitted faster than the pool can process them, it may create a huge number of threads, exhaust resources, and even cause OOM.

Correct example (Alibaba coding standard):

// Positive example 1
ThreadFactory namedThreadFactory = new ThreadFactoryBuilder().setNameFormat("demo-pool-%d").build();
// common thread pool
ExecutorService pool = new ThreadPoolExecutor(5, 200,
        0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>(1024), namedThreadFactory, new ThreadPoolExecutor.AbortPolicy());

pool.execute(() -> System.out.println(Thread.currentThread().getName()));
pool.shutdown(); // graceful shutdown

// Positive example 2
ScheduledExecutorService executorService = new ScheduledThreadPoolExecutor(1,
        new BasicThreadFactory.Builder().namingPattern("example-schedule-pool-%d").daemon(true).build());

JMM

Java memory model: shared variables between threads are stored in main memory. Each thread has its own private local memory, which holds a copy of the shared variables the thread uses. A thread operates on variables in its local memory and cannot directly read or write variables in main memory.

Local memory is an abstract concept of the JMM and does not physically exist; it covers caches, write buffers, registers, and other hardware and compiler optimizations.

Processes and threads

A process is an application running in memory. Each process has its own independent memory space, and multiple threads can be started within a process.
A thread is an execution unit smaller than a process: an independent flow of control within a process. A process can start multiple threads, each executing a different task in parallel.

Thread state

NEW (initial): the thread has been constructed, but start() has not been called yet.

RUNNABLE (running): covers both the ready and running states of the operating system.

BLOCKED (blocked): generally passive; the thread failed to grab a contended resource and is passively suspended, waiting for the resource to be released to wake it up. A blocked thread releases the CPU but not its memory.

WAITING (waiting): the thread is waiting for some specific action (a notification or an interrupt) from other threads.

TIMED_WAITING: unlike WAITING, the thread can return by itself after a specified time.

TERMINATED (terminated): the thread has finished executing.

Image Source: The Art of Concurrent Programming in Java
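
A small demo (the timings are illustrative) that makes some of these states observable through Thread.getState():

public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(1000);           // TIMED_WAITING while sleeping
            } catch (InterruptedException ignored) {
            }
        });
        System.out.println(t.getState());     // NEW: constructed, not started
        t.start();
        Thread.sleep(100);                    // give it time to enter sleep
        System.out.println(t.getState());     // TIMED_WAITING
        t.join();
        System.out.println(t.getState());     // TERMINATED
    }
}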

Interrupt

Thread interruption means one thread interrupts another while it is running. The biggest difference from stop is that stop forcibly terminates the thread at the system level, while interruption merely sends an interrupt signal to the target thread: if the target thread never checks the signal and exits, it will not terminate. Whether it exits or runs some other logic is entirely up to the target thread.

Three important methods of thread interruption:

1、java.lang.Thread#interrupt

Calling the target thread's interrupt() method sends an interrupt signal to it and marks the thread as interrupted.

2、java.lang.Thread#isInterrupted()

Determines whether the target thread has been interrupted; the interrupt flag is not cleared.

3、java.lang.Thread#interrupted

A static method that determines whether the current thread has been interrupted, and clears the interrupt flag.

private static void test2() {
    Thread thread = new Thread(() -> {
        while (true) {
            Thread.yield();

            // respond to the interrupt
            if (Thread.currentThread().isInterrupted()) {
                System.out.println("Thread was interrupted, exiting.");
                return;
            }
        }
    });
    thread.start();
    thread.interrupt();
}

Common methods

join

Thread.join(): a thread is created in main, and main calls thread.join() or thread.join(long millis). The main thread gives up CPU control and enters the WAITING/TIMED_WAITING state; it continues executing only after the joined thread has finished (or the timeout has elapsed).

public final void join() throws InterruptedException {
    join(0);
}
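
A minimal usage sketch: main blocks inside join() until the worker thread terminates:

public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> System.out.println("worker done"));
        worker.start();
        worker.join(); // main enters WAITING until worker finishes
        System.out.println("main continues after worker");
    }
}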

yield

Thread.yield() must be called by the current thread. The current thread gives up its acquired CPU time slice but does not release any lock, moving from the running state back to the ready state, and the OS selects a thread to run again. Purpose: let threads of the same priority take turns, though turn-taking is not guaranteed: in practice yield() may not achieve its goal, because the yielding thread may be selected again by the scheduler. Thread.yield() does not block. It is similar to sleep(), except that the caller cannot specify how long to pause.

public static native void yield(); // static method

sleep

Thread.sleep(long millis) must be called by the current thread. The current thread enters the TIMED_WAITING state and gives up the CPU, but does not release any object lock; it resumes running after the specified time. Purpose: the simplest way to give other threads a chance to execute.

public static native void sleep(long millis) throws InterruptedException; // static method

The difference between wait() and sleep()

Similarities:

  1. Both suspend the current thread and give other threads a chance to run
  2. Both throw InterruptedException if the thread is interrupted while waiting

Differences:

  1. wait() is a method of the Object superclass; sleep() is a method of the Thread class
  2. Lock handling differs: wait() releases the lock, while sleep() does not
  3. The wake-up conditions differ: wait() is woken by notify or notifyAll, by interrupt, or by reaching the specified time; sleep() wakes when the specified time is reached
  4. Calling obj.wait() requires first acquiring the object's lock; Thread.sleep() does not

Methods of creating threads

  • Extend the Thread class
  • Implement the Runnable interface, which also allows resource sharing between threads
  • Implement the Callable interface and create the thread through the FutureTask class
  • Use the Executor framework to create a thread pool

Creating a thread by inheriting Thread looks as follows. The run() method is a callback invoked by the JVM after the operating-system-level thread has been created; it should not be called manually, since calling it directly is just an ordinary method call.

/**
 * @author: 程序员大彬
 * @time: 2021-09-11 10:15
 */
public class MyThread extends Thread {
    public MyThread() {
    }

    @Override
    public void run() {
        for (int i = 0; i < 10; i++) {
            System.out.println(Thread.currentThread() + ":" + i);
        }
    }

    public static void main(String[] args) {
        MyThread mThread1 = new MyThread();
        MyThread mThread2 = new MyThread();
        MyThread myThread3 = new MyThread();
        mThread1.start();
        mThread2.start();
        myThread3.start();
    }
}

Creating a thread with Runnable:

/**
 * @author: 程序员大彬
 * @time: 2021-09-11 10:04
 */
public class RunnableTest {
    public static void main(String[] args) {
        Runnable1 r = new Runnable1();
        Thread thread = new Thread(r);
        thread.start();
        System.out.println("Main thread: [" + Thread.currentThread().getName() + "]");
    }
}

class Runnable1 implements Runnable {
    @Override
    public void run() {
        System.out.println("Current thread: " + Thread.currentThread().getName());
    }
}

The advantages of implementing the Runnable interface over inheriting the Thread class:

  1. Resource sharing: suitable for multiple threads running the same code to process the same resource
  2. Avoids the limitation of single inheritance in Java
  3. A thread pool only accepts tasks that implement Runnable or Callable; classes that inherit Thread cannot be placed in it directly

Creating a thread with Callable:

/**
 * @author: 程序员大彬
 * @time: 2021-09-11 10:21
 */
public class CallableTest {
    public static void main(String[] args) {
        Callable1 c = new Callable1();

        // the result of the asynchronous computation
        FutureTask<Integer> result = new FutureTask<>(c);

        new Thread(result).start();

        try {
            // wait for the task to complete and return the result
            int sum = result.get();
            System.out.println(sum);
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
    }

}

class Callable1 implements Callable<Integer> {

    @Override
    public Integer call() throws Exception {
        int sum = 0;

        for (int i = 0; i <= 100; i++) {
            sum += i;
        }
        return sum;
    }
}

Creating threads with the Executor framework:

/**
 * @author: 程序员大彬
 * @time: 2021-09-11 10:44
 */
public class ExecutorsTest {
    public static void main(String[] args) {
        // obtain an ExecutorService instance; avoid this in production, where thread pools should be created manually
        ExecutorService executorService = Executors.newCachedThreadPool();
        // submit a task
        executorService.submit(new RunnableDemo());
    }
}

class RunnableDemo implements Runnable {
    @Override
    public void run() {
        System.out.println("大彬");
    }
}

Inter-thread communication

volatile

volatile is a lightweight synchronization mechanism. It guarantees the visibility of a variable to all threads but does not guarantee atomicity.

  1. When a volatile variable is written, the JVM sends a LOCK-prefixed instruction to the processor, which writes the cache line containing the variable back to system memory.
  2. Because of the cache coherence protocol, each processor checks whether its own cache is stale by sniffing the data travelling on the bus. When a processor finds that the memory address of one of its cache lines has been modified, it marks that cache line invalid; when it next needs that data, it reads it from system memory again.

MESI (cache coherence protocol): when a CPU writes data and finds that the variable is shared, i.e. copies of it exist in other CPUs' caches, it signals the other CPUs to mark their cache lines for that variable invalid, so that when they need to read the variable they re-read it from memory.

Two functions of the volatile keyword:

  1. It guarantees visibility of writes to shared variables across threads: once one thread modifies the variable, the new value is immediately visible to other threads.
  2. It forbids instruction reordering.

Instruction reordering is how the JVM optimizes instructions to improve program performance, maximizing parallelism without changing the results of single-threaded programs. To forbid processor reordering, the compiler inserts memory barrier instructions at the appropriate positions when generating the instruction sequence. Inserting a memory barrier tells the CPU and the compiler that whatever comes before the barrier must execute first and whatever comes after must execute later. For a write to a volatile field, the Java memory model inserts a write barrier after the write operation, which flushes all previously written values to memory.
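
A classic visibility sketch: a volatile stop flag. Without the volatile modifier, the worker might keep reading a stale copy of the flag and never exit:

public class VolatileFlagDemo {
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work
            }
            System.out.println("worker observed running=false, exiting");
        });
        worker.start();
        Thread.sleep(100);
        running = false; // the write is flushed to main memory and other caches are invalidated
    }
}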

synchronized

Ensures both the visibility and the exclusivity of a thread's access to variables. See the lock section below for the details of synchronized.

Wait/notify mechanism

wait/notify are methods of Object, and calling them requires first holding the object's lock. After a thread calls wait on an object, it releases the lock and is placed in the object's wait queue. When the notifying thread calls the object's notify() method, the waiting thread does not return from wait immediately: it must wait for the notifying thread to release the lock (by finishing its synchronized block) and then compete for the lock with the other threads in the wait queue. Only after re-acquiring the lock can it return from wait(); in other words, the premise of returning from wait is that the thread has re-acquired the lock.

The wait/notify mechanism relies on the synchronization mechanism to guarantee that, when the waiting thread returns from wait, it can perceive the modifications the notifying thread made to the object's variables.
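
A minimal wait/notify sketch following the rules above: both sides hold the lock, and the waiter re-checks the condition in a loop:

public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {          // must hold the lock before wait()
                while (!ready) {           // guard against spurious wakeups
                    try {
                        lock.wait();       // releases the lock while waiting
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("condition met");
            }
        });
        waiter.start();
        Thread.sleep(100);
        synchronized (lock) {              // must hold the lock before notify()
            ready = true;
            lock.notify();                 // the waiter returns from wait() once we release the lock
        }
    }
}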

Lock

synchronized

A commonly used way to ensure thread safety. While one thread holds the lock, other threads block; when the holding thread releases the lock, the blocked threads are woken and get a chance to acquire it.

  • On an instance method, it locks the current object instance; the lock on the instance must be acquired before entering the synchronized code
  • On a static method, it locks the current class's Class object (the class's bytecode object); that lock must be acquired before entering the synchronized code
  • On a code block with a specified lock object, it locks the given object; the lock on that object must be acquired before entering the block

A thread holding the class lock and a thread holding an object lock do not conflict.

Releasing the lock

When the method or code block finishes executing, the lock is released automatically, without any extra action.
When the code a thread is executing throws an exception, the lock it holds is also released automatically.

Implementation principle

synchronized is implemented through the monitor lock inside the object. Every object has a monitor, and when an object's monitor is held, the object is in a locked state.

Synchronization of code blocks is implemented with the monitorenter and monitorexit instructions: after compilation, monitorenter is inserted at the start of the synchronized block, and monitorexit at the end of the block and at every abnormal exit.

public class SynchronizedDemo {
    public void method() {
        synchronized (this) {
            System.out.println("method start");
        }
    }
}

When a thread reaches the monitorenter instruction at the start of the synchronized block, it tries to acquire the monitor as follows:

  1. If the monitor's entry count is 0, the thread enters the monitor, sets the count to 1, and becomes the monitor's owner.
  2. If the thread already owns the monitor and is re-entering it, the entry count is incremented by one.
  3. If another thread already owns the monitor, the thread blocks until the entry count drops to 0, then tries to acquire the monitor again.

When the thread exits the synchronized block, it executes the monitorexit instruction, which decrements the entry count by 1. If the count drops to 0, the thread exits the monitor and is no longer its owner; other threads blocked on this monitor may then try to acquire it.

The bottom layer of synchronized is built on the monitor object. wait/notify and related methods also depend on the monitor, which is why they can only be called inside a synchronized block or method; otherwise java.lang.IllegalMonitorStateException is thrown.

Synchronization of methods is not done with monitorenter and monitorexit instructions; instead, the ACC_SYNCHRONIZED flag is set in the method's access flags. The JVM implements method synchronization based on this flag: when a thread calls the method, it first checks whether the method's ACC_SYNCHRONIZED flag is set. If it is, the method is a synchronized method, and the executing thread must acquire the monitor before it can execute the method body, releasing it when the method finishes. While the method executes, no other thread can obtain the same monitor object.

public class SynchronizedMethod {
    public synchronized void method() {
        System.out.println("Hello World!");
    }
}

Lock states

synchronized is implemented through the monitor inside the object, but the monitor lock itself relies on the Mutex Lock of the underlying operating system. The OS has to switch from user mode to kernel mode to switch between threads, which is very expensive, and the transition between these states takes a relatively long time. A lock that depends on the OS Mutex Lock like this is called a heavyweight lock.

In JDK 1.6, to reduce the performance cost of acquiring and releasing locks, techniques such as biased locks, lightweight locks, spin locks, adaptive spin, lock elimination, and lock coarsening were introduced to reduce the overhead of lock operations.

A synchronized lock has four states, in order: lock-free, biased lock, lightweight lock, and heavyweight lock. The lock escalates gradually as contention intensifies; a lock can be upgraded but never downgraded. This strategy improves the efficiency of acquiring and releasing locks.

  • Biased lock: when a thread accesses a synchronized block and acquires the lock, the ID of the thread the lock is biased toward is stored in the object header and the lock record. From then on, when that thread enters or exits the synchronized block, it only tests whether the mark word in the object header stores a biased lock pointing to the current thread. If the test succeeds, the thread has acquired the lock. Otherwise, it checks whether the biased-lock bit in the mark word is 1; if so, it uses CAS to compete for the lock, and if the CAS succeeds, the thread ID in the mark word is set to the current thread's ID. If the CAS fails, there is contention: when the global safepoint is reached, the thread holding the biased lock is suspended, the biased lock is upgraded to a lightweight lock, and the thread blocked at the safepoint then continues executing the synchronized code.
    A biased lock is biased toward the first thread that acquires it. If the lock is never acquired by another thread while the program runs, the thread holding the biased lock never needs to synchronize. Biased locks were introduced to minimize even the cost of lightweight locking when there is no multi-threaded contention: acquiring and releasing a lightweight lock takes several CAS atomic instructions, while a biased lock needs only one CAS, when swapping in the thread ID. Once lock contention appears, the biased lock is upgraded to a lightweight lock.
    Applicable scenario: an uncontended lock, where no other thread competes for it before the holding thread finishes the synchronized code. Once there is contention, the lock is upgraded to a lightweight lock, and revoking the bias during the upgrade involves a lot of extra work that degrades performance.
  • Lightweight lock
    Locking: before executing the synchronized block, the JVM creates space for a lock record in the current thread's stack frame and copies the object header's mark word into it (the displaced mark word). The thread then tries to use CAS to replace the object header's mark word with a pointer to the lock record. If that succeeds, the current thread has acquired the lock; if it fails, other threads are competing for the lock, and the current thread tries to acquire it by spinning. When the spinning exceeds a certain number of attempts, or a third thread arrives while one thread holds the lock and another is spinning, the lightweight lock inflates into a heavyweight lock.
    Unlocking: an atomic CAS operation swaps the displaced mark word back into the object header. If it succeeds, the unlock succeeds; otherwise there is lock contention, and the lock inflates into a heavyweight lock.

    In the absence of multi-threaded contention, lightweight locks avoid the performance cost of traditional heavyweight locks, which use operating system mutexes: with a lightweight lock, no mutex has to be requested, and locking and unlocking are done with CAS operations. Without contention, CAS avoids the overhead of mutex operations entirely. With contention, however, the CAS operations come on top of the mutex overhead, so under contention lightweight locks are slower than traditional heavyweight locks, and if the contention is fierce, the lightweight lock quickly inflates into a heavyweight one.

  • Heavyweight lock: while one thread holds the lock, other threads block. When the holder releases the lock, the blocked threads are woken and get a chance to acquire it.

    Both synchronized and Lock guarantee that only one thread at a time acquires the lock and executes the synchronized code, and that modifications to variables are flushed to main memory before the lock is released, which guarantees visibility.

  • Spin lock: a lock is usually not held for long, so suspending and resuming a thread just for that short interval wastes resources. A spin lock lets the thread wait a while, executing a meaningless loop instead of being suspended immediately, in case the thread holding the lock releases it soon. If the holder does release the lock soon, spinning is very efficient; if not, the spinning thread burns processor resources for nothing, wasting performance. The number of spins must therefore be bounded: if the limit is exceeded without acquiring the lock, the thread should be suspended.
  • Adaptive spin lock: JDK 1.6 introduced a smarter spin lock, the adaptive spin lock. "Adaptive" means the spin count is no longer fixed; it is determined by the previous spin time on the same lock and the state of the lock's owner.
  • Lock elimination: when the JIT compiler runs, the virtual machine eliminates locks around shared data that it detects cannot actually be contended.
  • Lock coarsening: if a series of consecutive operations repeatedly locks and unlocks the same object, the repetition costs a lot of unnecessary performance; lock coarsening merges these operations into one wider lock to reduce the overhead.

ReentrantLock

A reentrant lock supports repeated locking of a resource by the same thread. The lock also supports choosing fair or unfair acquisition.

When using Lock, unlock in a finally block:

// assumes a surrounding class with: Lock r = new ReentrantLock(); Map<String, Object> map = new HashMap<>();
public static final Object get(String key) {
    r.lock();
    try {
        return map.get(key);
    } finally {
        r.unlock();
    }
}

Principle

ReentrantLock implements lock acquisition and release by aggregating a custom synchronizer (AQS). When a thread tries to acquire the synchronization state, the synchronizer first checks whether the current thread is the thread already holding the lock; if the lock-holding thread requests again, the synchronization state value is incremented and true is returned, meaning the state was acquired successfully. If acquiring the synchronization state fails, the thread is wrapped into a node and placed in the AQS synchronization queue.

If the lock was acquired n times, the first n-1 calls to tryRelease(int releases) must return false; only the nth call, which fully releases the synchronization state (bringing the value to 0), returns true.

The difference between ReentrantLock and synchronized

  1. With synchronized, the thread releases the lock automatically after the synchronized block finishes; with ReentrantLock, the lock must be released manually.
  2. synchronized is an unfair lock; ReentrantLock can be configured as a fair lock.
  3. A thread waiting for a ReentrantLock can be interrupted and give up waiting for the lock, while a thread waiting on synchronized waits indefinitely.
  4. ReentrantLock can acquire the lock with a timeout: it tries to acquire the lock before the deadline and returns if it still has not been acquired when the deadline passes.
  5. ReentrantLock's tryLock() tries to acquire the lock without blocking: it returns immediately, true if the lock was acquired and false otherwise (see the sketch below).
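
A sketch of points 3-5: tryLock() and its timed variant, for which synchronized has no equivalent:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        // non-blocking attempt: returns immediately with true/false
        if (lock.tryLock()) {
            try {
                System.out.println("got the lock without waiting");
            } finally {
                lock.unlock();
            }
        }
        // timed attempt: gives up after 1 second instead of waiting forever
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                System.out.println("got the lock within the deadline");
            } finally {
                lock.unlock();
            }
        }
    }
}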

Classification of locks

Fair lock and unfair lock

A fair lock grants the lock in the order in which threads requested it. synchronized is an unfair lock. Lock is unfair by default but can be set to fair; fairness costs some performance.

public ReentrantLock() {
    sync = new NonfairSync();
}

public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}

Shared and exclusive locks

The main difference between shared and exclusive locks is that in exclusive mode only one thread can obtain the synchronization state at a time, while in shared mode multiple threads can obtain it simultaneously. For example, a read operation can be performed by several threads at once, whereas a write operation can only be performed by one thread at a time, blocking everything else.

Pessimistic lock and optimistic lock

A pessimistic lock locks the resource on every access and releases the lock after the synchronized code finishes. synchronized and ReentrantLock are pessimistic locks.

An optimistic lock does not lock the resource: all threads may access and modify it. If there is no conflict, the modification succeeds; otherwise the thread keeps retrying in a loop. The most common implementation of optimistic locking is CAS.

Optimistic locking is generally implemented in one of the following two ways:

  1. A data-version mechanism, the most common optimistic-locking implementation: add a version identifier to the data, typically a numeric version column in the database table. When reading the data, read the version value along with it; every update increments the version by one. When committing an update, compare the record's current version in the table with the version read earlier: update only if they are equal, otherwise treat the data as stale.
  2. A timestamp: add a timestamp column to the table. As with the version above, when committing the update, compare the row's current timestamp with the one read before the update: if they match, proceed; otherwise there is a version conflict.

Applicable scene:

  • Pessimistic locking is suitable for write-heavy scenarios.
  • Optimistic locking is suitable for read-heavy scenarios; avoiding locks improves the performance of reads.

CAS

CAS stands for Compare And Swap and is the main way optimistic locking is implemented. CAS synchronizes variables between threads without using locks. Both the AQS inside ReentrantLock and the atomic classes use CAS.

The CAS algorithm involves three operands:

  • The memory value V to be read and written.
  • The expected value A to compare against.
  • The new value B to write.

V is updated to the new value B atomically only if the value of V equals A; otherwise the operation retries in a loop until the update succeeds.

Take AtomicInteger as an example: its getAndIncrement() method is implemented with CAS underneath. The key call is compareAndSwapInt(obj, offset, expect, update), whose meaning is: if the value in obj equals expect, no other thread has changed the variable, so update it to update; if they are not equal, retry until the update succeeds.
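
The same retry pattern can be written against the public API. The sketch below mimics what getAndIncrement() does internally, using get() plus compareAndSet():

import java.util.concurrent.atomic.AtomicInteger;

public class CasLoopDemo {
    private static final AtomicInteger value = new AtomicInteger(0);

    // read the current value, then CAS; retry if another thread got in first
    static int increment() {
        int current;
        do {
            current = value.get();
        } while (!value.compareAndSet(current, current + 1));
        return current + 1;
    }

    public static void main(String[] args) {
        System.out.println(increment()); // 1
    }
}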

Three major issues of CAS:

  1. The ABA problem. CAS checks whether the memory value has changed and updates it only if it has not. But if the value was originally A, became B, and then became A again, CAS sees no change even though the value did change. The fix is to attach a version number to the variable and increment it on every update, so the history A-B-A becomes 1A-2B-3A.

    Since JDK 1.5, the AtomicStampedReference class solves the ABA problem by atomically updating a reference together with a version stamp; see the sketch after this list.

  2. Long spin times and high overhead. A CAS that keeps failing spins continuously, which imposes a very large cost on the CPU.
  3. Atomicity for only one shared variable. CAS guarantees an atomic operation on a single shared variable; when operating on multiple shared variables, CAS cannot guarantee atomicity.

    Since Java 1.5, the JDK provides the AtomicReference class to guarantee atomicity between referenced objects: multiple variables can be placed in one object and CASed together.
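
A small demonstration of the stamped fix for the ABA problem (the values are illustrative):

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);
        int stamp = ref.getStamp();                        // 0
        ref.compareAndSet("A", "B", stamp, stamp + 1);     // A -> B, stamp 1
        ref.compareAndSet("B", "A", stamp + 1, stamp + 2); // B -> A, stamp 2
        // a plain CAS expecting "A" would succeed here; the stale stamp makes this fail
        boolean updated = ref.compareAndSet("A", "C", stamp, stamp + 1);
        System.out.println(updated);                       // false
    }
}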

Concurrency tools

The JDK concurrency package provides several very useful utility classes. CountDownLatch, CyclicBarrier, and Semaphore provide means of controlling concurrent flows.

CountDownLatch

CountDownLatch lets one thread wait until other threads have finished their tasks before it proceeds, similar to thread.join(). A common scenario: several threads run a task concurrently, and a specific operation, such as aggregating statistics, runs only after all of them have finished.

public class CountDownLatchDemo {
    static final int N = 4;
    static CountDownLatch latch = new CountDownLatch(N);

    public static void main(String[] args) throws InterruptedException {

       for(int i = 0; i < N; i++) {
            new Thread(new Thread1()).start();
       }

       latch.await(1000, TimeUnit.MILLISECONDS); // a thread calling await() is suspended until the count reaches 0; if the count has not reached 0 after the timeout, it continues anyway
       System.out.println("task finished");
    }

    static class Thread1 implements Runnable {

        @Override
        public void run() {
            try {
                System.out.println(Thread.currentThread().getName() + " starts working");
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                latch.countDown();
            }
        }
    }
}

Output:

Thread-0 starts working
Thread-1 starts working
Thread-2 starts working
Thread-3 starts working
task finished

CyclicBarrier

CyclicBarrier (synchronization barrier) lets a group of threads wait for one another to reach a certain state (the barrier) and then proceed together.

public CyclicBarrier(int parties, Runnable barrierAction) {}
public CyclicBarrier(int parties) {}

The parties parameter is the number of threads or tasks that must reach the barrier state; barrierAction is what gets executed once all of them reach it.

public class CyclicBarrierTest {
    // number of requests
    private static final int threadCount = 10;
    // the barrier waits for 5 threads
    private static final CyclicBarrier cyclicBarrier = new CyclicBarrier(5);

    public static void main(String[] args) throws InterruptedException {
        // create the thread pool
        ExecutorService threadPool = Executors.newFixedThreadPool(10);

        for (int i = 0; i < threadCount; i++) {
            final int threadNum = i;
            Thread.sleep(1000);
            threadPool.execute(() -> {
                try {
                    test(threadNum);
                } catch (InterruptedException | BrokenBarrierException e) {
                    e.printStackTrace();
                }
            });
        }
        threadPool.shutdown();
    }

    public static void test(int threadnum) throws InterruptedException, BrokenBarrierException {
        System.out.println("threadnum:" + threadnum + "is ready");
        try {
            /**等待60秒,保证子线程完全执行结束*/
            cyclicBarrier.await(60, TimeUnit.SECONDS);
        } catch (Exception e) {
            System.out.println("-----CyclicBarrierException------");
        }
        System.out.println("threadnum:" + threadnum + "is finish");
    }

}

The output is as follows; it shows that a CyclicBarrier can be reused:

threadnum:0 is ready
threadnum:1 is ready
threadnum:2 is ready
threadnum:3 is ready
threadnum:4 is ready
threadnum:4 is finished
threadnum:3 is finished
threadnum:2 is finished
threadnum:1 is finished
threadnum:0 is finished
threadnum:5 is ready
threadnum:6 is ready
...

When all the waiting threads reach the barrier, one of them is chosen to execute the barrierAction Runnable.

The difference between CyclicBarrier and CountDownLatch

Both CyclicBarrier and CountDownLatch can realize the waiting between threads.

CountDownLatch lets one thread wait for other threads to finish their tasks before it proceeds, while CyclicBarrier lets a group of threads wait for one another to reach a certain state and then proceed together.
A CountDownLatch counter can only be used once, while a CyclicBarrier counter can be reset with the reset() method, allowing it to handle more complex business scenarios.

Semaphore

Semaphore is similar to a lock; it controls how many threads may access a specific resource at the same time, limiting the number of concurrent threads.

import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) {
        final int N = 7;
        Semaphore s = new Semaphore(3);
        for (int i = 0; i < N; i++) {
            new Worker(s, i).start();
        }
    }

    static class Worker extends Thread {
        private Semaphore s;
        private int num;

        public Worker(Semaphore s, int num) {
            this.s = s;
            this.num = num;
        }

        @Override
        public void run() {
            try {
                s.acquire();
                System.out.println("worker" + num + " using the machine");
                Thread.sleep(1000);
                System.out.println("worker" + num + " finished the task");
                s.release();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

The output is as follows; it shows that the permits are not acquired in the order in which the threads were started:

worker0 using the machine
worker1 using the machine
worker2 using the machine
worker2 finished the task
worker0 finished the task
worker3 using the machine
worker4 using the machine
worker1 finished the task
worker6 using the machine
worker4 finished the task
worker3 finished the task
worker6 finished the task
worker5 using the machine
worker5 finished the task

Atomic classes

Basic type atomic class

Update basic types in an atomic way

  • AtomicInteger: Integer atomic class
  • AtomicLong: Long integer atomic class
  • AtomicBoolean: Boolean atomic class

Commonly used methods of the AtomicInteger class:

public final int get() // get the current value
public final int getAndSet(int newValue) // get the current value and set a new one
public final int getAndIncrement() // get the current value and increment it
public final int getAndDecrement() // get the current value and decrement it
public final int getAndAdd(int delta) // get the current value and add delta to it
boolean compareAndSet(int expect, int update) // if the current value equals the expected value, atomically set it to the input value (update)
public final void lazySet(int newValue) // eventually set to newValue; other threads may still read the old value for a short while afterwards

The AtomicInteger class mainly uses CAS (compare and swap) to guarantee atomic operations, avoiding the high overhead of locking.
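
A minimal sketch: two threads increment a shared AtomicInteger without any lock, and no updates are lost:

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                counter.incrementAndGet(); // lock-free atomic increment
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get()); // always 2000, no lost updates
    }
}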

Array type atomic class

Update an element in the array atomically

  • AtomicIntegerArray: Integer array atomic class
  • AtomicLongArray: Atomic class of long integer array
  • AtomicReferenceArray: reference type array atomic class

Common methods of the AtomicIntegerArray class:

public final int get(int i) // get the value of the element at index i
public final int getAndSet(int i, int newValue) // return the current value at index i and set it to newValue
public final int getAndIncrement(int i) // get the value at index i and increment that element
public final int getAndDecrement(int i) // get the value at index i and decrement that element
public final int getAndAdd(int i, int delta) // get the value at index i and add delta to it
boolean compareAndSet(int i, int expect, int update) // if the value at index i equals the expected value, atomically set it to the input value (update)
public final void lazySet(int i, int newValue) // eventually set the element at index i to newValue; other threads may still read the old value for a short while afterwards

Reference type atomic class

  • AtomicReference: reference type atomic class
  • AtomicStampedReference: a reference-type atomic class with a version number. It associates an integer stamp with a reference, atomically updating both the data and its version, which solves the ABA problem that can occur with CAS updates.
  • AtomicMarkableReference: atomically updates a reference type with a mark; it associates a boolean flag with a reference.

AQS

AQS defines a synchronizer framework for multi-threaded access to shared resources. Many concurrency tools depend on it, such as the commonly used ReentrantLock, Semaphore, and CountDownLatch.

Principle

AQS uses a volatile int member variable, state, to represent the synchronization state, and modifies it with CAS.

private volatile int state; // the shared variable; volatile guarantees visibility across threads

The synchronizer relies on an internal synchronization queue (a FIFO doubly linked queue) to manage the synchronization state. When the current thread fails to acquire the state, the synchronizer wraps the thread together with its waiting mode (exclusive or shared) into a node, adds it to the synchronization queue, and parks it. When the synchronization state is released, the thread in the head's successor node is woken up so that it can try to acquire the synchronization state again.
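
To make the template concrete, here is a minimal non-reentrant mutex built on AQS (a sketch in the spirit of the JDK documentation's examples, not code from this article): state 0 means unlocked, 1 means locked.

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // CAS the state from 0 to 1; threads that lose are queued by AQS
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int arg) {
            setExclusiveOwnerThread(null);
            setState(0); // volatile write; AQS then wakes the head's successor
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}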

Condition

Every Java object has a set of monitor methods defined on java.lang.Object: wait(), wait(long timeout), notify(), and notifyAll(). The premise of using them is that the object's lock is already held, so they are used together with synchronized. The Condition interface provides similar monitor methods that are used together with Lock to implement the wait/notify pattern; a Condition depends on a Lock object.

Lock lock = new ReentrantLock();
Condition condition = lock.newCondition();

public void conditionWait() throws InterruptedException {
    lock.lock();
    try {
        condition.await();
    } finally {
        lock.unlock();
    }
}

public void conditionSignal() throws InterruptedException {
    lock.lock();
    try {
        condition.signal();
    } finally {
        lock.unlock();
    }
}

A Condition object is generally kept as a member variable. When await() is called, the current thread releases the lock and enters the condition's wait queue. When another thread calls the Condition's signal() method, the thread waiting at the head of the wait queue is woken up.
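
A classic application sketch: a bounded buffer guarded by two Conditions on one Lock, something the single wait queue of synchronized cannot express directly:

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final Queue<T> items = new LinkedList<>();
    private final int capacity;
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await();   // releases the lock while waiting
            }
            items.add(item);
            notEmpty.signal();     // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();
            }
            T item = items.remove();
            notFull.signal();      // wake one waiting producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}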

Implementation principle

Each Condition object contains a wait queue. If a thread that holds the lock calls Condition.await(), it releases the synchronization state, wakes the successor node in the synchronization queue, constructs a node for itself, and joins the wait queue. Only after the thread re-acquires the lock associated with the Condition can it return from await().

Image Source: The Art of Concurrent Programming in Java

In Object's monitor model, an object has one synchronization queue and one wait queue. Lock, being implemented on AQS, can have multiple Conditions, so a Lock has one synchronization queue and multiple wait queues.

Image Source: The Art of Concurrent Programming in Java

After a thread acquires the lock, it calls the Condition's signal() method, which moves the head node of the wait queue to the synchronization queue, where the node's thread tries to acquire the synchronization state. After successfully acquiring it, the thread returns from the await() method.

Image Source: The Art of Concurrent Programming in Java

Other

Daemon Thread

There are two types of threads in Java:

  • User Thread
  • Daemon Thread

As long as any non-daemon thread in the current JVM instance is still running, all daemon threads keep working; only when the last non-daemon thread ends do the daemon threads finish their work along with the JVM.

The role of a daemon thread is to provide convenient services for the other threads; the most typical application is garbage collection.

A thread can be turned into a daemon thread by calling setDaemon(true) on the Thread object before starting it.
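
A minimal sketch; note that setDaemon(true) must be called before start():

public class DaemonDemo {
    public static void main(String[] args) {
        Thread daemon = new Thread(() -> {
            while (true) {
                // background housekeeping; dies together with the JVM
            }
        });
        daemon.setDaemon(true); // must be set before start()
        daemon.start();
        System.out.println("main exits; the JVM does not wait for the daemon");
    }
}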

This article is included in a GitHub repository that collects summaries of Java-related knowledge, covering Java basics, MySQL, Spring Boot, MyBatis, Redis, RabbitMQ, computer networks, data structures and algorithms, and more. PRs and stars are welcome!

github address: https://github.com/Tyson0314/Java-learning

If github is not accessible, you can visit the gitee repository.

gitee address: https://gitee.com/tysondai/Java-learning

