1. Introduction to the thread pool
1. The idea of pooling
In engineering practice, many techniques are built on the idea of pooling: thread pools that execute tasks concurrently, connection pools configured for middleware services, and other managers of shared resources. The common goal is to reduce resource consumption and improve efficiency and service performance.
The idea of pooling is intuitive: a pool acts as a container that stores resources, keeps a certain number of them ready for use (pre-initialization), and, like any container, has a size limit. This article uses that basic logic to analyze the thread pool in Java in detail.
2. Thread pool
Anyone familiar with the JVM knows that frequently creating and destroying objects in memory hurts performance. A thread is the basic unit of execution within a process, and a thread pool reuses already-created threads to run the actions of submitted tasks, avoiding or at least reducing frequent thread creation.
A thread pool maintains multiple threads. When a task is received, an existing thread can execute it directly instead of a new thread being created for it, which reduces resource consumption: a relatively unpredictable stream of concurrent tasks is managed inside a well-defined pool, improving the stability of the system. The following is an in-depth analysis of the ThreadPoolExecutor class, based on JDK 1.8.
2. Principle and life cycle
1. Class diagram design
- Executor interface
Interpretation of the source-code comments: commands will be executed at some time in the future, and the two actions of submitting a task and executing it are decoupled. The caller simply passes in a Runnable task object, and the thread pool performs the corresponding scheduling and processing. Although Executor is the top-level interface of the ThreadPoolExecutor hierarchy, it only abstracts the idea of how tasks are handled.
- ExecutorService interface
Extends the Executor interface, returns a Future for the result of a task submitted individually or in a batch, and adds management methods for interrupting or terminating tasks (see the sketch after this list).
- AbstractExecutorService abstract class
Provides default implementations of the task execution methods (submit, invokeAll) defined by the ExecutorService interface, and provides the newTaskFor method for building RunnableFuture objects.
- ThreadPoolExecutor class
Maintain the life cycle of the thread pool, manage threads and tasks, and implement concurrent execution of tasks through the corresponding scheduling mechanism.
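As a quick illustration of this layered design, the sketch below submits a task through the ExecutorService interface and reads the result from the returned Future; the pool configuration and the task itself are arbitrary and for demonstration only.
// Minimal sketch (classes from java.util.concurrent): submit via ExecutorService, read the result via Future
public static void futureDemo() throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    Future<Integer> future = pool.submit(() -> 1 + 1);  // Callable task
    Integer result = future.get();                       // blocks until the task completes
    pool.shutdown();                                      // stop accepting new tasks, let queued ones finish
}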
2. Basic case
In the example, a simple thread pool named butte-pool is created: the core pool size is 0, at most 8 threads execute tasks, and the queue capacity is set to 256. In a real business scenario, parameters should be chosen with task execution time, service configuration, load-test data, and so on in mind.
public class ThrPool implements Runnable {
private static final Logger logger = LoggerFactory.getLogger(ThrPool.class);
/**
 * Thread pool management; ThreadFactoryBuilder comes from the Guava library
 */
private static final ThreadPoolExecutor DEV_POOL;
static {
ThreadFactory threadFactory = new ThreadFactoryBuilder().setNameFormat("butte-pool-%d").build();
DEV_POOL = new ThreadPoolExecutor(0, 8, 60L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<>(256), threadFactory, new ThreadPoolExecutor.AbortPolicy());
DEV_POOL.allowCoreThreadTimeOut(true);
}
/**
 * Task method
 */
@Override
public void run() {
try {
logger.info("Print...Job...Run...;queue_size:{}",DEV_POOL.getQueue().size());
Thread.sleep(5000);
} catch (Exception e){
e.printStackTrace();
}
}
}
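To see the pool in action, a hypothetical main method like the one below could be added to ThrPool to submit a few tasks to DEV_POOL; the loop count and the shutdown call are illustrative only.
public static void main(String[] args) {
    for (int i = 0; i < 10; i++) {
        DEV_POOL.execute(new ThrPool());  // execute: fire-and-forget, no Future returned
    }
    DEV_POOL.shutdown();                  // let queued tasks finish, then release the threads
}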
By adjusting the core parameters of the thread pool and the duration of the tasks, and in particular by pushing some parameters to extreme values and observing how tasks are executed, you can get an initial feel for how the thread pool behaves before analyzing it.
3. Construction method
The ThreadPoolExecutor class provides multiple constructors to meet the needs of different scenarios. A few points are worth noting (see the sketch after this list):
public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime,
        TimeUnit unit, BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory)
- The constructor allows corePoolSize to be set to 0; the impact of this is detailed below in the analysis of task execution;
- After the thread pool is created, core threads are not started immediately; they are normally started only when tasks are submitted, unless prestartCoreThread or prestartAllCoreThreads is called explicitly;
- In current JDK versions, core threads may also be allowed to terminate after an idle timeout, so that they do not sit idle for a long time;
- If core threads are allowed to time out, allowCoreThreadTimeOut checks that keepAliveTime is greater than 0, otherwise an IllegalArgumentException is thrown;
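The following sketch, with arbitrary parameter values, illustrates the last three points: warming up core threads explicitly and allowing them to time out when idle.
// Sketch only; parameter values are arbitrary
ThreadPoolExecutor pool = new ThreadPoolExecutor(4, 8, 60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>(256), Executors.defaultThreadFactory());
pool.prestartAllCoreThreads();      // start all core threads up front instead of lazily
pool.allowCoreThreadTimeOut(true);  // idle core threads may now terminate after keepAliveTime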
4. Operating principle
The basic operating logic of the thread pool: after a task is submitted there are three possible outcomes: a thread is assigned to execute it directly; it is placed in the task queue to wait for execution; or it is rejected and an exception is returned. Task submission and execution are decoupled, forming a producer-consumer model.
5. Life cycle
Here, the core logic of the thread pool is gradually analyzed from the source code. First, let's take a look at the state description of the life cycle, which involves the following core fields:
private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
private static final int COUNT_BITS = Integer.SIZE - 3;
private static final int CAPACITY = (1 << COUNT_BITS) - 1;
// State descriptions
private static final int RUNNING = -1 << COUNT_BITS;
private static final int SHUTDOWN = 0 << COUNT_BITS;
private static final int STOP = 1 << COUNT_BITS;
private static final int TIDYING = 2 << COUNT_BITS;
private static final int TERMINATED = 3 << COUNT_BITS;
The ctl field controls the state of the thread pool and packs two conceptual fields: workerCount, the number of effective threads in the pool, and runState, the running state. There are 5 running states:
- RUNNING: accept new tasks and process tasks in the blocking queue;
- SHUTDOWN: Do not accept new tasks and process existing tasks in the blocking queue;
- STOP: Do not accept new tasks, do not process tasks in the blocking queue, and interrupt ongoing tasks;
- TIDYING: all tasks have terminated and workerCount is 0; the pool runs the terminated() method after entering this state;
- TERMINATED: entered after the terminated() method has completed;
The transition logic between states is as follows:
The current running state can be computed with the runStateOf() method. The definition of the thread pool life cycle and the state-transition logic are described in the source-code comments on the ctl field, which can be consulted for more detail.
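For reference, these are the bit-manipulation helpers in the JDK 1.8 source that pack and unpack the runState and workerCount stored in ctl:
// JDK 1.8 ThreadPoolExecutor: packing and unpacking of ctl
private static int runStateOf(int c)     { return c & ~CAPACITY; }  // high 3 bits: run state
private static int workerCountOf(int c)  { return c & CAPACITY; }   // low 29 bits: worker count
private static int ctlOf(int rs, int wc) { return rs | wc; }        // combine state and count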
3. Task management
1. Scheduling logic
With an overall picture of the thread pool in place, we can now start from the core flow of task submission and execution and analyze the source code and logic in depth. Task scheduling is the core capability of the thread pool, and the entry point is the execute(task) method.
public void execute(Runnable command) {
    // workerCount and runState, as described above
    int c = ctl.get();
    // 1. core thread pool (see 1.1)
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    // 2. task queue (see 1.2)
    if (isRunning(c) && workQueue.offer(command)) {
        // recheck logic, see 1.2
    }
    // 3. rejection policy (see 1.3)
    else if (!addWorker(command, false))
        reject(command);
}
Overall, task scheduling proceeds through three branches: the core thread pool, the task queue, and the rejection policy. Let's look at the processing logic of each branch.
1.1 Core thread pool
// If the number of active threads is less than corePoolSize,
// create a new core thread and bind the current task to it
if (workerCountOf(c) < corePoolSize) {
    if (addWorker(command, true))
        return;
    c = ctl.get();
}
1.2 Task Queue
// If the pool is in the running state and the task is successfully added to the queue
if (isRunning(c) && workQueue.offer(command)) {
    // Recheck: if the pool is no longer running, remove the task and apply the rejection policy
    int recheck = ctl.get();
    if (! isRunning(recheck) && remove(command))
        reject(command);
    // If the number of active threads is 0, call addWorker to add a worker thread
    else if (workerCountOf(recheck) == 0)
        addWorker(null, false);
}
1.3 Rejection policy
// Try addWorker once more against the maximum pool size; if that also fails, reject the task
else if (!addWorker(command, false))
    reject(command);
This is the logic executed by the execute method; the task scheduling process is as follows:
As the scheduling logic above shows, once a task is submitted to the thread pool it is expected to be executed, and the source code calls the addWorker method to add worker threads for exactly that purpose.
2. Worker thread
Worker threads in the thread pool are encapsulated in the Worker class, which extends AQS (AbstractQueuedSynchronizer) and implements the Runnable interface, maintaining both the created thread and the task being executed:
private final class Worker extends AbstractQueuedSynchronizer implements Runnable {
    final Thread thread;    // the thread this worker runs on
    Runnable firstTask;     // the initial task, may be null
    public void run() {
        runWorker(this);    // delegates execution to the outer runWorker method
    }
}
2.1 addWorker method
Now that a worker thread is added, it means that there are tasks that need to be executed:
- firstTask: the first task executed by the newly created thread; null is allowed;
- core: pass true, when adding threads, judge whether the current number of threads is less than corePoolSize; pass false, when adding new threads, judge whether the current number of threads is less than maximumPoolSize;
private final HashSet<Worker> workers = new HashSet<Worker>();
private final BlockingQueue<Runnable> workQueue;
private boolean addWorker(Runnable firstTask, boolean core);
Through the source code analysis of this method, the execution logic flow is as follows:
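A condensed outline of the addWorker logic in the JDK 1.8 source; the second half (worker creation) is summarized in comments:
private boolean addWorker(Runnable firstTask, boolean core) {
    retry:
    for (;;) {
        int c = ctl.get();
        int rs = runStateOf(c);
        // No new workers once the pool is shutting down,
        // except to help drain a still non-empty queue
        if (rs >= SHUTDOWN &&
            ! (rs == SHUTDOWN && firstTask == null && ! workQueue.isEmpty()))
            return false;
        for (;;) {
            int wc = workerCountOf(c);
            // Bound check against corePoolSize or maximumPoolSize, depending on 'core'
            if (wc >= CAPACITY ||
                wc >= (core ? corePoolSize : maximumPoolSize))
                return false;
            if (compareAndIncrementWorkerCount(c))  // CAS workerCount + 1
                break retry;
            c = ctl.get();
            if (runStateOf(c) != rs)
                continue retry;                     // pool state changed, restart outer loop
        }
    }
    // Second half (summarized): create a new Worker, add it to the workers HashSet under
    // mainLock, start its thread, and roll back the worker count if anything fails.
}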
After a worker thread is created, its reference is held in the workers HashSet, so that the pool can add or remove workers as needed to manage their life cycle.
2.2 runWorker method
The run method of the Worker class actually delegates to the runWorker method, which executes thread tasks in a loop; its logic is analyzed below (see the sketch that follows).
The whole process keeps fetching and executing tasks in a while loop, while constantly checking the pool state and interrupting the thread when necessary. Once this method finishes, the worker thread is handed over for destruction.
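A condensed sketch of the runWorker loop from the JDK 1.8 source, with the exception handling and the exact interrupt check simplified into comments:
final void runWorker(Worker w) {
    Thread wt = Thread.currentThread();
    Runnable task = w.firstTask;
    w.firstTask = null;
    w.unlock();                            // allow interrupts
    boolean completedAbruptly = true;
    try {
        // Keep pulling tasks until getTask() returns null
        while (task != null || (task = getTask()) != null) {
            w.lock();
            // (state check that interrupts wt when the pool is stopping -- simplified here)
            try {
                beforeExecute(wt, task);
                task.run();                // run the task on the worker thread
                afterExecute(task, null);  // (exception handling elided)
            } finally {
                task = null;
                w.completedTasks++;
                w.unlock();
            }
        }
        completedAbruptly = false;
    } finally {
        // No more tasks (or an exception escaped): remove this worker, possibly terminate the pool
        processWorkerExit(w, completedAbruptly);
    }
}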
3. Task queue
The two core capabilities of the thread pool are the management of threads and tasks, and the decoupling of the two. The production and consumption mode is constructed through the management of tasks in the queue. Different queue types have their own access policies; LinkedBlockingQueue creates a queue with a linked list structure. The default Integer.MAX_VALUE
capacity is too large, you need to specify the queue size and manage it according to the principle of first-in, first-out;
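As a small sketch, the following contrasts a bounded queue with the default unbounded one; other BlockingQueue implementations from java.util.concurrent, such as ArrayBlockingQueue or SynchronousQueue, can be chosen instead depending on the access policy required.
// Bounded FIFO queue: at most 256 waiting tasks; further offers fail and trigger the rejection policy
BlockingQueue<Runnable> bounded = new LinkedBlockingQueue<>(256);
// Default constructor: capacity Integer.MAX_VALUE, effectively unbounded, so maximumPoolSize is rarely reached
BlockingQueue<Runnable> unbounded = new LinkedBlockingQueue<>();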
3.1 getTask method
When fetching a task, besides the necessary checks of the pool state, getTask also checks whether the current thread should be reclaimed after an idle timeout; as mentioned above, even core threads can be given a timeout. If no task is obtained, the runWorker method is considered finished (see the condensed view below):
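The getTask logic from the JDK 1.8 source, lightly trimmed; a null return value is what makes runWorker leave its loop:
private Runnable getTask() {
    boolean timedOut = false;  // did the last poll() time out?
    for (;;) {
        int c = ctl.get();
        int rs = runStateOf(c);
        // Pool is shutting down: decrement the worker count and let the worker exit
        if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {
            decrementWorkerCount();
            return null;
        }
        int wc = workerCountOf(c);
        // Is this worker subject to an idle timeout?
        boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;
        if ((wc > maximumPoolSize || (timed && timedOut))
            && (wc > 1 || workQueue.isEmpty())) {
            if (compareAndDecrementWorkerCount(c))
                return null;
            continue;
        }
        try {
            // Timed workers poll with keepAliveTime; others block until a task arrives
            Runnable r = timed ?
                workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
                workQueue.take();
            if (r != null)
                return r;
            timedOut = true;
        } catch (InterruptedException retry) {
            timedOut = false;
        }
    }
}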
3.2 reject method
Whether it is the thread pool or the task queue, capacity is bounded, and when the upper limit is reached, newly submitted tasks have to be rejected. The case above uses ThreadPoolExecutor.AbortPolicy, which discards the task and throws an exception; several other built-in policies exist and can be selected as needed (see the sketch below).
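For reference, these are the four built-in handlers, plus a hypothetical custom one; any implementation of RejectedExecutionHandler can be plugged into the constructor.
// Built-in rejection policies
new ThreadPoolExecutor.AbortPolicy();          // discard the task and throw RejectedExecutionException (default)
new ThreadPoolExecutor.CallerRunsPolicy();     // run the task on the submitting thread, applying back-pressure
new ThreadPoolExecutor.DiscardPolicy();        // silently discard the task
new ThreadPoolExecutor.DiscardOldestPolicy();  // discard the oldest queued task, then retry execute
// A custom policy only needs to implement RejectedExecutionHandler (illustrative example)
RejectedExecutionHandler logAndDrop = (task, pool) -> System.err.println("Task rejected: " + task);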
4. Monitoring and configuration
In most projects, thread pool parameters are hard-coded, and adjusting them usually requires a service restart. In fact, the thread pool exposes public methods for adjusting and querying its parameters at runtime:
setCorePoolSize method
The method performs a series of checks internally to ensure that the pool transitions smoothly; the whole process is rigorous and fairly complex. Combined with the getter methods for pool parameters, this enables dynamic parameter configuration and monitoring, keeping thread pool management under control (see the sketch below):
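A sketch of dynamic adjustment and monitoring using only public ThreadPoolExecutor methods; the new sizes are placeholders and DEV_POOL refers to the pool from the earlier example.
// Adjust parameters at runtime (values are placeholders)
DEV_POOL.setCorePoolSize(4);
DEV_POOL.setMaximumPoolSize(16);
DEV_POOL.setKeepAliveTime(30L, TimeUnit.SECONDS);
// Query runtime metrics for monitoring
int active    = DEV_POOL.getActiveCount();         // threads currently running tasks
int poolSize  = DEV_POOL.getPoolSize();            // current number of threads in the pool
int queued    = DEV_POOL.getQueue().size();        // tasks waiting in the queue
long finished = DEV_POOL.getCompletedTaskCount();  // tasks completed so far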
Finally, for more detail on thread pools you can read more of the source code and documentation and practice with your own cases. The pooling principle is applied in many components, such as various connection pools and parallel-computing frameworks, and is well worth studying and summarizing in depth.
5. Reference source code
Application repository:
https://gitee.com/cicadasmile/butte-flyer-parent
Component library:
https://gitee.com/cicadasmile/butte-frame-parent