In addition to the ElasticJob introduced earlier, xxl-job is used in many small and medium-sized companies. Although its code and design quality are not outstanding, its license is not fully open, and it is largely maintained by one individual, it is convenient out of the box and functionally fairly complete. XXL-JOB is a distributed task scheduling platform whose core design goals are rapid development, simple learning, light weight, and easy extension. This article introduces integrating XXL-JOB with SpringBoot. @pdai
Knowledge preparation
A basic understanding of distributed task scheduling and of xxl-job is required. @pdai
What is xxl-job
XXL-JOB is a distributed task scheduling platform whose core design goals are rapid development, simple learning, light weight, and easy extension. It is open source, connected to the production lines of many companies, and works out of the box. The following content comes from the xxl-job official website.
The following features are supported:
- 1. Simple: support CRUD operations on tasks through web pages, easy to operate, and get started in one minute;
- 2. Dynamic: support dynamic modification of task status, start/stop tasks, and terminate running tasks, with immediate effect;
- 3. Dispatch center HA (centralized): scheduling adopts a centralized design; the "dispatch center" is built on a self-developed scheduling component and supports cluster deployment, which ensures HA of the dispatch center;
- 4. Executor HA (distributed): distributed execution of tasks, task "executor" supports cluster deployment, which can ensure task execution HA;
- 5. Registry: executors register themselves automatically and periodically, and the dispatch center automatically discovers registered executors and triggers task execution. Manual entry of executor addresses is also supported;
- 6. Elastic scaling: once a new executor machine goes online or offline, tasks will be reassigned at the next scheduling;
- 7. Trigger strategy: Provide rich task trigger strategies, including: Cron trigger, fixed interval trigger, fixed delay trigger, API (event) trigger, manual trigger, parent-child task trigger;
- 8. Scheduling expiration strategy: the compensation processing strategy for the scheduling center to miss the scheduling time, including: ignore, trigger compensation immediately, etc.;
- 9. Blocking handling strategy: the strategy applied when scheduling is too dense for the executor to keep up; options include single-machine serial (default), discarding later schedules, and covering earlier schedules;
- 10. Task timeout control: support custom task timeout time, task running timeout will actively interrupt the task;
- 11. Task failure retry: Supports custom task failure retry times. When the task fails, it will actively retry according to the preset failure retry times; sharding tasks support failure retry at shard granularity;
- 12. Task failure alarm; email failure alarm is provided by default, and an extension interface is reserved, which can easily expand alarm methods such as SMS and DingTalk;
- 13. Routing strategy: executor cluster deployment provides rich routing strategies, including: first, last, round-robin, random, consistent hash, least frequently used, least recently used, failover, busy-over, etc.;
- 14. Sharded broadcast task: when the executor is deployed as a cluster and the routing strategy "Sharded Broadcast" is selected, one task schedule broadcast-triggers all executors in the cluster to run the task once, and sharded tasks can be developed based on the shard parameters;
- 15. Dynamic sharding: sharded broadcast tasks are sharded per executor, and dynamic scaling of the executor cluster dynamically increases the number of shards cooperating on the business processing; this can significantly improve throughput and speed when processing large data volumes.
- 16. Failover: When the task routing policy selects "Failover", if a machine in the executor cluster fails, the failover will automatically switch to a normal executor to send scheduling requests.
- 17. Task progress monitoring: support real-time monitoring of task progress;
- 18. Rolling real-time log: support online viewing of scheduling results, and support real-time viewing of the complete execution log output by the executor in Rolling mode;
- 19. GLUE: Provides Web IDE, supports online development of task logic code, dynamic release, and real-time compilation takes effect, omitting the process of deployment and online. Supports historical version backtracking of 30 versions.
- 20. Script tasks: support developing and running script tasks in GLUE mode, including scripts of Shell, Python, NodeJS, PHP, PowerShell, etc.;
- 21. Command-line tasks: a common command-line task handler (the Bean task "CommandJobHandler") is provided natively; the business side only needs to provide the command line;
- 22. Task dependency: supports configuring child-task dependencies; when the parent task finishes successfully, it actively triggers one execution of its child tasks. Multiple child tasks are separated by commas;
- 23. Consistency: "Scheduling Center" ensures the consistency of cluster distributed scheduling through DB locks, and one task scheduling will only trigger one execution;
- 24. Customize task parameters: support online configuration of scheduling task input parameters, which will take effect immediately;
- 25. Scheduling thread pool: schedules are triggered by a multi-threaded pool in the scheduling system, ensuring that scheduling fires accurately and is not blocked;
- 26. Data encryption: The communication between the dispatch center and the executor is encrypted to improve the security of dispatch information;
- 27. Email alarm: when the task fails, it supports email alarm, and supports configuring multiple email addresses to send alarm emails;
- 28. Push to the Maven central repository: the latest stable version is pushed to the Maven central repository for easy access and use;
- 29. Running report: Support real-time viewing of running data, such as the number of tasks, scheduling times, number of executors, etc.; and scheduling reports, such as scheduling date distribution diagram, scheduling success distribution diagram, etc.;
- 30. Fully asynchronous: The task scheduling process is fully asynchronously designed and implemented, such as asynchronous scheduling, asynchronous operation, asynchronous callback, etc., which can effectively reduce traffic peaks for intensive scheduling, and theoretically support the operation of tasks of any duration;
- 31. Cross-language: The scheduling center and the executor provide language-independent RESTful API services, and any third-party language can connect to the scheduling center or implement the executor accordingly. In addition, other cross-language solutions such as "multitasking mode" and "httpJobHandler" are also provided;
- 32. Internationalization: The dispatch center supports internationalization settings, providing two optional languages, Chinese and English, and the default is Chinese;
- 33. Containerization: Provide official docker images, and update and push dockerhub in real time to further realize out-of-the-box use of products;
- 34. Thread pool isolation: The scheduling thread pool is isolated and split, and slow tasks are automatically degraded into the "Slow" thread pool to avoid exhausting scheduling threads and improve system stability;
- 35. User management: support online management of system users, there are two roles of administrator and ordinary user;
- 36. Permission control: permissions are controlled at the executor dimension; administrators have full permissions, while ordinary users must be granted executor permissions before performing related operations;
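Feature 23 above (cluster scheduling consistency) deserves a closer look. xxl-job guarantees it with a DB lock; the sketch below models the same "one schedule, one execution" guarantee with an in-memory claim set instead of a database row lock. This is a simplified illustration under stated assumptions, not the actual implementation, and the names `SchedulerLockSketch` and `tryTrigger` are invented:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Models the guarantee of feature 23: one scheduling tick fires exactly one
// execution across the dispatch-center cluster. The real xxl-job takes a DB
// lock; here an in-memory claim set stands in for that lock (an assumption
// for illustration only).
class SchedulerLockSketch {
    // Ticks that some cluster node has already claimed and triggered.
    private final Set<Long> claimedTicks = ConcurrentHashMap.newKeySet();

    // Every dispatch-center node calls this for the same scheduled tick;
    // only the first claimant wins and actually triggers the job.
    public boolean tryTrigger(long tickMillis) {
        return claimedTicks.add(tickMillis); // atomic claim; duplicates return false
    }
}
```

Two nodes racing on the same tick see `true` once and `false` once, so the job runs exactly once per schedule even with several dispatch-center instances.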
Architecture design of xxl-job
Design thinking
The scheduling behavior is abstracted into a common "dispatch center" platform, which contains no business logic itself and is only responsible for initiating scheduling requests.
Tasks are abstracted into scattered JobHandlers managed by "executors"; the executor is responsible for receiving scheduling requests and executing the business logic in the corresponding JobHandler.
"Scheduling" and "task" are thus decoupled from each other, improving the overall stability and scalability of the system.
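The decoupling described above can be sketched in a few lines. The types below are hypothetical illustrations, not the xxl-job API: the dispatch center knows tasks only by name, while the executor owns the registry mapping names to JobHandlers.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the dispatch-center/executor decoupling (hypothetical
// types, not xxl-job classes): the center carries no business logic and only
// triggers jobs by name; the executor maps names to JobHandlers.
class ExecutorSketch {
    interface JobHandler { void execute(); }

    private final Map<String, JobHandler> handlers = new HashMap<>();

    // Called on the executor side at startup to expose business logic.
    void register(String name, JobHandler handler) { handlers.put(name, handler); }

    // Conceptually invoked over HTTP/RPC by the dispatch center,
    // which only initiates the scheduling request.
    boolean runJob(String name) {
        JobHandler handler = handlers.get(name);
        if (handler == null) return false; // unknown job name
        handler.execute();
        return true;
    }
}
```

Because the center only sends names, executors can be scaled, replaced, or redeployed without touching the scheduling side, which is the stability/scalability point made above.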
System composition
Scheduling Module (Scheduling Center)
- Responsible for managing scheduling information, sending scheduling requests according to the scheduling configuration, and not responsible for business codes. The scheduling system is decoupled from the task, which improves the system availability and stability, and the performance of the scheduling system is no longer limited by the task module;
- Supports visual, simple and dynamic management of scheduling information, including task creation, update, deletion, GLUE development, and task alarms, etc. All the above operations will take effect in real time, and support monitoring of scheduling results and execution logs, as well as executor Failover.
Execution module (executor):
- Responsible for receiving scheduling requests and executing task logic. The task module focuses on task execution, making development and maintenance simpler and more efficient;
- Receive execution requests, termination requests and log requests from the "Scheduling Center".
Architecture diagram
Implementation case
This section mainly introduces how SpringBoot integrates xxl-job: the Bean mode (method-based and class-based), and the GLUE mode based on code/scripts configured online.
Bean pattern (method based)
Bean mode tasks support method-based development, with each task corresponding to a method. For method-based tasks, the framework generates a JobHandler proxy under the hood; like class-based tasks, they exist in the executor's task container in the form of a JobHandler.
Advantages :
- You only need to develop one method for each task and add the "@XxlJob" annotation, which is more convenient and faster.
- Supports automatic scanning of tasks and injection into executor containers.
Disadvantages : requires a Spring container environment;
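The "JobHandler proxy" mentioned above can be pictured with plain reflection: the annotated method is wrapped into a handler object so the executor can invoke every task uniformly. This is a hedged sketch of the idea only; the class names here are invented and this is not xxl-job's internal implementation.

```java
import java.lang.reflect.Method;

// Sketch of wrapping an annotated method into a handler object at runtime,
// the idea behind the proxy mentioned above (illustrative names only).
class MethodProxySketch {
    static class MethodJobHandler {
        private final Object target;
        private final Method method;

        MethodJobHandler(Object target, Method method) {
            this.target = target;
            this.method = method;
        }

        // The executor invokes this uniformly, whether the task was
        // written as a class or as a method.
        void execute() throws Exception {
            method.invoke(target);
        }
    }

    // A stand-in for a bean holding a @XxlJob-annotated method.
    static class MyJobs {
        int counter = 0;
        public void demoJobHandler() { counter++; }
    }
}
```

With this wrapper, scanning a Spring bean for annotated methods and registering one `MethodJobHandler` per method gives the "one method per task" developer experience described above.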
Job development environment dependencies
Maven dependencies
<dependency>
<groupId>com.xuxueli</groupId>
<artifactId>xxl-job-core</artifactId>
<version>2.3.1</version>
</dependency>
application.properties configuration
# web port
server.port=8081
# no web
#spring.main.web-environment=false
# log config
logging.config=classpath:logback.xml
### xxl-job admin address list, such as "http://address" or "http://address01,http://address02"
xxl.job.admin.addresses=http://127.0.0.1:8080/xxl-job-admin
### xxl-job, access token
xxl.job.accessToken=default_token
### xxl-job executor appname
xxl.job.executor.appname=xxl-job-executor-sample
### xxl-job executor registry-address: default use address to registry , otherwise use ip:port if address is null
xxl.job.executor.address=
### xxl-job executor server-info
xxl.job.executor.ip=
xxl.job.executor.port=9999
### xxl-job executor log-path
xxl.job.executor.logpath=/data/applogs/xxl-job/jobhandler
### xxl-job executor log-retention-days
xxl.job.executor.logretentiondays=30
Config configuration (PS: this directly reuses the configuration from the xxl-job demo; in actual development it could be packaged into an auto-configured starter)
package tech.pdai.springboot.xxljob.config;
import com.xxl.job.core.executor.XxlJobExecutor;
import com.xxl.job.core.executor.impl.XxlJobSpringExecutor;
import tech.pdai.springboot.xxljob.job.BeanClassDemoJob;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* xxl-job config
*
* @author xuxueli 2017-04-28
*/
@Configuration
public class XxlJobConfig {
private Logger logger = LoggerFactory.getLogger(XxlJobConfig.class);
@Value("${xxl.job.admin.addresses}")
private String adminAddresses;
@Value("${xxl.job.accessToken}")
private String accessToken;
@Value("${xxl.job.executor.appname}")
private String appname;
@Value("${xxl.job.executor.address}")
private String address;
@Value("${xxl.job.executor.ip}")
private String ip;
@Value("${xxl.job.executor.port}")
private int port;
@Value("${xxl.job.executor.logpath}")
private String logPath;
@Value("${xxl.job.executor.logretentiondays}")
private int logRetentionDays;
@Bean
public XxlJobSpringExecutor xxlJobExecutor() {
logger.info(">>>>>>>>>>> xxl-job config init.");
XxlJobSpringExecutor xxlJobSpringExecutor = new XxlJobSpringExecutor();
xxlJobSpringExecutor.setAdminAddresses(adminAddresses);
xxlJobSpringExecutor.setAppname(appname);
xxlJobSpringExecutor.setAddress(address);
xxlJobSpringExecutor.setIp(ip);
xxlJobSpringExecutor.setPort(port);
xxlJobSpringExecutor.setAccessToken(accessToken);
xxlJobSpringExecutor.setLogPath(logPath);
xxlJobSpringExecutor.setLogRetentionDays(logRetentionDays);
// Bean method-mode tasks are registered automatically by scanning @XxlJob annotations.
// Bean class-mode tasks must be registered manually:
XxlJobExecutor.registJobHandler("beanClassDemoJobHandler", new BeanClassDemoJob());
return xxlJobSpringExecutor;
}
}
Job development
Development steps:
- Task development: In the Spring Bean instance, develop the Job method;
- Annotation configuration: add the annotation @XxlJob(value = "custom jobhandler name", init = "JobHandler init method", destroy = "JobHandler destroy method") to the Job method; the annotation's value corresponds to the JobHandler property when creating a new task in the dispatch center.
- Execution log: The execution log needs to be printed through "XxlJobHelper.log";
- Task result: the default task result is "success" and does not need to be set explicitly; if needed (for example, to mark the task as failed), the result can be set explicitly via "XxlJobHelper.handleFail/handleSuccess";
package tech.pdai.springboot.xxljob.job;
import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Arrays;
import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.annotation.XxlJob;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;
/**
 * XxlJob development example (Bean mode - method-based)
*
*/
@Slf4j
@Component
public class BeanMethodDemoJob {
/**
 * 1. Simple task example (Bean mode)
*/
@XxlJob("demoJobHandler")
public void demoJobHandler() {
XxlJobHelper.log("demoJobHandler execute...");
}
/**
 * 2. Sharded broadcast task
*/
@XxlJob("shardingJobHandler")
public void shardingJobHandler() throws Exception {
// logback console log
log.info("shardingJobHandler execute...");
// log recorded to the DB via xxl-job
XxlJobHelper.log("shardingJobHandler execute...");
// shard parameters
int shardIndex = XxlJobHelper.getShardIndex();
int shardTotal = XxlJobHelper.getShardTotal();
XxlJobHelper.log("Shard parameters: current shard index = {}, total shards = {}", shardIndex, shardTotal);
// business logic
for (int i = 0; i < shardTotal; i++) {
if (i==shardIndex) {
XxlJobHelper.log("Shard {}: hit, start processing", i);
} else {
XxlJobHelper.log("Shard {}: skipped", i);
}
}
}
/**
 * 3. Command-line task
*/
@XxlJob("commandJobHandler")
public void commandJobHandler() throws Exception {
XxlJobHelper.log("commandJobHandler execute...");
String command = XxlJobHelper.getJobParam();
int exitValue = -1;
BufferedReader bufferedReader = null;
try {
// command process
ProcessBuilder processBuilder = new ProcessBuilder();
processBuilder.command(command);
processBuilder.redirectErrorStream(true);
Process process = processBuilder.start();
//Process process = Runtime.getRuntime().exec(command);
BufferedInputStream bufferedInputStream = new BufferedInputStream(process.getInputStream());
bufferedReader = new BufferedReader(new InputStreamReader(bufferedInputStream));
// command log
String line;
while ((line = bufferedReader.readLine())!=null) {
XxlJobHelper.log(line);
}
// command exit
process.waitFor();
exitValue = process.exitValue();
} catch (Exception e) {
XxlJobHelper.log(e);
} finally {
if (bufferedReader!=null) {
bufferedReader.close();
}
}
if (exitValue==0) {
// default success
} else {
XxlJobHelper.handleFail("command exit value(" + exitValue + ") is failed");
}
}
/**
 * 4. Cross-platform HTTP task
 * Parameter example:
* "url: http://www.baidu.com\n" +
* "method: get\n" +
* "data: content\n";
*/
@XxlJob("httpJobHandler")
public void httpJobHandler() throws Exception {
XxlJobHelper.log("httpJobHandler execute...");
// param parse
String param = XxlJobHelper.getJobParam();
if (param==null || param.trim().length()==0) {
XxlJobHelper.log("param[" + param + "] invalid.");
XxlJobHelper.handleFail();
return;
}
String[] httpParams = param.split("\n");
String url = null;
String method = null;
String data = null;
for (String httpParam : httpParams) {
if (httpParam.startsWith("url:")) {
url = httpParam.substring(httpParam.indexOf("url:") + 4).trim();
}
if (httpParam.startsWith("method:")) {
method = httpParam.substring(httpParam.indexOf("method:") + 7).trim().toUpperCase();
}
if (httpParam.startsWith("data:")) {
data = httpParam.substring(httpParam.indexOf("data:") + 5).trim();
}
}
// param valid
if (url==null || url.trim().length()==0) {
XxlJobHelper.log("url[" + url + "] invalid.");
XxlJobHelper.handleFail();
return;
}
if (method==null || !Arrays.asList("GET", "POST").contains(method)) {
XxlJobHelper.log("method[" + method + "] invalid.");
XxlJobHelper.handleFail();
return;
}
boolean isPostMethod = method.equals("POST");
// request
HttpURLConnection connection = null;
BufferedReader bufferedReader = null;
try {
// connection
URL realUrl = new URL(url);
connection = (HttpURLConnection) realUrl.openConnection();
// connection setting
connection.setRequestMethod(method);
connection.setDoOutput(isPostMethod);
connection.setDoInput(true);
connection.setUseCaches(false);
connection.setReadTimeout(5 * 1000);
connection.setConnectTimeout(3 * 1000);
connection.setRequestProperty("connection", "Keep-Alive");
connection.setRequestProperty("Content-Type", "application/json;charset=UTF-8");
connection.setRequestProperty("Accept-Charset", "application/json;charset=UTF-8");
// do connection
connection.connect();
// data
if (isPostMethod && data!=null && data.trim().length() > 0) {
DataOutputStream dataOutputStream = new DataOutputStream(connection.getOutputStream());
dataOutputStream.write(data.getBytes("UTF-8"));
dataOutputStream.flush();
dataOutputStream.close();
}
// valid StatusCode
int statusCode = connection.getResponseCode();
if (statusCode!=200) {
throw new RuntimeException("Http Request StatusCode(" + statusCode + ") Invalid.");
}
// result
bufferedReader = new BufferedReader(new InputStreamReader(connection.getInputStream(), "UTF-8"));
StringBuilder result = new StringBuilder();
String line;
while ((line = bufferedReader.readLine())!=null) {
result.append(line);
}
String responseMsg = result.toString();
XxlJobHelper.log(responseMsg);
return;
} catch (Exception e) {
XxlJobHelper.log(e);
XxlJobHelper.handleFail();
return;
} finally {
try {
if (bufferedReader!=null) {
bufferedReader.close();
}
if (connection!=null) {
connection.disconnect();
}
} catch (Exception e2) {
XxlJobHelper.log(e2);
}
}
}
/**
 * 5. Lifecycle task example: custom logic is supported on task initialization and destruction;
*/
@XxlJob(value = "demoJobHandler2", init = "init", destroy = "destroy")
public void demoJobHandler2() throws Exception {
XxlJobHelper.log("demoJobHandler2, execute...");
}
public void init() {
log.info("init");
}
public void destroy() {
log.info("destroy");
}
}
(@pdai: from a design perspective, xxl-job could further subdivide the different task types above)
Job scheduling configuration and execution
Add a new job, and fill the custom jobhandler name from @XxlJob(value = "custom jobhandler name", init = "JobHandler init method", destroy = "JobHandler destroy method") into the JobHandler field.
Other configurations are as follows:
In the operation menu, you can either execute the task once or start it so that it runs according to its Cron expression.
You can view execution records
Further, you can see the execution log of each execution record
Bean pattern (class based)
Bean mode tasks support class-based development, and each task corresponds to a Java class.
Advantages : no restrictions on the project environment and good compatibility. Even frameless projects, such as those started directly from a main method, are supported; see the sample project "xxl-job-executor-sample-frameless";
Disadvantages :
- Each task needs to occupy a Java class, resulting in a waste of classes;
- Automatic scanning of tasks and injection into executor containers is not supported, and manual injection is required.
Job development environment dependencies
Same as Bean pattern (method based)
Job development
Development steps:
In the executor project, develop the Job class:
- Develop a JobHandler class that inherits from "com.xxl.job.core.handler.IJobHandler" and implement the task method.
- Manually inject into the executor container as follows.
Register the jobHandler:
XxlJobExecutor.registJobHandler("xxxxxJobHandler", new xxxxxJobHandler());
Job development
package tech.pdai.springboot.xxljob.job;
import com.xxl.job.core.handler.IJobHandler;
import lombok.extern.slf4j.Slf4j;
/**
* @author pdai
*/
@Slf4j
public class BeanClassDemoJob extends IJobHandler {
@Override
public void execute() throws Exception {
log.info("BeanClassDemoJob, execute...");
}
}
Register the jobHandler (@pdai: this part of xxl-job is not well designed; classes extending IJobHandler could be registered automatically)
XxlJobExecutor.registJobHandler("beanClassDemoJobHandler", new BeanClassDemoJob());
Start the SpringBoot application, and you can find the registration in the log:
...
20:34:15.385 logback [main] INFO c.x.job.core.executor.XxlJobExecutor - >>>>>>>>>>> xxl-job register jobhandler success, name:beanClassDemoJobHandler, jobHandler:tech.pdai.springboot.xxljob.job.BeanClassDemoJob@640ab13c
...
Job scheduling configuration and execution
Same as Bean pattern (method based)
After adding the job in the dispatch center and executing it, the background log is as follows:
20:41:00.021 logback [xxl-job, EmbedServer bizThreadPool-1023773196] INFO c.x.job.core.executor.XxlJobExecutor - >>>>>>>>>>> xxl-job regist JobThread success, jobId:5, handler:tech.pdai.springboot.xxljob.job.BeanClassDemoJob@640ab13c
20:41:00.022 logback [xxl-job, JobThread-5-1654681260021] INFO t.p.s.xxljob.job.BeanClassDemoJob - BeanClassDemoJob, execute...
GLUE mode
Tasks are maintained in the dispatch center in the form of source code, support online updates through a Web IDE, and compile and take effect in real time, so there is no need to specify a JobHandler.
Configure and start the process
The development process is as follows:
Create a Job of type GLUE (here takes Java as an example)
Select the task and click the "GLUE" button on its right to enter the Web IDE for GLUE tasks, where the task code can be developed (you can also develop it in your own IDE and paste it into the editor afterwards).
Version backtracking (the last 30 versions are kept): in the Web IDE of the GLUE task, open the "Version Backtrack" drop-down in the upper right corner to list the GLUE's update history. Selecting a version displays its code, and saving rolls the GLUE code back to that historical version;
The record after execution is as follows
What else is there in GLUE mode
xxl-job supports the following GLUE modes:
- GLUE mode (Java): the task is maintained in the dispatch center as source code; a task in this mode is actually a piece of Java class code inheriting from IJobHandler and maintained as "groovy" source. It runs inside the executor project and can use @Resource/@Autowired to inject other services of the executor;
- GLUE mode (Shell): The task is maintained in the scheduling center by source code; the task in this mode is actually a "shell" script;
- GLUE mode (Python): The task is maintained in the dispatch center by source code; the task in this mode is actually a "python" script;
- GLUE mode (PHP): The task is maintained in the dispatch center by source code; the task in this mode is actually a "php" script;
- GLUE mode (NodeJS): The task is maintained in the scheduling center in the form of source code; the task of this mode is actually a "nodejs" script;
- GLUE mode (PowerShell): The task is maintained in the scheduling center in the form of source code; the task in this mode is actually a "PowerShell" script;
More configuration instructions
+ Basic configuration:
- Executor: the executor a task is bound to. When the task is triggered, registered executors are discovered automatically, enabling automatic task discovery; it is also a convenient way to group tasks. Every task must be bound to an executor, which can be set in "Executor Management";
- Task description: descriptive information about the task, for easier task management;
- Owner: the person responsible for the task;
- Alarm email: the email address notified when task scheduling fails; multiple addresses are supported, separated by commas;
+ Trigger configuration:
- Schedule type:
+ None: this type never actively triggers scheduling;
+ CRON: this type triggers task scheduling via a CRON expression;
+ Fixed rate: this type triggers task scheduling at a fixed rate, periodically at a fixed interval;
+ Fixed delay: this type triggers task scheduling with a fixed delay; the delay is counted from the end of the previous run, and the next schedule fires when the delay elapses;
- CRON: the Cron expression that triggers task execution;
- Fixed rate: the fixed-rate interval, in seconds;
- Fixed delay: the fixed-delay interval, in seconds;
+ Advanced configuration:
- Routing strategy: when the executor is deployed as a cluster, rich routing strategies are provided, including:
FIRST: always select the first machine;
LAST: always select the last machine;
ROUND: select machines in round-robin order;
RANDOM: randomly select an online machine;
CONSISTENT_HASH: each task selects a fixed machine via a hash algorithm, and all tasks are evenly hashed across different machines;
LEAST_FREQUENTLY_USED: the least frequently used machine is elected first;
LEAST_RECENTLY_USED: the least recently used machine is elected first;
FAILOVER: heartbeat-check machines in order; the first machine with a successful heartbeat is selected as the target executor and scheduling is initiated;
BUSYOVER: idle-check machines in order; the first machine that passes the idle check is selected as the target executor and scheduling is initiated;
SHARDING_BROADCAST: broadcast-trigger all machines in the corresponding cluster to run the task once, automatically passing shard parameters; sharded tasks can be developed based on these parameters;
- Child task: every task has a unique task ID (obtainable from the task list); when this task finishes and succeeds, it actively triggers one schedule of the task identified by the child-task ID.
- Schedule-expiration strategy:
- Ignore: after the schedule expires, the expired run is ignored and the next trigger time is recomputed from the current time;
- Execute once immediately: after the schedule expires, execute once immediately and recompute the next trigger time from the current time;
- Blocking handling strategy: the strategy applied when scheduling is too dense for the executor to keep up;
Single-machine serial (default): scheduling requests entering a single executor are placed in a FIFO queue and run serially;
Discard later schedules: if the executor already has a running schedule for the task, the new request is discarded and marked as failed;
Cover earlier schedules: if the executor already has a running schedule for the task, the running schedule is terminated, the queue is cleared, and the new request runs;
- Task timeout: a custom task timeout is supported; tasks that run past the timeout are actively interrupted;
- Failure retry count: a custom failure retry count is supported; failed tasks are actively retried according to the preset count;
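The three blocking strategies above can be sketched as queue policies. This is an illustrative model under stated assumptions (all names invented), not xxl-job's actual JobThread implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative model of the three blocking strategies: what happens when a
// trigger arrives while earlier triggers for the same job are still queued
// or running. Names are invented; this is not xxl-job's real code.
class BlockStrategySketch {
    enum Strategy { SERIAL_EXECUTION, DISCARD_LATER, COVER_EARLY }

    // Returns true if the trigger is accepted, false if discarded (failed).
    static boolean offerTrigger(Deque<String> pending, String trigger, Strategy strategy) {
        switch (strategy) {
            case SERIAL_EXECUTION:            // single-machine serial (default):
                pending.addLast(trigger);     // queue in FIFO order, run serially
                return true;
            case DISCARD_LATER:               // discard later schedules:
                if (!pending.isEmpty()) {
                    return false;             // earlier work in flight -> drop and mark failed
                }
                pending.addLast(trigger);
                return true;
            case COVER_EARLY:                 // cover earlier schedules:
                pending.clear();              // terminate/clear earlier work
                pending.addLast(trigger);     // run only the newest trigger
                return true;
            default:
                return false;
        }
    }
}
```

Which policy fits depends on the job: serial for jobs that must not skip runs, discard-later for idempotent periodic jobs, cover-early when only the latest trigger matters.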
Sample source code
https://github.com/realpdai/tech-pdai-spring-demos
More content
Say goodbye to fragmented learning: one-stop, systematic back-end development learning with the Java full-stack knowledge system at https://pdai.tech