1. Quartz

Quartz is a powerful open-source task scheduling framework from OpenSymphony for executing scheduled tasks. For example, suppose we need to export data from the database at three o'clock every morning; a task scheduling framework is needed to run such programs for us automatically. How does Quartz achieve this?

1) First, Quartz defines an interface for running business logic, namely Job. Our class implements this interface to carry the business logic, such as reading the database and exporting the data at three in the morning.

2) Once we have the Job, it needs to be executed on time, which requires a trigger (Trigger). The Trigger fires the Job we defined according to our requirements, for example at three o'clock every morning.

3) With the Job and the Trigger in hand, we need to combine them so that the Trigger can invoke the Job at the specified time. This is the role of the Scheduler.

Therefore, Quartz mainly consists of three parts:
Scheduler: the scheduler
JobDetail: the task
Trigger: the trigger, including SimpleTrigger and CronTrigger

The process of creating a Quartz task is roughly as follows (sketched against the Quartz 2.x API; the identity names here are illustrative):

// Define a job class that implements the user's business logic
public class HelloJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // business logic, e.g. read the database and export the data
    }
}
// Build a JobDetail from the job class
JobDetail jobDetail = JobBuilder.newJob(HelloJob.class)
        .withIdentity("helloJob", "group1")
        .build();
// Define a trigger that fires the job on the required schedule, e.g. at 3 a.m. every day
Trigger trigger = TriggerBuilder.newTrigger()
        .withIdentity("helloTrigger", "group1")
        .withSchedule(CronScheduleBuilder.cronSchedule("0 0 3 * * ?"))
        .build();
// Create a scheduler and bind the JobDetail to the trigger
Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
scheduler.scheduleJob(jobDetail, trigger);
// Start the scheduler so the job begins to run
scheduler.start();

2. Basic principles of Elastic-Job

2.1 Sharding

To improve task concurrency, Elastic-Job introduces the concept of sharding: a job is split into multiple shard items, and multiple execution machines each pick up some of the shards to run. For example, suppose a database holds 100 million records that need to be read out, computed, and written back. These 100 million records can be split into 10 shards, each shard reading 10 million records, computing them, and writing the results back to the database. The 10 shards are numbered 0, 1, 2 ... 9. If three machines execute the job, machine A gets shards (0, 1, 2, 9), machine B gets shards (3, 4, 5), and machine C gets shards (6, 7, 8).

2.2 Job scheduling and execution

Elastic-Job is a decentralized task scheduling framework. When multiple nodes are running, a master node is elected first. When the execution time arrives, every instance starts to execute the task. The master node is responsible for dividing the shards, and the other nodes wait for the division to finish. The master node stores the division result in zookeeper, then each node fetches its assigned shard items from zookeeper and passes the shard information as a parameter to the local task function to execute the task.

2.3 Types of assignments

Elastic-Job supports three types of jobs:

Simple job: the Simple type is used for general task processing and only requires implementing the SimpleJob interface. This interface provides a single method to override, and that method is executed periodically, similar to the native Quartz interface.
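
For illustration, here is a minimal Simple job that uses its shard item to pick its own slice of the data (the dataService, Record type, compute helper, and SQL below are hypothetical placeholders, not part of Elastic-Job):

public class DataSyncJob implements SimpleJob {
    @Override
    public void execute(ShardingContext shardingContext) {
        int item = shardingContext.getShardingItem();          // this shard's number, e.g. one of 0..9
        int total = shardingContext.getShardingTotalCount();   // total number of shards, e.g. 10
        // Hypothetical DAO call: read only the rows belonging to this shard, compute, write back
        List<Record> records = dataService.query("SELECT * FROM big_table WHERE MOD(id, ?) = ?", total, item);
        dataService.save(compute(records));
    }
}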

Dataflow job: the Dataflow type is used for processing data flows and requires implementing the DataflowJob interface. This interface provides two methods to override, used respectively to fetch (fetchData) and process (processData) data.
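
A minimal sketch of a Dataflow job (the Order type and orderDao calls are hypothetical placeholders):

public class MyDataflowJob implements DataflowJob<Order> {
    @Override
    public List<Order> fetchData(ShardingContext shardingContext) {
        // Fetch the data belonging to the current shard item; an empty result ends this round
        return orderDao.findUnprocessed(shardingContext.getShardingItem());
    }

    @Override
    public void processData(ShardingContext shardingContext, List<Order> data) {
        // Process the fetched data
        orderDao.markProcessed(data);
    }
}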

Script job: a Script job runs a script and supports all script types such as shell, python, and perl. It only requires configuring scriptCommandLine through the console or in code; no coding is needed. The execution script path can contain parameters; after the parameters are passed, the job framework automatically appends the job runtime information as the last argument.
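
A Script job is registered purely through configuration. A sketch of such a configuration with the 2.x API (the job name, cron expression, and script path are placeholders):

JobCoreConfiguration coreConfig = JobCoreConfiguration.newBuilder("demoScriptJob", "0 0 3 * * ?", 2).build();
// scriptCommandLine points at the script to run; the framework appends the job runtime info as the last argument
ScriptJobConfiguration scriptJobConfig = new ScriptJobConfiguration(coreConfig, "/path/to/job.sh");
LiteJobConfiguration jobRootConfig = LiteJobConfiguration.newBuilder(scriptJobConfig).build();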

3. The execution principle of Elastic-Job

3.1 Elastic-Job startup process

The following uses a SimpleJob-type task to illustrate the Elastic-Job startup process:

public class MyElasticJob implements SimpleJob {
    public void execute(ShardingContext context) {
         // business logic goes here
          ......
    }
   
     // Configure zookeeper as the registry center for the distributed job
    private static CoordinatorRegistryCenter createRegistryCenter() {
        CoordinatorRegistryCenter regCenter = new ZookeeperRegistryCenter(new ZookeeperConfiguration("xxxx"));
        regCenter.init();
        return regCenter;
    }

    // Configure the job's execution schedule and the class to execute
    private static LiteJobConfiguration createJobConfiguration() {
        JobCoreConfiguration simpleCoreConfig = JobCoreConfiguration.newBuilder("demoSimpleJob", "0/15 * * * * ?", 10).build();
        // Define the SIMPLE type configuration
        SimpleJobConfiguration simpleJobConfig = new SimpleJobConfiguration(simpleCoreConfig, MyElasticJob.class.getCanonicalName());
        // Define the Lite job root configuration
        LiteJobConfiguration simpleJobRootConfig = LiteJobConfiguration.newBuilder(simpleJobConfig).build();
        return simpleJobRootConfig;
    }
   // main entry point
 public static void main(String[] args) {
        new JobScheduler(createRegistryCenter(), createJobConfiguration()).init();
    }
}

Creating and executing an Elastic-Job task involves the following steps:
1) First set up the basic information of zookeeper. Elastic-Job uses zookeeper for distributed coordination, such as master election, metadata storage and reading, and distributed listening.

2) Create a Job class that executes the task. Taking the Simple job as an example, create a class that implements SimpleJob and implement the execute method in it.

3) Set the basic information of the job: in JobCoreConfiguration, set the job name (jobName), the cron expression for execution (cron), and the total number of shards (shardingTotalCount); then set the job class in SimpleJobConfiguration; finally define the Lite job root configuration.

4) Create a JobScheduler (job scheduler) instance, and then initialize the job in the init() method of JobScheduler, after which the job starts to run.

Elastic-Job's job scheduling is completed in JobScheduler. The following describes JobScheduler in detail. JobScheduler is defined as follows:

public class JobScheduler {
    
    public static final String ELASTIC_JOB_DATA_MAP_KEY = "elasticJob";
    
    private static final String JOB_FACADE_DATA_MAP_KEY = "jobFacade";
     
    // job configuration
    private final LiteJobConfiguration liteJobConfig;
    
    // registry center
   private final CoordinatorRegistryCenter regCenter;
    
    // scheduler facade
    private final SchedulerFacade schedulerFacade;
    
    // job facade
    private final JobFacade jobFacade;
 
     private JobScheduler(final CoordinatorRegistryCenter regCenter, final LiteJobConfiguration liteJobConfig, final JobEventBus jobEventBus, final ElasticJobListener... elasticJobListeners) {
        JobRegistry.getInstance().addJobInstance(liteJobConfig.getJobName(), new JobInstance());
 
        this.liteJobConfig = liteJobConfig;
 
        this.regCenter = regCenter;
 
        List<ElasticJobListener> elasticJobListenerList = Arrays.asList(elasticJobListeners);
 
        setGuaranteeServiceForElasticJobListeners(regCenter, elasticJobListenerList);
 
        schedulerFacade = new SchedulerFacade(regCenter, liteJobConfig.getJobName(), elasticJobListenerList);
 
        jobFacade = new LiteJobFacade(regCenter, liteJobConfig.getJobName(), Arrays.asList(elasticJobListeners), jobEventBus);
    }

As above, the JobScheduler constructor sets up the job configuration liteJobConfig, the registry center regCenter, the list of listeners elasticJobListenerList, the scheduler facade, and the job facade.

After the JobScheduler instance is created, the job is initialized as follows:

/**
     * Initialize the job.
     */
    public void init() {
        JobRegistry.getInstance().setCurrentShardingTotalCount(liteJobConfig.getJobName(), liteJobConfig.getTypeConfig().getCoreConfig().getShardingTotalCount());
        JobScheduleController jobScheduleController = new JobScheduleController(createScheduler(), createJobDetail(liteJobConfig.getTypeConfig().getJobClass()), liteJobConfig.getJobName());
        JobRegistry.getInstance().registerJob(liteJobConfig.getJobName(), jobScheduleController, regCenter);
        schedulerFacade.registerStartUpInfo(liteJobConfig);
        jobScheduleController.scheduleJob(liteJobConfig.getTypeConfig().getCoreConfig().getCron());
    }

As above,
1) JobRegistry is the job registry; it stores job metadata as a singleton, and the total number of shards and other information are set in it.

2) jobScheduleController is the job scheduling controller. Through jobScheduleController one can schedule a job, reschedule a job, pause a job, resume a job, and trigger a job immediately. So starting, pausing, and resuming a job are all done through jobScheduleController.

3) Set the job name, job scheduling controller, and registry center in the job registry JobRegistry.

4) Call the registerStartUpInfo method of the scheduler facade schedulerFacade, which registers the job startup information. The code is as follows:

/**
     * Register the job startup information.
     * 
     * @param liteJobConfig the job configuration
     */
    public void registerStartUpInfo(final LiteJobConfiguration liteJobConfig) {
        regCenter.addCacheData("/" + liteJobConfig.getJobName());
        // Start all listeners
        listenerManager.startAllListeners();
        // Elect the master node
        leaderService.electLeader();
        // Persist the job configuration
        configService.persist(liteJobConfig);
        LiteJobConfiguration liteJobConfigFromZk = configService.load(false);
        // Persist the job server online information
        serverService.persistOnline(!liteJobConfigFromZk.isDisabled());
        // Persist the running job instance online information, registering this instance in zookeeper
        instanceService.persistOnline();
        // Set the flag indicating that resharding is required
        shardingService.setReshardingFlag();
        // Initialize the job monitor service
        monitorService.listen();
        // Initialize the reconcile service that repairs inconsistent sharding state
        if (!reconcileService.isRunning()) {
            reconcileService.startAsync();
        }
    }

As above,
1) Start all listeners and use zookeeper's watch mechanism to monitor changes to the various metadata in the system, so that corresponding actions can be taken.

2) Elect the master node using zookeeper's distributed lock; the master node is mainly responsible for dividing the shards.

3) Persist various metadata to zookeeper, such as the job configuration and the information of each service instance.

4) Set the flag indicating that resharding is required; resharding is performed when the task is executed for the first time or when service instances are added to or removed from the system.
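
After this registration step, the job's metadata in zookeeper is organized roughly as follows (a simplified illustration of the node layout, not an exhaustive listing):

/{namespace}/{jobName}
    /config       job configuration (cron, shardingTotalCount, ...)
    /leader       master election and the resharding flag
    /servers      status of each job server, keyed by IP
    /instances    job instances that are currently online
    /sharding     shard items and the instance each item is assigned to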

After the job startup information is registered, the scheduleJob method of jobScheduleController is called to schedule the job so that it starts to execute. The scheduleJob method is as follows:

/**
     * Schedule the job.
     * 
     * @param cron the CRON expression
     */
    public void scheduleJob(final String cron) {
        try {
            if (!scheduler.checkExists(jobDetail.getKey())) {
                scheduler.scheduleJob(jobDetail, createTrigger(cron));
            }
            scheduler.start();
        } catch (final SchedulerException ex) {
            throw new JobSystemException(ex);
        }
    }

As in the earlier Quartz explanation, the scheduler combines the jobDetail with the trigger and then calls scheduler.start(), which starts the job scheduling.
From the above code analysis, the startup process of the job is as follows:

3.2 Elastic-Job execution process

From the earlier explanation of Quartz, we know that executing the task actually means running the business logic defined in the JobDetail, so we only need to look at what is in the jobDetail to understand how the job is executed.

private JobDetail createJobDetail(final String jobClass) {
    JobDetail result = JobBuilder.newJob(LiteJob.class).withIdentity(liteJobConfig.getJobName()).build();
    // other code omitted
}

From the above code, we can see that the task actually executed is the LiteJob class:

public final class LiteJob implements Job {
    
    @Setter
    private ElasticJob elasticJob;
    
    @Setter
    private JobFacade jobFacade;
    
    @Override
    public void execute(final JobExecutionContext context) throws JobExecutionException {
        JobExecutorFactory.getJobExecutor(elasticJob, jobFacade).execute();
    }
}

LiteJob obtains the job executor (AbstractElasticJobExecutor) through JobExecutorFactory and executes it:

public final class JobExecutorFactory {
    
    /**
     * Get the job executor.
     *
     * @param elasticJob the distributed elastic job
     * @param jobFacade the internal job facade service
     * @return the job executor
     */
    @SuppressWarnings("unchecked")
    public static AbstractElasticJobExecutor getJobExecutor(final ElasticJob elasticJob, final JobFacade jobFacade) {
        // ScriptJob
        if (null == elasticJob) {
            return new ScriptJobExecutor(jobFacade);
        }
        // SimpleJob
        if (elasticJob instanceof SimpleJob) {
            return new SimpleJobExecutor((SimpleJob) elasticJob, jobFacade);
        }
        // DataflowJob
        if (elasticJob instanceof DataflowJob) {
            return new DataflowJobExecutor((DataflowJob) elasticJob, jobFacade);
        }
        throw new JobConfigurationException("Cannot support job type '%s'", elasticJob.getClass().getCanonicalName());
    }
}

As can be seen, the job executor factory JobExecutorFactory returns the corresponding job executor according to the job type, and then the execute() method of that executor is called. Let's take a look at the execute function:

// AbstractElasticJobExecutor.java
public final void execute() {
   // Check the job execution environment
   try {
       jobFacade.checkJobExecutionEnvironment();
   } catch (final JobExecutionEnvironmentException cause) {
       jobExceptionHandler.handleException(jobName, cause);
   }
   // Get the sharding contexts of the current job server
   ShardingContexts shardingContexts = jobFacade.getShardingContexts();
   // Post the job status trace event (State.TASK_STAGING)
   if (shardingContexts.isAllowSendJobEvent()) {
       jobFacade.postJobStatusTraceEvent(shardingContexts.getTaskId(), State.TASK_STAGING, String.format("Job '%s' execute begin.", jobName));
   }
   // Skip this trigger if the job is still running, marking it as misfired
   if (jobFacade.misfireIfRunning(shardingContexts.getShardingItemParameters().keySet())) {
       // Post the job status trace event (State.TASK_FINISHED)
       if (shardingContexts.isAllowSendJobEvent()) {
           jobFacade.postJobStatusTraceEvent(shardingContexts.getTaskId(), State.TASK_FINISHED, String.format(
                   "Previous job '%s' - shardingItems '%s' is still running, misfired job will start after previous job completed.", jobName, 
                   shardingContexts.getShardingItemParameters().keySet()));
       }
       return;
   }
   // Run the callbacks before the job executes
   try {
       jobFacade.beforeJobExecuted(shardingContexts);
       //CHECKSTYLE:OFF
   } catch (final Throwable cause) {
       //CHECKSTYLE:ON
       jobExceptionHandler.handleException(jobName, cause);
   }
   // Execute the normally triggered job
   execute(shardingContexts, JobExecutionEvent.ExecutionSource.NORMAL_TRIGGER);
   // Execute the misfired triggers that were skipped
   while (jobFacade.isExecuteMisfired(shardingContexts.getShardingItemParameters().keySet())) {
       jobFacade.clearMisfire(shardingContexts.getShardingItemParameters().keySet());
       execute(shardingContexts, JobExecutionEvent.ExecutionSource.MISFIRE);
   }
   // Perform job failover if necessary
   jobFacade.failoverIfNecessary();
   // Run the callbacks after the job executes
   try {
       jobFacade.afterJobExecuted(shardingContexts);
       //CHECKSTYLE:OFF
   } catch (final Throwable cause) {
       //CHECKSTYLE:ON
       jobExceptionHandler.handleException(jobName, cause);
   }
}

The main flow of the execute function:

  1. Check the job execution environment.
  2. Get the sharding contexts of the current job server: the current shard information is obtained through jobFacade.getShardingContexts(). The master node divides the shard items according to the configured sharding strategy and stores the result in zookeeper; the other nodes then fetch the division result from zookeeper.
  3. Post the job status trace event.
  4. If the previous execution is still running, mark this trigger as misfired and skip it.
  5. Run the callbacks before the job executes.
  6. Execute the normally triggered job.
    In the end, the execute method in MyElasticJob is called, which runs the user's business logic.
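
For reference, the delegation in the Simple job executor looks roughly like this (simplified from the SimpleJobExecutor source):

// SimpleJobExecutor.java (simplified): each shard item is processed by calling the user's SimpleJob
@Override
protected void process(final ShardingContext shardingContext) {
    simpleJob.execute(shardingContext);
}
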
    The execution process of the entire Elastic-Job is as follows:

4. Elastic-Job optimization practice

4.1 Idling problem

Elastic-Job jobs fall into two kinds according to whether an implementation class is required. Simple and Dataflow jobs require the user to write an implementation class that implements SimpleJob or DataflowJob. The other kind requires no implementation class, such as Script jobs and Http jobs: the user only fills in the corresponding configuration on the configuration platform, and in the background we periodically pull the latest registered tasks from the configuration platform and then execute the Script or Http jobs the user has recently registered.

In the production environment, there are many machines in the cluster executing jobs, but each job registered by users has only a few shards (most have only one). From the previous analysis, even for a task with a single shard, every machine in the cluster takes part in the scheduling, yet only the machine that obtains that shard actually runs it; the rest idle because they hold no shard, which is a waste of computing resources.

4.2 Solution

To solve the idling caused by a small number of shards and a large number of execution servers, our solution is to designate the execution servers for a task when the user registers it on the configuration platform, with the number of execution servers M = number of shards + 1 (the extra machine serves as a redundant backup). For example, if a user's job has 2 shards, the background sorts the machines by their current load every day and picks the 3 machines with the lightest load as the execution servers. When the machines periodically pull tasks from the configuration platform, a machine that finds it is not an execution server for a task simply does not run that job; only the task's execution servers run it. This ensures reliability while avoiding excessive idling and improving efficiency.
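
The filtering logic can be sketched as follows (all names here are hypothetical and only illustrate the idea, not the actual implementation):

// Hypothetical sketch: a machine skips jobs whose executor list does not include it
List<String> executorServers = configPlatform.getExecutorServers(jobConfig.getJobName());
String localIp = InetAddress.getLocalHost().getHostAddress();
if (!executorServers.contains(localIp)) {
    // This machine is not an executor for the job, so it does not schedule it and stays free for other work
    return;
}
registerAndSchedule(jobConfig);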

5. OPPO Mass Job Scheduling Scheme

Elastic-Job uses zookeeper to implement its elastic distributed features, which meets user needs when the task volume is small, but it also has the following shortcomings:

  1. The elastic distributed function of Elastic-Job strongly relies on zookeeper, and zookeeper easily becomes a performance bottleneck.
  2. The number of shards divided by the task may be less than the number of instances executing the task, causing some machines to run idle.

Given these shortcomings of Elastic-Job, the OPPO middleware team adopted a centralized scheduling scheme for massive task scheduling. User jobs are no longer triggered locally by Quartz; instead, local tasks are triggered by messages received from the server. The user first registers the task on the registration platform; the server periodically scans the registration platform's database for tasks that need to run in the near future (the next 30 seconds), generates a delay message according to each task's actual execution time, and writes it to a message queue that supports delayed delivery; the task servers then pull the messages from the queue and trigger job execution. This centralized approach, in which the central server triggers execution through messages, both overcomes the zookeeper performance bottleneck and avoids idle task servers, and can meet the execution requirements of a large number of tasks.
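
A rough sketch of the server-side trigger loop (all names are hypothetical; this only illustrates the idea, not the real implementation):

// Hypothetical sketch: scan tasks due in the next 30 seconds and publish delay messages for them
List<TaskInfo> dueTasks = taskStore.findTasksDueWithin(Duration.ofSeconds(30));
for (TaskInfo task : dueTasks) {
    long delayMillis = task.getNextFireTime() - System.currentTimeMillis();
    // When the delay expires, the consumer on the task server receives the message and triggers the local job
    delayMessageQueue.send(new DelayMessage(task.getJobName(), task.getParams()), delayMillis);
}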

Summary

Elastic-Job uses Quartz to schedule jobs and introduces zookeeper for distributed coordination. On top of this high-availability solution it adds elastic scaling and data sharding, making fuller use of distributed server resources and thereby providing distributed task scheduling. At the same time, because of the sharding design, servers that obtain no shard run idle, which can be worked around in actual production as described above.

Author profile
Xinchun, OPPO Senior Backend Engineer
He is currently responsible for the development of distributed job scheduling and focuses on middleware technologies such as message queues, Redis, and ElasticSearch.

Get more exciting content, scan the code to follow the [OPPO Digital Intelligence Technology] public account

