Introduction: This article introduces how an asynchronous task processing system addresses long-running and highly concurrent workloads in business applications.
Author: Bu Fang (Alibaba Cloud Serverless Technology Leader)
When we build an application, we want it to be responsive and inexpensive to run. In practice, systems face many challenges: unpredictable traffic spikes, downstream dependencies becoming slow, and a small number of requests consuming disproportionate CPU and memory. These factors can slow the whole system down or even leave it unable to respond. To keep an application responsive, teams often reserve extra computing resources, most of which sit idle most of the time. A better approach is to separate time-consuming or resource-heavy processing logic from the main request path and hand it to a more elastic system for asynchronous execution. This lets requests return to the user quickly while also saving costs.
Generally speaking, logic that is time-consuming, resource-heavy, or error-prone is a good candidate for being split off from the main request flow and executed asynchronously. For example, when a new user registers, the system usually sends a welcome email after registration succeeds; sending that email can be decoupled from the registration flow. Similarly, when a user uploads an image, the system typically generates thumbnails of several sizes, but that image processing does not need to happen inside the upload request. Once the upload succeeds, the request can complete, and thumbnail generation can run as an asynchronous task. This way the application server avoids being overwhelmed by compute-intensive work such as image processing, and users get a faster response. Common asynchronous tasks include:
- Sending email/instant messages
- Checking for spam
- Document processing (format conversion, export, ...)
- Audio/video and image processing (thumbnail generation, watermarking, content moderation, transcoding, ...)
- Calling external third-party services
- Rebuilding search indexes
- Importing/exporting large amounts of data
- Web crawling
- Data cleaning
- ...
Slack, Pinterest, Facebook, and other companies use asynchronous tasks extensively to achieve better service availability and lower costs. According to Dropbox, their business scenarios involve more than 100 different types of asynchronous tasks. A fully featured asynchronous task processing system brings significant benefits:
- Faster system response time. Moving time-consuming, resource-heavy logic out of the request path and executing it asynchronously elsewhere reduces response latency and improves user experience.
- Better handling of bursty traffic. In scenarios such as e-commerce, sudden bursts of requests often hit the system. If the resource-heavy logic is stripped from the request path and executed asynchronously elsewhere, a system with the same resource capacity can absorb much larger traffic peaks.
- Lower cost. Asynchronous tasks typically run from hundreds of milliseconds to several hours. Choosing execution windows appropriate to each task type and using resources more flexibly leads to lower costs.
- Better retry and error handling. Tasks are executed reliably (at-least-once) and retried according to a configured retry policy, giving better fault tolerance. For example, if a call to a third-party downstream service is turned into an asynchronous task with a sensible retry policy, occasional downstream instability will not hurt the task success rate.
- Faster completion of large batches of tasks. Task execution is highly parallel. By scaling the resources of the asynchronous task processing system, massive numbers of tasks can be completed faster at a reasonable cost.
- Better priority management and flow control. Tasks are usually processed with different priorities depending on their type. The system can isolate tasks of different priorities so that high-priority tasks are processed faster while low-priority tasks are not starved.
- More diverse triggering methods. Tasks can be submitted directly via API, triggered by events, or run on a schedule.
- Better observability. Asynchronous task processing systems usually provide task logs, metrics, status queries, and tracing, making tasks easier to observe and problems easier to diagnose.
- Higher R&D efficiency. Users focus on implementing task processing logic, while scheduling, resource scaling, high availability, flow control, and priority management are handled by the task processing system, greatly improving development efficiency.
Task Processing System Architecture
A task processing system usually consists of three parts: task API and observability, task distribution, and task execution. We first introduce these three subsystems, then discuss the technical challenges the overall system faces and how to address them.
Task API/Dashboard
This subsystem provides a set of task-related APIs for creating, querying, and deleting tasks, among other operations. Users access these functions through GUIs and command-line tools, or by calling the APIs directly. Observability, for example via a dashboard, is also very important. A good task processing system should include the following observability capabilities:
- Logs: the system collects and displays task logs, and users can quickly query the logs of a specific task.
- Metrics: the system provides key metrics, such as the number of queued tasks, to help users quickly assess task execution.
- Tracing: the time spent in each stage from task submission to execution, such as time waiting in the queue and actual execution time. The diagram below shows the tracing capabilities of the Netflix Cosmos platform.
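As a sketch of the per-stage timings described above (the record shape and field names are hypothetical, not any real platform's schema), a task can carry a timestamp for each stage so that queue time and execution time can be derived:

```python
from dataclasses import dataclass

@dataclass
class TaskTrace:
    """Hypothetical per-task stage timestamps (seconds since epoch)."""
    submitted_at: float
    started_at: float
    finished_at: float

    @property
    def queue_time(self) -> float:
        # Time spent waiting in the queue before a worker picked the task up.
        return self.started_at - self.submitted_at

    @property
    def execution_time(self) -> float:
        # Time spent actually running on a worker.
        return self.finished_at - self.started_at

trace = TaskTrace(submitted_at=100.0, started_at=100.5, finished_at=103.0)
print(trace.queue_time, trace.execution_time)  # 0.5 2.5
```

A rising aggregate queue time with steady execution time, for example, points at insufficient workers rather than slow task logic.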
Task Distribution
Task distribution is responsible for the scheduling and distribution of tasks. A task distribution system that can be used in a production environment usually has the following functions:
- Reliable task distribution: once a task is submitted successfully, the system should ensure it is scheduled for execution no matter what happens.
- Scheduled/delayed task distribution: many task types should run at a specified time, such as sending emails or messages on a schedule, or generating periodic data reports. In other cases a task can tolerate a long delay: for example, a data analysis job submitted at the end of the workday only needs to finish by the next morning, so it can run in the early hours when resource usage is low, reducing costs through off-peak execution.
- Task deduplication: we generally do not want tasks executed more than once. Besides wasting resources, duplicate execution can have more serious consequences; for example, a metering task could produce an incorrect bill because it ran twice. Executing a task exactly once requires cooperation at every step of the chain, including task submission, distribution, and execution, as well as in the user's task processing code, covering both success and failure paths. Implementing full exactly-once semantics is complex and beyond the scope of this article. In many cases it is valuable for the system to provide the simpler guarantee that a task is successfully executed only once. Deduplication requires the user to specify a task ID at submission time; the system uses this ID to determine whether the task has already been submitted and successfully executed.
- Task retry on error: a reasonable retry strategy is critical for completing tasks efficiently and reliably. Retries should consider several factors: 1) they must match the capacity of the downstream execution system; for example, on receiving a flow-control error from downstream, or when execution is perceived to be a bottleneck, retries need exponential backoff so they do not add pressure and overwhelm the downstream; 2) the retry policy should be simple, clear, and easy for users to understand and configure. This starts with classifying errors into non-retryable errors, retryable errors, and flow-control errors. Non-retryable errors fail deterministically, so retrying is pointless; examples include parameter errors and permission problems. Retryable errors are caused by transient factors, such as network timeouts and internal system errors, and will eventually succeed on retry. Flow-control errors are a special kind of retryable error that usually means the downstream is already at capacity, so retries must use backoff to limit the volume of requests sent downstream.
- Task load balancing: task execution times vary widely, from hundreds of milliseconds to tens of hours, and distributing tasks in a simple round-robin fashion leads to uneven load across execution nodes. A common pattern in practice is to place tasks in a queue from which execution nodes actively pull work according to their own capacity: the queue holds the tasks, and each node takes more only when it can handle them, naturally balancing the load. Task load balancing usually requires cooperation between the distribution and execution subsystems.
- Priority-based distribution: a task processing system usually serves many business scenarios with varying task types and priorities. Tasks tied to the core user experience have higher priority than edge tasks. Even among notifications, a product review notification to a Taobao buyer clearly matters less than a COVID-19 test result notification. On the other hand, the system must maintain a degree of fairness so that high-priority tasks do not always preempt resources and starve low-priority tasks.
- Task flow control: the typical use case is peak shaving. For example, a user submits hundreds of thousands of tasks at once and expects them to be processed gradually over a few hours, so the system must limit the distribution rate to match downstream execution capacity. Flow control is also an important means of protecting system reliability: when submissions of certain task types surge explosively, the system should contain their impact through flow control and reduce the effect on other tasks.
- Batch suspension and deletion of tasks: in production it is very important to be able to suspend and delete tasks in bulk. Users encounter all kinds of situations: if some tasks are misbehaving, it is best to suspend subsequent execution, inspect manually, and resume once everything is confirmed healthy; or low-priority tasks may be suspended temporarily to free computing resources for higher-priority ones. In other cases the submitted tasks are simply wrong and executing them makes no sense, so the system should let users easily delete both running and queued tasks. Suspension and deletion require cooperation between the distribution and execution subsystems.
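The error classification and backoff behavior described above can be sketched as follows. This is a minimal illustration: the exception class names, backoff parameters, and attempt limit are assumptions, not any real system's API, and the sleep is omitted to keep the sketch side-effect free.

```python
import random

class NonRetryableError(Exception):
    """Deterministic failure (bad parameters, permission denied): never retry."""

class RetryableError(Exception):
    """Transient failure (network timeout, internal error): retry with backoff."""

class FlowControlError(RetryableError):
    """Downstream is at capacity: retry, but back off more aggressively."""

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 60.0) -> float:
    """Exponential backoff with full jitter, capped at `cap` seconds."""
    return random.uniform(0, min(cap, base * 2 ** attempt))

def run_with_retry(task, max_attempts: int = 5):
    for attempt in range(max_attempts):
        try:
            return task()
        except NonRetryableError:
            raise                       # retrying cannot help; fail fast
        except FlowControlError:
            delay = backoff_delay(attempt, base=2.0)  # larger base eases downstream load
        except RetryableError:
            delay = backoff_delay(attempt)
        # A real system would sleep(delay) here before the next attempt.
        print(f"attempt {attempt} failed, backing off {delay:.1f}s")
    raise RetryableError("max attempts exhausted")
```

Note how flow-control errors share the retry path but get a larger backoff base, matching the rule that retries must never amplify pressure on an already overloaded downstream.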
Task distribution architectures fall into two modes: pull and push. In pull mode, tasks are distributed through a task queue; execution instances actively pull tasks from the queue and fetch the next task only after finishing the current one. In push mode, a dispatcher role is added: the dispatcher reads tasks from the task queue, schedules them, and pushes them to appropriate execution instances.
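A minimal pull-mode sketch follows, using an in-process `queue.Queue` and threads to stand in for a real message queue (such as Redis) and worker fleet; the task shape and the doubling "work" are purely illustrative:

```python
import queue
import threading

task_queue: "queue.Queue[dict]" = queue.Queue()
results = []

def worker():
    # Pull mode: each worker fetches the next task only after finishing
    # the current one, so its intake naturally matches its capacity.
    while True:
        task = task_queue.get()
        if task is None:                  # sentinel: shut this worker down
            task_queue.task_done()
            break
        results.append(task["payload"] * 2)   # stand-in for real task logic
        task_queue.task_done()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for i in range(5):
    task_queue.put({"payload": i})
for _ in threads:
    task_queue.put(None)                  # one sentinel per worker
for t in threads:
    t.join()

print(sorted(results))  # [0, 2, 4, 6, 8]
```

In a distributed system the queue lives in a separate service and workers connect over the network, which is exactly where the connection-scaling pressure discussed below comes from.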
The pull-mode architecture is straightforward: a task distribution system can be built quickly on popular software such as Redis and performs well in simple task scenarios. However, once features required by complex business scenarios are needed, such as task deduplication, task priorities, batch suspension or deletion, and elastic resource scaling, the implementation complexity of pull mode grows rapidly. In practice, pull mode faces the following main challenges:
- Resource auto-scaling and load balancing are complex. Task execution instances establish connections to the task queue to pull tasks; at large instance scale, this puts great pressure on the queue's connection resources, so a layer of mapping and assignment is needed whereby each instance connects only to its corresponding queues. The figure below shows the architecture of Slack's asynchronous task processing system, where worker nodes connect only to a subset of the Redis instances. This lets worker nodes scale massively, but increases the complexity of scheduling and load balancing.
- For task priorities, isolation, and flow control, separate queues work best. But too many queues increase management overhead and connection resource consumption, and balancing the two is challenging.
- Features such as task deduplication and batch suspension or deletion depend on message queue capabilities, yet few messaging products meet all the requirements, so teams often have to build these features themselves. For example, for scalability reasons it is usually impossible to dedicate a separate queue to each task type; when a queue mixes multiple task types, suspending or deleting one type in bulk becomes much more complicated.
- The task queue couples task types with task processing logic. If a queue contains multiple task types, the processing logic must implement handling for each of them, which is not user friendly. In practice, one user's processing logic does not expect to receive other users' tasks, so queues end up being managed by the users themselves, further increasing their burden.
The core idea of push mode is to decouple the task queue from the task execution instances, making the boundary between the platform and the user clearer. Users focus only on implementing task processing logic, while the platform manages the task queues and the execution node resource pools. This decoupling also means that scaling execution nodes is no longer limited by the queue's connection resources, allowing greater elasticity. However, push mode introduces plenty of complexity of its own: task priority management, load balancing, scheduling and distribution, and flow control are all handled by the dispatcher, which must coordinate with upstream and downstream systems.
In general, once task scenarios become complex, system complexity is high in both pull and push modes. But push mode gives the platform and the user a clearer boundary and simplifies the user's experience, so teams with strong engineering capability usually choose push mode when building a platform-level task processing system.
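The "successfully executed only once" deduplication semantics described earlier can be sketched as follows. This is an in-memory illustration only: a real system would record task status in durable, shared storage (a database or distributed KV store) rather than a dict, and the status names are assumptions.

```python
import threading

class TaskDeduplicator:
    """Illustrative in-memory dedup by user-supplied task ID."""

    def __init__(self):
        self._lock = threading.Lock()
        self._status = {}   # task_id -> "running" | "succeeded"

    def submit(self, task_id: str, run) -> bool:
        """Execute `run` unless this task ID already succeeded or is in flight.
        Returns True only if this call actually executed the task."""
        with self._lock:
            if task_id in self._status:      # already succeeded or running: skip
                return False
            self._status[task_id] = "running"
        try:
            run()
            with self._lock:
                self._status[task_id] = "succeeded"
            return True
        except Exception:
            with self._lock:
                del self._status[task_id]    # failed: a later retry may run it again
            raise
```

The key property is that a *failed* attempt clears the record so retries remain possible, while a *successful* one permanently blocks duplicates under the same ID.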
Task Execution
The task execution subsystem manages a fleet of worker nodes and executes tasks on them elastically and reliably. A typical task execution subsystem needs the following capabilities:
- Reliable task execution. Once a task is submitted successfully, the system should ensure it is executed no matter what; for example, if the node running a task goes down, the task should be rescheduled to another node. Reliable execution is usually achieved through cooperation between the task distribution and task execution subsystems.
- Shared resource pool. Different task types share a unified resource pool, enabling peak shaving, higher resource utilization, and lower costs. For example, scheduling compute-intensive and IO-intensive tasks onto the same worker node makes fuller use of the node's CPU, memory, network, and other resources. A shared pool raises the bar for capacity management, task resource quotas, priority management, and resource isolation.
- Elastic resource scaling. The system scales execution node resources with the load to reduce cost. The timing and magnitude of scaling are critical: scaling on the CPU, memory, and similar resource levels of execution nodes reacts slowly and cannot serve latency-sensitive scenarios, so many systems also scale on metrics such as the number of queued tasks. A further concern is that node scale-out must match the capabilities of upstream and downstream systems; for example, when the distribution subsystem uses queues, worker node expansion should match the queues' connection capacity.
- Task resource isolation. Different tasks running on the same worker node are isolated from each other, usually via container isolation mechanisms.
- Task resource quotas. Usage scenarios are diverse, often mixing multiple task types and priorities. The system should let users set resource quotas per task type or processing function with different priorities, reserving resources for high-priority tasks or capping the resources low-priority tasks can use.
- Simplified task processing code. A good task processing system lets users focus on implementing the logic for a single task, while the system runs tasks in parallel, elastically, and reliably.
- Smooth upgrades. Upgrades to the underlying system do not interrupt the execution of long-running tasks.
- Execution result notification. Task status and results are reported in real time. For tasks that fail, the task input is kept in a dead-letter queue so users can retry manually at any time.
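The queue-length-based scaling decision mentioned above can be sketched in a few lines. The target-per-worker figure and the pool bounds below are assumptions a real autoscaler would tune, not recommended values:

```python
import math

def desired_workers(backlog: int, in_flight: int,
                    tasks_per_worker: int = 10,
                    min_workers: int = 1, max_workers: int = 100) -> int:
    """Size the worker pool so each worker has roughly `tasks_per_worker`
    tasks outstanding, clamped to [min_workers, max_workers]."""
    total = backlog + in_flight
    want = math.ceil(total / tasks_per_worker)
    return max(min_workers, min(max_workers, want))

print(desired_workers(backlog=95, in_flight=5))      # 10
print(desired_workers(backlog=0, in_flight=0))       # 1 (floor)
print(desired_workers(backlog=10_000, in_flight=0))  # 100 (ceiling)
```

Because the queue length reflects demand directly, this reacts much faster than waiting for CPU or memory pressure to build on the nodes, which is the same idea behind queue-based scalers such as KEDA.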
The task execution subsystem usually uses a container cluster managed by K8s as its resource pool. K8s manages nodes and schedules the container instances that run tasks onto appropriate nodes. It also has built-in support for Jobs and CronJobs, which simplifies running job workloads, and it helps implement shared resource pools, task resource isolation, and similar functions. However, K8s's core strength remains Pod/instance management, and in many cases more functionality must be built on top to meet the needs of asynchronous task scenarios. For example:
- The K8s HPA generally struggles to provide the auto-scaling needed in task scenarios. Open source projects such as KEDA provide models that scale on metrics like the number of queued tasks; AWS offers a similar solution built on CloudWatch.
- K8s generally needs to be combined with queues to implement asynchronous tasks, and users are responsible for managing the queue resources.
- Native K8s job scheduling and startup are relatively slow, and job submission TPS is generally below 200, making it unsuitable for high-TPS, low-latency tasks.
Note: jobs in K8s differ somewhat from the tasks discussed in this article. A K8s Job usually involves processing one or more tasks, whereas a task in this article is an atomic concept: a single task runs on exactly one instance, with execution times ranging from tens of milliseconds to hours.
Capability Layering of Asynchronous Task Processing System
Based on the preceding analysis of architecture and functionality, we divide the capabilities of asynchronous task processing systems into the following three levels:
- Level 1: typically built by a team of 1-5 engineers by integrating open source software and cloud services such as K8s and message queues. The system's capabilities are bounded by what those dependencies offer, and customizing it to business needs is difficult. Resource usage is static, with no resource scaling or load balancing, so the supportable business scale is limited. As business scale and complexity grow, development and maintenance costs rise quickly.
- Level 2: typically requires a team of 5-10 engineers. Built on open source software and cloud services with some in-house development to meet common business needs. It lacks complete task priority, isolation, and flow control capabilities, usually configuring separate queues and computing resources for each business party. Resource management is coarse, without real-time scaling and capacity management. The system's limited scalability and coarse resource management make it hard to support large, complex business scenarios.
- Level 3: typically requires a team of 10+ engineers to build a platform-level system capable of supporting large, complex business scenarios. It uses a shared resource pool and has complete task scheduling, isolation and flow control, load balancing, and resource scaling capabilities, with full observability. The boundary between the platform and users is clear, and business teams only need to focus on developing task processing logic.
| Capability | Level 1 | Level 2 | Level 3 |
| --- | --- | --- | --- |
| Reliable task distribution | Supported | Supported | Supported |
| Scheduled/delayed task distribution | Depends on the chosen message queue; scheduled tasks are generally supported, delayed tasks are not | Supported | Supported |
| Task deduplication | Not supported | Supported | Supported |
| Automatic retry on task error | Limited; generally relies on the built-in retry policy of K8s Jobs, and tasks not using K8s Jobs must implement retries in their own processing logic | Limited; generally relies on the built-in retry policy of K8s Jobs, and tasks not using K8s Jobs must implement retries in their own processing logic | Supported; clear platform/user boundary, with retries following user-defined policies |
| Task load balancing | Limited; implemented via the message queue when the execution instance count is small | Limited; implemented via the message queue when the execution instance count is small | Supported; load balancing across large node fleets |
| Task priority | Not supported | Limited; users can reserve resources for high-priority tasks or cap the resources of low-priority tasks | Supported; high-priority tasks can preempt resources of low-priority tasks, with fairness to prevent starvation of low-priority tasks |
| Task flow control | Not supported | Not supported; separate queues and computing resources are generally configured per task type or business party | Supported; flow control at every stage of the system, so bursts of task submissions cannot cause avalanches |
| Batch pause/delete tasks | Not supported | Limited; depends on whether separate queues and computing resources are configured per task type or business party | Supported |
| Shared resource pool | Limited; depends on K8s scheduling, and generally a separate cluster is built per business party | Limited; depends on K8s scheduling, and generally a separate cluster is built per business party | Supported; different task types and business scenarios share the same resource pool |
| Elastic resource scaling | Not supported; the K8s HPA usually cannot meet the scaling needs of task scenarios | Not supported; the K8s HPA usually cannot meet the scaling needs of task scenarios | Supported; real-time scaling on queued task counts, node resource utilization, etc. |
| Task resource isolation | Supported; relies on container isolation | Supported; relies on container isolation | Supported; relies on container isolation |
| Task resource quota | Not supported | Supported | Supported |
| Simplified task processing code | Not supported; the processing logic must pull and execute tasks itself | Not supported; the processing logic must pull and execute tasks itself | Supported |
| Smooth system upgrades | Not supported | Not supported | Supported |
| Execution result notification | Not supported | Not supported | Supported |
| Observability | Relies on the observability of K8s, message queues, and other components; basic task status queries | Relies on the observability of K8s, message queues, and other components; basic task status queries | Full observability from the task level to the system level |
The following table compares Function Compute asynchronous tasks with K8s Jobs:

| Comparison item | Function Compute Asynchronous Tasks | K8s Jobs |
| --- | --- | --- |
| Applicable scenarios | Suited to both real-time tasks lasting tens of milliseconds and offline tasks lasting tens of hours | Suited to offline tasks with low demands on submission speed, relatively fixed load, and low real-time requirements |
| Task observability | Supported; rich capabilities including logs, metrics such as queued task counts, per-stage task latency, and task status queries | Must be built by integrating open source software |
| Automatic scaling of task instances | Supported; instance resources scale automatically with the number of queued tasks | Not supported; auto-scaling and instance load balancing are generally implemented via task queues, with high complexity |
| Task instance scaling speed | Milliseconds | Minutes |
| Task instance resource utilization | Users only choose an instance size; instances scale automatically and are metered by actual task processing time, so utilization is high | Instance size and count must be set at job submission; instances are hard to scale and balance automatically, so utilization is low |
| Task submission speed | A single user can submit tens of thousands of tasks per second | The entire cluster can launch at most hundreds of jobs per second |
| Scheduled/delayed task submission | Supported | Scheduled tasks supported; delayed tasks not supported |
| Task deduplication | Supported | Not supported |
| Pause/resume task execution | Supported | Alpha (K8s v1.21) |
| Abort a specific task | Supported | Limited; done indirectly by terminating the task instance |
| Task flow control | Supported; flow control at granularities such as user and task processing function | Not supported |
| Automatic callback of task results | Supported | Not supported |
| DevOps cost | Only the task processing logic needs to be implemented | A K8s cluster must be maintained |