
Spring Boot is a suite built on top of the Java Spring framework. It pre-packages a set of Spring components so that developers can create stand-alone applications with minimal configuration. In the cloud-native world, many platforms can run Spring Boot applications, such as virtual machines and containers, but one of the most attractive options is to run them in a serverless way. In this series of articles, I analyze the advantages and disadvantages of running Spring Boot applications on a serverless platform from five angles: architecture, deployment, monitoring, performance, and security. To make the analysis representative, I chose mall, an open-source e-commerce application with more than 50k stars on GitHub, as the example. This is the fourth article in the series; it shows you how to tune serverless application performance.

1. Instance startup speed optimization

In the hands-on tutorial of the previous article, I believe everyone could feel the convenience of serverless: simply upload a code package or image and you can launch an elastic, highly available web application. However, the first startup still suffers from cold-start latency. A Mall application instance takes about 30 seconds to start, so users hitting a cold start will experience a long delay, which feels slow in this "instant era". Still, the flaw does not outweigh the benefits. (A "cold start" happens when no instance is available to serve a request: after a period with no traffic, the serverless platform reclaims the function's instances, and the next request forces the system to pull up a new instance on the fly. That process is called a cold start.)

Before optimizing the cold start, we first need to measure how long each stage of a cold start takes. Start by enabling the tracing feature on the service configuration page of the Function Compute (FC) console.

Send a request to the mall-admin service, then check the FC console after it succeeds; the corresponding request information will appear. Make sure "view function errors only" is turned off so that all requests are displayed. Metric monitoring and trace collection have a certain delay, so if nothing shows up, wait a moment and refresh. Find the request flagged as a cold start and click Request Details under More.

The trace shows how long each phase of the cold start takes. A cold start includes the following steps:

  • Code preparation (PrepareCode): mainly downloading the code package or image. Since we enabled image acceleration, the entire image does not need to be downloaded, so this step is very short.
  • Runtime initialization (RuntimeInitialization): from the moment the function starts until the Function Compute (FC) system detects that the application port is ready. This includes the application startup time. Running s mall-admin logs on the command line and checking the log timestamps also shows that starting the Spring Boot application accounts for most of this time.
  • Application initialization (Initialization): Function Compute provides an Initializer interface, and users can put initialization logic in the initializer to be executed there.
  • Invocation: the time spent processing the request, which is very short.

The trace above shows that instance startup time is the bottleneck, and we can optimize it in several ways.

1.1. Using Reserved Instances

Java applications generally start slowly, and during initialization they often need to interact with many external services, which takes time. Such steps are required by the business logic, and their latency is hard to optimize away. Therefore, Function Compute provides reserved instances. The start and stop of a reserved instance are controlled by the user; it keeps running even when there are no requests, so there is no cold start. Of course, the user pays for the entire lifetime of the instance, even when it processes no requests.

In the Function Compute console, we can set reserved instances for functions on the "Auto Scaling" page.

In the console, the user configures the minimum and maximum number of instances. The platform keeps the minimum number of instances reserved, while the maximum is the upper limit on instances for the function. Users can also create rules for scheduled reservation or reservation driven by metrics.

Once a reservation rule is created, a reserved instance is launched. When it is ready, subsequent requests to the function no longer hit a cold start.

1.2. Optimize instance startup speed

Lazy initialization

In Spring Boot 2.2 and later, a global lazy-initialization flag can be turned on. This speeds up startup, at the cost of potentially longer latency for the first request, which has to wait for its components to be initialized on first use.
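For reference, this switch is a standard Spring Boot property, so it can also live in the application's own configuration; a minimal application.properties fragment:

```properties
# Spring Boot 2.2+: create beans lazily, on first use, instead of at startup
spring.main.lazy-initialization=true
```

The environment-variable form used with s.yaml is simply Spring Boot's relaxed binding of this same property.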

The following environment variable can be configured in s.yaml for the relevant application:

SPRING_MAIN_LAZY_INITIALIZATION=true

Turn off the optimizing compiler

By default, the JVM uses multiple tiers of JIT compilation. The higher tiers gradually improve the application's peak performance, but they also increase memory usage and lengthen startup. For short-running serverless applications, consider turning the higher tiers off to trade long-term efficiency for faster startup.

The following environment variables can be configured in s.yaml for the relevant application:

JAVA_TOOL_OPTIONS="-XX:+TieredCompilation -XX:TieredStopAtLevel=1"
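For local testing outside FC, the same flags can be passed directly on the java command line; the jar name below is illustrative:

```
# Limit JIT to tier 1 (the C1 compiler only) to shorten startup
java -XX:+TieredCompilation -XX:TieredStopAtLevel=1 -jar mall-admin.jar
```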

Example of setting environment variables in s.yaml:

As shown in the figure below, configure the environment variables for the mall-admin function, then execute sudo -E s mall-admin deploy to redeploy.
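As a rough sketch, the relevant part of s.yaml might look like the fragment below. The exact schema depends on the Serverless Devs fc component version you use, so treat the key names here as assumptions and check them against your component's documentation:

```yaml
# Fragment of s.yaml (Serverless Devs, fc component) -- schema may vary by version
mall-admin:
  component: fc
  props:
    function:
      name: mall-admin
      environmentVariables:
        SPRING_MAIN_LAZY_INITIALIZATION: "true"
        JAVA_TOOL_OPTIONS: "-XX:+TieredCompilation -XX:TieredStopAtLevel=1"
```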

Log in to the instance to check if the environment variables are configured correctly

Find the corresponding request in the request list on the console function details page, and click the "Instance Details Link" in More.

On the instance details page, click "Login to Instance".

Execute the echo command in the shell interface to check whether the corresponding environment variables are set correctly.

Note: For non-reserved instances, the Function Compute system automatically reclaims an instance after a period with no requests, after which it can no longer be logged into (the "Login to Instance" button on the instance details page is grayed out). So log in promptly after the invocation, before the instance is reclaimed.

2. Configure reasonable instance parameters

When we choose an instance size, such as 2C4G or 4C8G, we want to know how many requests an instance should handle so that resources are fully used while performance is preserved. When the load exceeds a limit, the system should quickly spin up new instances to keep the application running smoothly. Instance overload can be measured along several dimensions, such as QPS exceeding a threshold, or instance CPU/memory/network/load metrics exceeding thresholds. Function Compute uses instance concurrency as the measure of instance load and the basis for scaling. Instance concurrency is the number of requests an instance executes at the same time; for example, setting it to 20 means an instance can process at most 20 requests simultaneously at any moment.
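To make the admission rule concrete, here is a minimal sketch, in plain Java, of how a per-instance concurrency cap could be modeled with a semaphore. This illustrates the concept only; it is not FC's actual implementation, and the class and method names are invented for the example.

```java
import java.util.concurrent.Semaphore;

// Sketch of a per-instance concurrency cap, as in "concurrency = 20".
// Not FC's real scheduler -- just a model of the admission rule.
public class InstanceConcurrencyLimiter {
    private final Semaphore slots;

    public InstanceConcurrencyLimiter(int maxConcurrency) {
        this.slots = new Semaphore(maxConcurrency);
    }

    // True if this instance can take one more in-flight request;
    // false means the platform must route elsewhere or scale out.
    public boolean tryAdmit() {
        return slots.tryAcquire();
    }

    // Called when a request completes, freeing one slot.
    public void release() {
        slots.release();
    }

    // Remaining capacity on this instance.
    public int freeSlots() {
        return slots.availablePermits();
    }
}
```

When tryAdmit returns false for all live instances, that is exactly the signal the platform uses to scale out.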

Note: Be careful to distinguish instance concurrency from QPS. Concurrency is the number of in-flight requests at a single moment; by Little's law, QPS ≈ concurrency / average request latency.

Using instance concurrency to measure load has the following advantages:

  • The system can track the instance-concurrency metric quickly enough to scale in near real time. Instance-level metrics such as CPU/memory/network/load are usually collected in the background, and it can take tens of seconds before they drive scaling, which is too slow for the elastic needs of online applications.
  • Instance concurrency reflects the system's load level stably under many conditions. If request latency were used as the indicator, the system could hardly tell whether latency rose because the instance is overloaded or because a downstream service became the bottleneck. For example, a typical web application accesses a MySQL database; if the database becomes the bottleneck and request latency rises, scaling out not only does not help but piles more load onto the database and makes things worse. QPS is coupled to request latency and suffers from the same problem.

Although instance concurrency has these advantages as a scaling signal, users often do not know what value to set. I recommend the following process to determine a reasonable concurrency:

  1. Set the function's maximum number of instances to 1, so that you measure the performance of a single instance.
  2. Use a load testing tool to stress the application and observe metrics such as TPS and request latency.
  3. Gradually increase the instance concurrency. If performance remains good, keep increasing it; if performance falls below expectations, decrease it.
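As a sanity check on step 3, the concurrency a single instance actually sustains can be estimated from the stress-test numbers via Little's law: average in-flight requests ≈ throughput × average latency. A small sketch; the figures used below are illustrative, not measurements of mall:

```java
// Little's law: average in-flight requests = throughput (QPS) * average latency (s).
// Use stress-test results to estimate a reasonable instance concurrency setting.
public class ConcurrencyEstimator {
    public static double estimateConcurrency(double qps, double latencySeconds) {
        return qps * latencySeconds;
    }
}
```

For example, if a single instance sustains 100 QPS at 200 ms average latency, about 20 requests are in flight on average, which suggests an instance concurrency of around 20.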

Summary

For more content, follow the Serverless WeChat official account (ID: serverlessdevs), which gathers comprehensive serverless technical content and regularly hosts serverless events, live streams, and user best practices.

