Introduction: The author believes that with the development of cloud computing, serverless will become the default computing paradigm of the cloud era, and more and more enterprise customers will adopt this technology.
Author: Luo Hao
Video replay: click here to view the related live broadcast.
Serverless application engine component architecture
In the earliest days, software was generally designed as a monolith: the application, together with its database, storage, and so on, was deployed directly on a physical server. The problem with monolithic applications is that as an enterprise grows, scalability is poor and release efficiency is very low. Later came the era of microservices, whose mainstream frameworks are mostly based on the Java language. The advantages of a microservice architecture are high iteration efficiency and fairly good scalability, but its resource consumption and cost are relatively high. As technology evolved, containerization accelerated the adoption of microservices. However, not every enterprise is suited to microservices: as system complexity increases, the development overhead and the operation and maintenance costs of microservices also grow. Whether an enterprise chooses a monolithic or a microservice architecture depends on the complexity of its system.
With the development of the public cloud, more and more users are deploying their services to the cloud, and the deeper the cloud is used, the more apparent the architectural advantages become. The first stage is Rehost: replacing local physical servers with cloud hosts without changing the application. This hosting model is the most basic way to use the cloud, and it does not maximize the cloud's efficiency. The next stage is Re-platform: replacing self-built application infrastructure with managed cloud services while leaving the application basically unchanged. But Re-platform is still not the best approach. Going one step further, the application itself can be re-architected, namely Refactor: microservices and containers are used to rebuild both the underlying infrastructure and the software architecture, maximizing the value of the cloud. From a long-term perspective the overall benefit is the greatest, but in the short term the migration cost is also relatively high.
If an application is refactored and built around cloud-native products and services, you can enjoy the convenience of cloud computing to the fullest. At the same time, this approach raises several concerns:
- Investment cost (migration/refactoring);
- Degree of cloud vendor lock-in;
- Ease of use of the cloud (onboarding threshold/maintenance);
- Security.
To address this, Alibaba Cloud launched the Serverless Application Engine (SAE), a fully managed platform designed specifically for applications and microservices. For example, Java microservices can currently be migrated to the cloud with zero code transformation while still enjoying complete microservice governance capabilities. Users who want to do a containerized upgrade can also use this platform.
The core technology of the Serverless application engine
What are the components of SAE, and how are the various product capabilities combined? You can look at this component architecture. The green part in the diagram is what users need to care about: their various business applications. Around them, SAE provides tooling, such as the Cloud Toolkit plug-in to help deploy local code to the cloud, and integration with Yunxiao (Apsara DevOps) to provide pipeline capabilities. The orange part in the diagram is the SAE platform itself, which provides many capabilities. For example, if you build an online store, the storefront is an independent service module that can be iterated, developed, and managed on its own, and you can configure an elasticity policy for it: during a big promotion, the storefront service scales automatically according to actual traffic. This is a core value of SAE. SAE therefore provides not only resource management but also application lifecycle management and microservice governance; it is a fully managed application platform. At the resource level, SAE encapsulates a K8s cluster; beneath K8s is the infrastructure, built on X-Dragon (Shenlong) bare-metal servers and secure containers. At this level, SAE handles resource management and scheduling for users.
Next, let's look at SAE's core capabilities. First, consider the full process a traditional enterprise goes through to deploy an application: purchase ECS resources, build and initialize a cluster, set up the environment, and then, once development is finished, test and deploy, plus deploy monitoring, logging, and other components. After everything is online, the system enters a maintenance state that includes both resource operations and business operations. Using SAE saves many of these steps. First, the underlying K8s cluster is maintained by the cloud vendor; users only need to submit an image or a JAR package to deploy the system onto the SAE platform. Second, monitoring and logging are already provided by the platform, so users only need to focus on business logic and do not need to maintain resources.
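As a concrete illustration of the "just submit a JAR" workflow, here is a minimal, self-contained Java service of the kind that can be packaged as a runnable JAR and handed to a fully managed platform. It uses only the JDK's built-in HTTP server and nothing SAE-specific; the `SERVER_PORT` variable and the `/health` endpoint are illustrative assumptions, not platform conventions.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

/**
 * Minimal self-contained HTTP service. Packaged as a runnable JAR
 * (e.g. with a Main-Class manifest entry), this is the kind of artifact
 * a fully managed platform can deploy without any cluster setup.
 */
public class StorefrontService {
    public static void main(String[] args) throws IOException {
        // A platform typically injects the listening port via an environment
        // variable; default to 8080 for local runs. (Variable name is illustrative.)
        int port = Integer.parseInt(System.getenv().getOrDefault("SERVER_PORT", "8080"));
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);

        // Simple health-check endpoint, useful for platform readiness probes.
        server.createContext("/health", exchange -> {
            byte[] body = "OK".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        server.start();
        System.out.println("Storefront service listening on port " + port);
    }
}
```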
What if users want to do a grayscale release? SAE provides single-batch, phased (multi-batch), canary, and other release strategies. This capability is provided by default, so services deployed on the platform can be updated without downtime.
For canary releases, where user demand is very strong, SAE supports grayscale both by request content and by traffic ratio. For example, with ratio-based grayscale you can release 50% of the traffic in the first batch and roll out the rest in the second; grayscale can also be performed according to precise traffic rules.
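The two grayscale modes described above can be sketched as a simple routing decision: send a request to the new version either when it matches a content rule (here, a header) or when it falls into the configured traffic percentage. This is only an illustration of the idea, not SAE's actual routing logic; the header name and the hash-bucket scheme are assumptions.

```java
import java.util.Map;

/** Illustrative grayscale router: content-based rule first, then traffic ratio. */
public class GrayscaleRouter {
    private final int grayPercent;        // e.g. 50 means 50% of traffic goes to the new version
    private final String grayHeader;      // header that forces the new version (assumed name)
    private final String grayHeaderValue;

    public GrayscaleRouter(int grayPercent, String grayHeader, String grayHeaderValue) {
        this.grayPercent = grayPercent;
        this.grayHeader = grayHeader;
        this.grayHeaderValue = grayHeaderValue;
    }

    /** Returns true if the request should be served by the new (gray) version. */
    public boolean routeToGray(Map<String, String> headers, String userId) {
        // 1. Content-based grayscale: match on a request attribute such as a header.
        if (grayHeaderValue.equals(headers.get(grayHeader))) {
            return true;
        }
        // 2. Ratio-based grayscale: hash a stable key so the same user always
        //    lands on the same version while the overall split matches the ratio.
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < grayPercent;
    }

    public static void main(String[] args) {
        GrayscaleRouter router = new GrayscaleRouter(50, "X-Gray-Tag", "beta");
        System.out.println(router.routeToGray(Map.of("X-Gray-Tag", "beta"), "user-1")); // true
        System.out.println(router.routeToGray(Map.of(), "user-42")); // depends on the hash bucket
    }
}
```

Hashing a stable key (here the user ID) rather than picking randomly keeps each user pinned to one version for the duration of the grayscale, which makes the two cohorts easier to compare.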
Users of the platform also pay close attention to elasticity, and SAE provides very rich elasticity policies. Horizontal scaling can be triggered by basic monitoring metrics (CPU, memory) and by business metrics (QPS, RT). Scaling on this kind of load model generally suits scenarios with bursty or pulsed traffic, such as online games and social platforms. The second type is scheduled elasticity, which better suits scenarios with predictable peaks and troughs, such as catering and travel.
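As a rough sketch of how the two policy types differ, the example below computes a desired instance count from a CPU target (the same target-tracking idea used by HPA-style autoscalers) and separately holds a fixed capacity inside a scheduled peak window. The thresholds, window, and instance bounds are made-up values, not SAE defaults.

```java
import java.time.LocalTime;

/** Illustrative autoscaling decision combining metric-based and scheduled rules. */
public class ElasticPolicy {
    private final int minInstances;
    private final int maxInstances;
    private final double targetCpuPercent;   // metric target, e.g. keep CPU around 60%
    private final LocalTime peakStart;       // scheduled peak window (illustrative)
    private final LocalTime peakEnd;
    private final int peakInstances;

    public ElasticPolicy(int min, int max, double targetCpu,
                         LocalTime peakStart, LocalTime peakEnd, int peakInstances) {
        this.minInstances = min;
        this.maxInstances = max;
        this.targetCpuPercent = targetCpu;
        this.peakStart = peakStart;
        this.peakEnd = peakEnd;
        this.peakInstances = peakInstances;
    }

    /** Desired instance count given current load and time of day. */
    public int desiredInstances(int currentInstances, double currentCpuPercent, LocalTime now) {
        // Scheduled rule: hold a fixed capacity during the known peak window.
        if (!now.isBefore(peakStart) && now.isBefore(peakEnd)) {
            return clamp(peakInstances);
        }
        // Metric rule: scale proportionally so that CPU moves back toward the target.
        int desired = (int) Math.ceil(currentInstances * currentCpuPercent / targetCpuPercent);
        return clamp(desired);
    }

    private int clamp(int n) {
        return Math.max(minInstances, Math.min(maxInstances, n));
    }

    public static void main(String[] args) {
        ElasticPolicy policy = new ElasticPolicy(2, 20, 60.0,
                LocalTime.of(11, 0), LocalTime.of(14, 0), 10);
        // Burst traffic at 90% CPU outside the peak window: scale from 4 to 6 instances.
        System.out.println(policy.desiredInstances(4, 90.0, LocalTime.of(21, 0)));
        // Inside the lunch peak window: hold the scheduled capacity of 10.
        System.out.println(policy.desiredInstances(4, 30.0, LocalTime.of(12, 0)));
    }
}
```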
Can elasticity efficiency keep up with the demand for elasticity? Normally, deploying an image to the platform involves resource scheduling, creating a pod, pulling the user's image, creating the container, and starting it. To improve this, SAE first built in-place upgrade for applications: during an upgrade release, the user's latest image is pulled and deployed directly on the existing resources, avoiding rebuilding the pod and improving deployment efficiency by 42%.
Second, SAE has built image acceleration, which improves elasticity efficiency by about 30%: when the container is created, the user's image is pulled on demand, which shortens service startup time.
Third, SAE also accelerates the startup of Java applications. The Dragonwell JDK it provides can generate a cache when the JVM and the process first start, and then use that cache to accelerate subsequent restarts, shortening startup time.
Finally, SAE provides monitoring and application diagnostics, with which users can query service call chains, interface response times, GC counts, slow SQL, and so on, helping them locate problems quickly.
Best practices for the Serverless application engine
Migrating microservices or applications to SAE generally takes a few steps. For a monolith, you can package it and deploy it to the platform directly, but a monolithic application needs to separate storage from compute, that is, keep the database separate from the application code and deploy only the compute part to SAE. For microservice applications, you can write a Dockerfile, build an image, and push it to an image repository to complete the deployment; microservice applications can also be deployed to SAE directly as a JAR/WAR package.
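One practical reading of "separating storage from compute" is that the compute part keeps no local state and reads its database endpoint from the environment, so the same artifact can be deployed to a managed platform unchanged. Below is a minimal sketch under that assumption; the environment variable names and the orders table are illustrative, and the corresponding JDBC driver must be on the classpath at runtime.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/**
 * Illustration of storage/compute separation: the stateless compute part reads
 * its database endpoint from environment variables instead of bundling local
 * storage, so it can be packaged and deployed independently of the database.
 */
public class OrderQueryJob {
    public static void main(String[] args) throws SQLException {
        // Environment variable names are illustrative, not platform-defined.
        String url = System.getenv().getOrDefault("DB_URL", "jdbc:mysql://localhost:3306/shop");
        String user = System.getenv().getOrDefault("DB_USER", "shop");
        String password = System.getenv().getOrDefault("DB_PASSWORD", "");

        // Plain JDBC; requires the matching driver (e.g. MySQL) on the classpath.
        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders")) {
            if (rs.next()) {
                System.out.println("orders = " + rs.getLong(1));
            }
        }
    }
}
```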
On cost reduction, SAE also introduced a one-click start/stop feature. For different environments you can enable scheduled start/stop; for example, a test environment that nobody uses at night can simply be shut down to save costs.
SAE provides a variety of tools and approaches for building a DevOps system, for example Jenkins, which most enterprise users already use, or Yunxiao (Apsara DevOps) on the cloud, to do CI/CD. On the application side, the SAE platform can be configured with scheduled start/stop, monitoring alerts, and so on to cover business operations.
For environment management and permission separation, which enterprise users care about, SAE recommends using namespaces to isolate environments: applications in different namespaces cannot access each other. In addition, SAE recommends using the Permission Assistant to generate permission policies for different teams, scoped to the corresponding namespaces or applications, so that teams cannot see or operate each other's applications.
Other users will want to compare SAE with ECS: what capabilities does SAE add? The first is O&M-free, fully managed hosting; the second is one-stop management of the entire application lifecycle, plus microservice governance and optimization, application monitoring, and other value-added capabilities that SAE provides to users.
Serverless application engine customer cases
The first case, the Timing app, is an online course-learning app in the education field. It is a typical example of a monolithic application being restructured into microservices and migrated to the SAE platform. When the epidemic caused Timing's traffic to surge, the original architecture could no longer carry the growth of the business, so the microservice transformation began. In that process the SAE platform was chosen, and the user center, learning center, self-study center, library center, and so on were each split into independent service modules. Compared with building microservices on self-managed cloud hosts, this saved about 35% of the cost.
The other case I want to share is iQiyi Sports, whose entire business is deployed on the SAE platform. In June and July this year, iQiyi Sports broadcast the European Cup matches, and traffic was very high at that time; after the matches ended, traffic fell back again, so elasticity is particularly important for them. SAE's rich elasticity saved a great deal of operations cost, and scale-out efficiency improved by 40% compared with before. Second, the built-in application monitoring improved troubleshooting efficiency by 30% when problems occurred. Overall, SAE also helped iQiyi Sports improve resource utilization by nearly 50%.
I believe that with the development of cloud computing, serverless will become the default computing paradigm in the cloud era, and more and more enterprise customers will adopt this technology.
Click below to visit the official website of the Serverless community for more related information!