Introduction: Originally, users of microservices had to build many components themselves, including PaaS-level microservice technical frameworks, operation and maintenance of IaaS and K8s, and observability components. SAE provides an overall solution for all of these, so that users only need to pay attention to their own business systems, which greatly lowers the threshold for adopting microservice technology.
Author | Tao Chen
Advantages and pain points of microservice architecture
1. The background of the birth of the microservice architecture
Looking back at the early days of the Internet, the Web 1.0 era, portal sites dominated and monolithic applications were mainstream. R&D teams were relatively small, and the main challenges were the complexity of the technology and the scarcity of technical talent.
In the Internet era of the new century, large-scale applications such as social networking and e-commerce emerged, with enormous growth in both users and traffic. As R&D teams expanded, collaboration became a problem. SOA was a product of this era; its core ideas were distribution and decomposition. However, because it relied on single-point components such as the ESB, it was never widely adopted. Technologies Alibaba launched at the time, such as HSF and the open-source Dubbo, were in effect lightweight distributed solutions, and the ideas behind the microservice architecture were already taking shape.
The architecture got its official name in the mobile Internet era. By then life had become fully Internetized, lifestyle apps of every kind had emerged, and the number of netizens and the volume of traffic had increased significantly compared with the previous era. Large R&D teams had also become mainstream, and the pursuit of efficiency was now universal rather than the concern of a few giants. The formal introduction of the microservice architecture, together with the popularization of frameworks such as Spring Cloud and Dubbo, greatly advanced microservice technology.
Now we have entered the era of comprehensive digitalization: society is fully Internetized, and organizations of all kinds (including government bodies and relatively traditional enterprises) need strong R&D capabilities. The challenges of traffic and business complexity, and the continued expansion of R&D teams, have raised everyone's requirements for efficiency even further. As a result, the microservice architecture has been promoted and popularized still more widely.
After so many years of development, the microservice architecture is an enduring technology. Why can it continue to develop?
2. The advantages of microservice architecture
Let's review the difference between microservice architecture and monolithic architecture, as well as the core advantages of microservice architecture.
The core problem of the monolithic architecture is that its conflict domain is too large, starting with the shared code base, which makes conflicts especially likely during development. Module boundaries and sizes are also unclear, which further lowers team efficiency.
Under the microservice architecture, the core idea is separation: decoupled development and decoupled deployment, which greatly unleashes a team's R&D efficiency. This return to simplicity is one of the reasons the microservice architecture has continued to thrive.
3. Pain points in the era of microservices
According to the law of conservation of complexity, solving a problem in one place makes it reappear in another form, where it must be solved again. The microservice era accordingly introduces many new pain points, and the core one is stability. Once local calls become remote calls, stability problems can surge, including call amplification: instability in an upper layer may actually be caused by remote-call problems at the bottom. Capabilities such as rate limiting, degradation, and call-chain tracing therefore become necessary.
In the microservice era, the complexity of locating a problem also grows exponentially, and service governance becomes necessary. In addition, without good up-front design, the number of microservice applications can explode, complicating collaboration between developers and testers.
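To make the idea of "rate limiting and degradation" concrete, here is a minimal sketch (not SAE's implementation; the class and method names are hypothetical): when too many calls are already in flight, the client degrades to a fallback value instead of piling more load onto a struggling downstream service.

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Minimal sketch of rate limiting with degradation for a remote call.
public class RateLimitedClient {
    private final Semaphore permits;

    public RateLimitedClient(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    // Try the remote call; if the concurrency limit is reached, degrade
    // to a cached/default answer instead of propagating the overload.
    public String call(Supplier<String> remote, String fallback) {
        if (!permits.tryAcquire()) {
            return fallback;          // degradation path
        }
        try {
            return remote.get();      // normal remote call
        } finally {
            permits.release();
        }
    }
}
```

Production-grade frameworks add time windows, circuit breaking, and metrics on top of this basic idea.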
After so many years of development of microservice technology, the industry has actually already had some solutions.
As shown in the figure above, to make good use of microservice technology, a company may have to build multiple supporting systems in addition to its own business system: CI/CD and release systems, R&D process tooling, tools related to microservice components, and observability systems for real-time monitoring, alerting, service governance, and call-chain tracing. It also has to operate and maintain the underlying IaaS resources, and in this era that often means maintaining a K8s cluster as well.
Therefore, many companies choose to build an operations team or a middleware team, or have back-end developers take this on part-time. But consider: how many companies are satisfied with these internal systems? How efficiently do they iterate? How many open-source pitfalls have been hit along the way, and have they all been resolved? These remain continuing pain points for enterprise CTOs and architects.
Solutions in the Serverless Era
1. Serverless era
Serverless was first proposed in 2012, and after the launch of an explosive product like Lambda in 2014, its influence briefly peaked. But when such a new thing suddenly met real, complex production environments, much of it did not fit and needed improvement, so it entered a trough in the following years.
However, the Serverless philosophy of "leave simplicity to users and complexity to the platform" is a fundamentally correct direction. Exploration and development of Serverless have therefore continued in both the open-source community and industry.
Alibaba Cloud launched Function Compute (FC) in 2017 and the Serverless App Engine (SAE) in 2018. From 2019 onward, Alibaba Cloud has continued to invest in the Serverless field, including image-based deployment, reserved capacity, microservice scenarios, and more.
2. Serverless market overview
In Forrester's latest evaluation in 2021, Alibaba Cloud's Serverless product capabilities ranked first in China and among the leaders worldwide, and Alibaba Cloud also has the largest share of Serverless users in China. This shows that Alibaba Cloud Serverless is increasingly entering real enterprise production environments, and that more and more enterprises recognize the capabilities and value of Serverless and of Alibaba Cloud Serverless.
3. SAE solution
As we have seen, under the traditional microservice architecture, enterprises must build a great deal themselves to make good use of microservice technology. So in the Serverless era, how does the SAE product solve this?
SAE carries the Serverless concept to the extreme: it hosts not only IaaS resources but also the K8s layer above them, and further integrates a white-screen (console) PaaS, an enterprise-grade microservice suite, and observability packages. Integrated into one solution, this gives users an out-of-the-box microservice platform and lets enterprises and developers use microservices with ease.
1. Zero-threshold PaaS
As the figure shows, SAE provides a white-screen (console) operating interface at the top level. Its design closely matches a typical enterprise PaaS or release system, including open-source PaaS systems, which greatly lowers the threshold for companies to get started with SAE, arguably to zero. It also builds in Alibaba's release best practices, the "three axes" of releasing: observability, grayscale, and rollback.
In addition, it provides enterprise-grade enhancements such as namespace-based environment isolation and fine-grained access control. As the figure shows, two relatively independent modules within an enterprise can be isolated from each other through namespaces.
2. Enhanced microservice governance
For microservice governance enhancement, especially for the Java language, SAE uses an agent, which is non-intrusive, imperceptible to users, and requires no upgrades on their side; the agent is also fully compatible with open source. With almost no code changes, users gain capabilities such as lossless service online/offline, API management, rate limiting and degradation, and link tracing.
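The agent approach can be sketched with the standard `java.lang.instrument` API (a hypothetical illustration, not SAE's actual agent): a `premain` hook registers a bytecode transformer, so governance logic can be injected without any change to the application's code.

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Sketch of a non-intrusive Java agent. Launched with:
//   java -javaagent:governance-agent.jar -jar app.jar
public class GovernanceAgent {
    // Invoked by the JVM before the application's main() runs.
    public static void premain(String args, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                return GovernanceAgent.transform(classfileBuffer);
            }
        });
    }

    // A real agent would rewrite bytecode here to weave in tracing,
    // rate limiting, etc.; this sketch passes the bytes through unchanged.
    static byte[] transform(byte[] classBytes) {
        return classBytes;
    }
}
```

Because the agent sits between the JVM and the application classes, governance features ship and upgrade with the platform, not with user code.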
3. Full-link grayscale across front and back ends
Two capabilities deserve a closer look. The first is full-link grayscale across the front and back ends. Using the agent technology described above, SAE connects the full link from web request to gateway to consumer to provider, so users can implement grayscale release scenarios through simple console configuration. Anyone who has tried to build such a capability in-house knows how complex it is.
4. CloudToolkit end-to-cloud joint debugging
The second capability is CloudToolkit's end-to-cloud joint debugging. As everyone knows, the number of applications explodes under microservices. With so many applications being developed locally, how can a developer safely and conveniently call a service running in the cloud? With CloudToolkit, users can easily connect the local environment to the cloud environment and debug across the two, which greatly lowers the threshold for development and testing.
5. Powerful application monitoring & diagnosis
Under microservices, because applications multiply rapidly and call links grow extremely long, locating a problem becomes very complicated. SAE integrates Alibaba Cloud's various observability products, including Prometheus, SLS, and basic infrastructure monitoring, and provides rich solutions across Tracing, Logging, and Metrics: request-link queries, indicator analysis for common diagnostic scenarios, basic monitoring, real-time logs, event notifications, and so on. All of these greatly reduce the day-to-day troubleshooting burden of operating microservices.
SAE's technical principles and extreme elasticity construction
I have now covered three parts: zero-threshold PaaS, the enterprise-grade microservice suite, and observability. Next I want to introduce a core Serverless module: operations-free IaaS and the construction of elasticity at the IaaS level.
1. SAE business architecture
This SAE business architecture diagram makes it relatively clear that users do not need to care about IaaS resources in SAE, including storage and network. SAE also hosts the K8s PaaS-layer components, meaning users do not need to operate and maintain K8s themselves. Above the K8s layer, SAE provides enhanced capabilities such as microservice governance and application lifecycle management. In terms of elasticity, SAE's scaling can complete within 15 seconds, which in many enterprise scenarios already helps developers cope with sudden traffic. Combined with multiple environments and other best practices, this achieves cost reduction and efficiency improvement.
2. SAE technical architecture
So how does SAE build an operations-free experience, where from the user's point of view there are no IaaS or K8s resources to manage at all?
As the figure above shows, the bottom layer of SAE actually uses secure-container technology, which, compared with Docker's RunC, provides security at the virtual-machine level. In the RunC scenario, because containers share the host kernel, on a public cloud product user A might penetrate into a container of user B, posing security risks. Secure containers apply virtual-machine security techniques to achieve production-grade security isolation, and they have also been integrated into the K8s and container ecosystem. The combination of secure containers and the container ecosystem strikes a good balance between security and efficiency.
In terms of storage and network isolation, SAE must consider not only isolation within traditional K8s but also public cloud products: most users already have many storage and network resources on the public cloud, and these must be connected as well.
SAE adopts the ENI (elastic network interface) technology of cloud products and attaches the ENI directly to the secure sandbox, so users get both compute-layer isolation and network-layer connectivity.
Mainstream secure-container technologies today include Kata, Firecracker, and gVisor. SAE uses Kata, the earliest and most mature of these, to achieve secure compute isolation. Beyond security isolation, secure containers also provide performance isolation and fault isolation.
To give an easy-to-understand example: under RunC's shared kernel, one user's container triggering a kernel failure can directly affect the whole physical machine. With SAE's secure containers there is no such risk; at most, only that one secure container is affected.
3. Extreme elasticity and extreme cost
As the figure below shows, when elastic efficiency is pushed to the extreme, the user's cost can also be reduced to the extreme. Comparing the graphs on the left and right makes clear the effect elasticity can have on user cost.
1. SAE's extreme elasticity construction: deployment & restart
What has SAE done for elasticity? Creating a Pod in traditional K8s goes through scheduling, init-container creation, user-image pull, user-container creation and start, and application startup. This conforms to K8s design philosophy and specifications, but in production scenarios with higher efficiency demands it does not meet enterprise-level requirements. SAE instead uses the in-place upgrade strategy of CloneSet, an Alibaba open-source component: rather than rebuilding the whole Pod, only the container inside is rebuilt, eliminating the scheduling and init-container creation steps and improving deployment efficiency by 42%.
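For reference, OpenKruise's CloneSet exposes in-place upgrade through its update strategy; a minimal manifest might look like the following (the app name, image, and replica count are illustrative):

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
metadata:
  name: demo-app
spec:
  replicas: 3
  updateStrategy:
    # Rebuild only the container in place when the image changes,
    # instead of recreating the Pod, skipping rescheduling and
    # init-container creation.
    type: InPlaceIfPossible
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: app
        image: registry.example.com/demo-app:v2
```

With this strategy, updating `image` triggers an in-place container restart while the Pod's identity, IP, and node placement are preserved.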
2. SAE's extreme elasticity construction: elastic scale-out
SAE also implements parallel scheduling and image pre-warming. In the standard flow, the steps from scheduling to pulling the user image are serial. SAE optimizes this: as soon as it determines which physical machine the Pod will land on, it begins pulling the user's image in parallel, achieving a further 30% improvement in elastic efficiency.
3. SAE's extreme elasticity construction: Java startup acceleration
In the application-startup phase, we have also improved elasticity and efficiency. For example, Java applications have always had the pain point of slow startup in Serverless scenarios. The core reason is that Java loads classes one by one, and in enterprise applications loading thousands of classes is inevitably a slow process.
SAE works with Alibaba's open-source Dragonwell to implement App CDS (Class Data Sharing): on the application's first launch, the loaded classes are archived into a compressed package, and subsequent launches load that archive instead of loading a large number of classes serially, achieving a 45% improvement in deployment efficiency.
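For orientation, the standard OpenJDK AppCDS flow (which Dragonwell builds on) looks roughly like the following command sketch; `app.jar` and `Main` are placeholders for a real application:

```shell
# 1. Record which classes the application loads on a trial run.
java -XX:DumpLoadedClassList=app.lst -cp app.jar Main

# 2. Dump those classes into a shared archive.
java -Xshare:dump -XX:SharedClassListFile=app.lst \
     -XX:SharedArchiveFile=app.jsa -cp app.jar

# 3. Start the application from the archive instead of loading
#    classes one by one.
java -Xshare:on -XX:SharedArchiveFile=app.jsa -cp app.jar Main
```

The archive is memory-mapped at startup, so repeated launches skip most of the parse-and-verify work of serial class loading.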
4. SAE's extreme elasticity construction: runtime
Finally, we also made elasticity enhancements for the application's running state. Microservice applications usually configure many threads, and these threads usually map one-to-one onto underlying Linux threads; in high-concurrency scenarios this incurs large thread-switching overhead. SAE uses the WISP coroutine technology in Alibaba's open-source Dragonwell to map hundreds of upper-layer threads onto a dozen or so underlying threads, greatly reducing the cost of thread switching.
The figure above shows our stress-test data: the red line uses Dragonwell with WISP, and operating efficiency improves by about 20%.
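The kind of workload that benefits can be sketched as a thread-per-task program (an illustration; the class is hypothetical). On a standard JVM each Java thread is backed by one Linux thread; on Dragonwell with WISP enabled (reportedly via `-XX:+UnlockExperimentalVMOptions -XX:+UseWisp2`, an assumption based on Dragonwell's documentation), the same code is multiplexed onto a small pool of OS threads, cutting switch overhead without source changes.

```java
import java.util.concurrent.atomic.AtomicInteger;

// A thread-heavy workload: one Java thread per task.
public class ManyThreads {
    public static int runTasks(int n) {
        AtomicInteger done = new AtomicInteger();
        Thread[] workers = new Thread[n];
        for (int i = 0; i < n; i++) {
            workers[i] = new Thread(done::incrementAndGet);
            workers[i].start();
        }
        for (Thread t : workers) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return done.get(); // number of completed tasks
    }
}
```

Because WISP works at the JVM level, such code needs no rewrite to coroutines to benefit, which is what makes the optimization transparent to users.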
The above covers some of SAE's technical principles and results in Serverless IaaS and K8s hosting, and in elasticity and efficiency.
Summary and outlook
Originally, microservice users had to build many components themselves, including PaaS-level microservice technical frameworks, operation and maintenance of IaaS and K8s, and observability components. SAE provides an overall solution for all of these, so that users only need to pay attention to their own business systems, which greatly lowers the threshold for adopting microservice technology.
Going forward, SAE will continue to build out each module, including:
- For zero-threshold PaaS, SAE will continue to integrate cloud products, including the CI/CD tool chain, and will enhance enterprise-level capabilities such as approval flows.
- For the Serverless operations-free experience and extreme elasticity, SAE will keep building more elasticity capabilities, elasticity metrics, and elasticity efficiency, and will provide solutions such as AI-based predictive scaling to reduce the mental burden of configuring elasticity metrics.
- For the microservice ecosystem, SAE will integrate further with the enterprise microservice suite, for example through chaos engineering and enhanced remote debugging, to further lower the threshold for using microservice technology.
Finally, on observability: since SAE in effect operates and maintains users' applications, observability is a very important capability for the platform itself, and we will continue building monitoring and alerting, contingency plans, and grayscale capabilities. For users hosting their applications on SAE, the product must also lower the threshold on this front, and an application market, event center, and more will be built in the future.