Author: Yichuan
Review & Proofreading: Xiao Jiang, Xiaohang
Editing & Typesetting: Wen Yan
Introduction to Microservice Architecture
The birth of microservice architecture
In the early Web 1.0 era of the Internet, monolithic applications were the norm. R&D teams were relatively small, and the products were mainly outward-facing web pages such as news portals. In the Web 2.0 era of the new century, the number of Internet users surged and a batch of giant-level Internet products, such as e-commerce and social networking, emerged, backed by R&D teams of hundreds or even thousands of people. In this scenario, traffic and business complexity changed qualitatively compared with the previous era, and the drawbacks of monolithic services, such as poor R&D efficiency, were exposed.
At this time an architecture called SOA appeared. Its architectural idea is very similar to that of microservices, except that it relies on a centralized component similar to an ESB. Alibaba's HSF, as well as the later open-sourced Dubbo, were born at this stage.
With the arrival of the mobile Internet era, all kinds of apps were born and daily life became fully Internetized. Large traffic, high concurrency, and large-scale R&D teams became more and more common, and the corresponding demands on technology and productivity gradually increased. At this point, the concept of microservices came into being.
In fact, microservices have run through the entire evolution of architecture. In the Java technology stack, frameworks such as Spring Cloud and Dubbo are already very popular. It is not hard to see that society as a whole has entered a stage of rapid digitalization, which brings bigger problems: growing traffic, rising application complexity, expanding R&D teams, higher demands on efficiency, and so on.
Monolithic period, version 1.0
Most companies, or most businesses in their early days, have gone through a process like this (as shown in the figure): the client first comes in through a single entrance. In the figure, SLB is Alibaba Cloud's load-balancing service and serves as the network entrance; it routes requests to the corresponding monolithic service running on ECS (Alibaba Cloud's virtual machine), and at this stage all instances share one database. This is the first period.
Monolithic period, version 2.0
In the second period came the SOA architecture. The idea of divide and conquer appears here, and some of the business is split up. However, the services and the underlying layers are not split; for example, the storage databases are not separated, and in essence everything still shares one set of databases, so it remains a monolithic architecture.
Microservice period
In the microservice period (as shown in the figure), the client accesses the gateway through SLB, requests are forwarded to the corresponding services, and calls occur between services; each service has its own separate database or cache, and every service handles registration, discovery, and configuration through a component such as Nacos.
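To make the registration and discovery step concrete, here is a minimal sketch, assuming Spring Cloud Alibaba with the spring-cloud-starter-alibaba-nacos-discovery dependency on the classpath; the service name order-service and the Nacos address are invented for illustration.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// application.properties (illustrative values):
//   spring.application.name=order-service
//   spring.cloud.nacos.discovery.server-addr=127.0.0.1:8848

@SpringBootApplication
@EnableDiscoveryClient // register this instance with the configured registry (Nacos here)
public class OrderServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}
```

Once registered, other services can resolve order-service by name through Nacos rather than by a hard-coded address, which is what enables the service-to-service calls in the figure.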
With the introduction of microservices, architecture and business are decoupled, and each R&D team can specialize in a particular domain or business. From the perspective of the overall architecture, however, the system is actually more complicated than before, which also brings operation and maintenance problems.
In a monolithic architecture, problems such as unclear boundaries, module coupling, and a shared code base easily lead to conflicts, and if the team is large, collaboration efficiency is relatively low. The core of microservice architecture, by contrast, is decoupling: if the services are truly decoupled after the split, the efficiency of the development team can be unleashed.
Development of Microservice Architecture in the Cloud Native Era
Microservice technology in the cloud-native era
Cloud native is a very broad concept. If we take microservices as the starting point and look at the changes and evolution that cloud native brings to them, it helps us better understand what cloud native is.
What, in essence, distinguishes microservices from a monolithic application? (As shown in the figure.) Microservices split one giant application into several tiny services that collaborate to deliver the same business capabilities as the original monolith. Dependencies then form between the microservices, and the services need to be deployed on one or more resources, the resources here being computing resources.
In the past, the relationship between a monolithic application and its resources was very simple, and coordination inside the monolith was purely internal, with no dynamic external dependencies. After the architecture is converted to microservices, however, the explosion of external dependencies and node counts turns the whole system into a mesh that is very complicated to manage. More than 50% of enterprises report that the biggest challenge in adopting microservice architecture is complex operation and maintenance, that is, managing the entire service life cycle.
Today it is widely accepted that the foundation of cloud native lies in containers and container orchestration (K8s), and container and K8s technology can help us solve the complicated operation and maintenance problems in a microservice system.
First of all, there is heterogeneity between microservices: to maximize the effectiveness of the microservice system, a team may allow its smaller teams to use different programming languages and runtime environments. When we first operated and managed microservices, there was no uniform standard for dealing with these heterogeneous environments. This drove the rise of cloud-native container technology, whose role is to constrain the deployment of microservices through a standardized layer of runtime and packaging. From the perspective of life cycle and management, the differences between individual microservices are thereby reduced, which greatly benefits resource scheduling.
Subsequently, container platforms emerged on top of container scheduling to manage containers. Take K8s: it can run microservices on the underlying resources in a standard and convenient way, and it uniformly encapsulates storage, compute, and networking behind a layer of abstraction, making it something like the operating system of the cloud-native era.
What specific help does it provide? K8s has a concept called the Pod. A Pod is a combination of a group of containers, coupled to the life cycle of the microservice entity, and a Pod can run one or more containers.
Under a microservice architecture, the running body of the microservice is generally placed in the main container, that is, the main logic of the microservice lives there. The life cycle of the main container is then completely coupled with the life cycle of the Pod: when the Pod dies, the running body of the microservice dies with it. In addition, we often run Sidecar containers, which provide auxiliary functions for the main container, such as log collection, network proxying, and identity authentication. Besides delivering its own core business, the microservice can thus dynamically gain additional auxiliary capabilities, which makes managing microservices more stable and convenient.
The Pod model also provides many very useful functions. One is status information: a Pod exposes a standard interface that reports its runtime status, which can be used to determine the running state of the microservice or container, for example whether it is alive and whether the business is ready to receive traffic; this gives the Pod a guarantee of overall stability. Another is the address service: each Pod has a standardized DNS address, which is very helpful for APIs that need to be exposed uniformly and for log monitoring and tracing; by accessing the DNS address and the exposed observability information, runtime problems can be discovered quickly. To sum up: containers and container platforms equip microservices with more capabilities at the micro level.
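As a sketch of the Pod model just described (not code from the article), the snippet below uses the fabric8 Kubernetes Java client to declare a Pod with a main container, a log-collecting Sidecar, and a readiness probe, the standard status interface the platform consults before routing traffic. The image names, health path, and port are illustrative.

```java
import io.fabric8.kubernetes.api.model.IntOrString;
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;

public class PodSketch {
    public static void main(String[] args) {
        Pod pod = new PodBuilder()
            .withNewMetadata()
                .withName("order-service")
                .addToLabels("app", "order")
            .endMetadata()
            .withNewSpec()
                // Main container: the running body of the microservice itself.
                .addNewContainer()
                    .withName("main")
                    .withImage("registry.example.com/order:1.0")
                    // Readiness probe: tells the platform whether the business
                    // is ready to receive traffic.
                    .withNewReadinessProbe()
                        .withNewHttpGet()
                            .withPath("/healthz")
                            .withPort(new IntOrString(8080))
                        .endHttpGet()
                    .endReadinessProbe()
                .endContainer()
                // Sidecar container: an auxiliary capability such as log collection.
                .addNewContainer()
                    .withName("log-agent")
                    .withImage("registry.example.com/log-agent:1.0")
                .endContainer()
            .endSpec()
            .build();
        System.out.println("declared pod: " + pod.getMetadata().getName());
    }
}
```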
The figure shows four release models:
- Rolling update
- Fixed update
- Blue-green deployment
- Canary release (grayscale release)
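As a concrete example of the first model, a rolling update is expressed declaratively and executed by the platform. Below is a hedged sketch, again with the fabric8 client, of a Deployment whose strategy replaces instances gradually; the replica count, labels, images, and surge limits are invented for illustration.

```java
import io.fabric8.kubernetes.api.model.IntOrString;
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;
import java.util.Map;

public class RollingUpdateSketch {
    public static void main(String[] args) {
        Deployment deployment = new DeploymentBuilder()
            .withNewMetadata().withName("order-service").endMetadata()
            .withNewSpec()
                .withReplicas(4)
                .withNewSelector().withMatchLabels(Map.of("app", "order")).endSelector()
                .withNewStrategy()
                    .withType("RollingUpdate")
                    .withNewRollingUpdate()
                        .withMaxUnavailable(new IntOrString(0)) // never drop below the desired ready count
                        .withMaxSurge(new IntOrString("25%"))   // add at most 25% extra replicas at a time
                    .endRollingUpdate()
                .endStrategy()
                .withNewTemplate()
                    .withNewMetadata().withLabels(Map.of("app", "order")).endMetadata()
                    .withNewSpec()
                        .addNewContainer()
                            .withName("main")
                            .withImage("registry.example.com/order:2.0") // the new version being rolled out
                        .endContainer()
                    .endSpec()
                .endTemplate()
            .endSpec()
            .build();
        System.out.println("declared deployment: " + deployment.getMetadata().getName());
    }
}
```

Blue-green and canary releases build on the same primitives: two parallel deployments plus routing rules that shift traffic either wholesale or gradually.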
Traffic governance
Microservices turn the static call relationships of the monolithic period into a dynamic runtime. The communication and collaboration between services usually need to be governed separately, and the microservice framework helps us abstract and implement the functions common to every service.
This abstraction covers two aspects: business logic on one side, and communication, traffic, and service-governance capabilities on the other. We can abstract the underlying general capabilities into a concrete framework, but the frameworks of different microservices cannot call each other directly, even though in the cloud-native era microservices may be developed with different languages and programming models.
Service Mesh exists to solve the problem of traffic governance in multi-language, multi-environment scenarios.
On the data plane, the Sidecar is responsible for hijacking, forwarding, and managing traffic. The classic Sidecar implementation of this function is Envoy.
As shown in the figure, the common capabilities are first abstracted out of the framework and decoupled from the business, then placed in the Sidecar and governed through the communication and forwarding between Sidecars. This makes the problem much simpler: developers only deal with the Sidecar for traffic management, and microservice instances built on different technology stacks can communicate with each other.
Beyond the data plane, we also need support from the control plane: a component is needed to manage the policies and rules of the original microservice system, and the classic implementation is Istio. Capabilities originally built into the microservice system, such as service registration, service discovery, and traffic observation, are taken over by the control plane. Together these form a Service Mesh: the Pods, the single points on the data plane, have their traffic governed, form a mesh structure, and act as a cluster that achieves traffic distribution, security, and observation.
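To make the data-plane idea tangible, here is a toy Sidecar in plain Java: it accepts connections on a local port and blindly forwards bytes to an upstream instance. This is only a skeleton of what Envoy does; a real Sidecar parses protocols and applies the routing, security, and observation policies pushed down by the control plane. The ports and address are illustrative.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ToySidecarProxy {
    public static void main(String[] args) throws IOException {
        int listenPort = 15001;            // port the app's outbound traffic is redirected to
        String upstreamHost = "127.0.0.1"; // where the target service instance lives
        int upstreamPort = 8080;

        try (ServerSocket server = new ServerSocket(listenPort)) {
            while (true) {
                Socket inbound = server.accept();
                Socket outbound = new Socket(upstreamHost, upstreamPort);
                // One thread per direction, copying bytes verbatim. A real Sidecar
                // would parse the protocol and apply traffic-governance policy here.
                pump(inbound, outbound);
                pump(outbound, inbound);
            }
        }
    }

    private static void pump(Socket from, Socket to) {
        new Thread(() -> {
            try (InputStream in = from.getInputStream();
                 OutputStream out = to.getOutputStream()) {
                in.transferTo(out); // blocks until the connection closes
            } catch (IOException ignored) {
                // connection torn down; a real proxy would record and report this
            }
        }).start();
    }
}
```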
The programming model shown in the figure is related to Function Compute.
Request-driven means scaling elastically based on requests and simplifying the logic of request processing. When a microservice is invoked, incoming traffic is distributed to different microservice instances through layer-4 or layer-7 load balancing. Within a single microservice instance's process there are generally two kinds of logic. The first is request management: it may be an HTTP server, some handlers, or some queue management, the components that make up the request-distribution capability. These components eventually hand the request over to the second part, request processing, which is the logic the developer really needs to implement.
For example, Java, Go, and Python each have their own request-management logic, and request management is strongly coupled with request processing: a single instance contains both. Under this architecture there is no globally independent control layer that can perceive requests and govern traffic; only the processing layer of each instance interprets requests. Even if a microservice instance is already overloaded, it is difficult to forward its requests to other instances for a second round of load balancing. The request-driven approach is therefore to split apart and decouple these two elements; what developers are really doing is request-driven decoupling.
As shown in the figure, requests from external systems are first standardized by an adapter; after standardization they go into a request load balancer, which understands the semantics of the request itself and then drives the processing. When the processing units are not enough, the manager can scale them out; when there are too many logical units, it can also scale them back in. This forms dynamic management and saves developers a great deal of cost.
Request-driven model:
- Request standardization
- Request routing
- Processing management
The combination of request standardization, request routing, and processing management is consistent with the concept of Serverless: developers do not need to care about the server at all and only need to focus on business logic. This is in fact the process by which the microservice system converges with a platform-based serverless architecture. Alibaba Cloud's FC (Function Compute) and SAE (Serverless App Engine) both focus on solving these problems.
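A minimal sketch of the three elements above, with all names and thresholds invented for illustration: an adapter standardizes incoming events, a queue plays the role of the request load balancer, and a manager grows the worker pool as the backlog grows. A real platform such as FC or SAE would also scale back in, potentially to zero.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class RequestDrivenSketch {
    record Request(String payload) {}                 // the standardized request shape

    static final BlockingQueue<Request> queue = new LinkedBlockingQueue<>();
    static final AtomicInteger workers = new AtomicInteger();

    // 1) Request standardization: adapt an external event into the standard form.
    static void accept(String rawHttpBody) {
        queue.add(new Request(rawHttpBody));
        scale();                                      // 3) processing management reacts to load
    }

    // 2) Request routing: workers pull from the queue; the platform, not the
    // business code, decides which worker handles which request.
    static void startWorker() {
        workers.incrementAndGet();
        new Thread(() -> {
            try {
                while (true) handle(queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
    }

    // Scale out when the backlog outgrows the pool (threshold is arbitrary here).
    static void scale() {
        if (queue.size() > workers.get() * 10) startWorker();
    }

    // The only part the developer actually writes: pure business logic.
    static void handle(Request r) {
        System.out.println("processing " + r.payload());
    }

    public static void main(String[] args) {
        startWorker();
        for (int i = 0; i < 100; i++) accept("event-" + i);
    }
}
```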
Best practices of microservices + serverless
Serverless has actually gone through many years of development; the concept can be traced back to 2012, and in 2014 AWS officially launched Lambda, setting off the Serverless wave, which was then followed by a period of quiet development. Why did this happen? One analysis is that the development model of function compute differs greatly from the original one: it is better suited to front-end scenarios and request-based processing than to long-running applications. Services or application architectures that need to run for a long time were therefore unable to enjoy the elasticity, cost reduction, and efficiency gains brought by Serverless.
Pain points of microservice architecture
The pain point of microservices is stability. Microservices bring in many additional components, such as service discovery and various tool-like products, all of which are more complicated than in the monolithic case because the whole architecture becomes a mesh. To some extent, containers and container platforms help us host the operation and maintenance of microservices, but they themselves, container technology and K8s, carry a certain complexity of their own.
K8s architecture diagram
K8s is not only complex in itself; it also brings some pain points:
- Differences in container image deployment methods
- The complexity of K8s component operation and maintenance
- Learning cost
What attracts developers most is being able to focus on business logic without changing the original development method. The ideal state of microservices is that developers only need to pay attention to the business system in the architecture; the other parts, such as the gateway, the CI/CD release system, inspection processes, the registry, alarm monitoring, and log analysis, no longer require the developer's attention. The advantages can be summarized as:
- Let developers focus on business logic
- Do not change the original development method
- No need to care about and operate the underlying resources
- Elasticity that reduces idle costs
- Excellent tool chain
Summary
The microservice system has taken different forms across the eras of cloud computing. Initially, deployment happened on traditional IT facilities such as IDC machine rooms, which provided microservices with static physical computing resources.
The second step was the cloud-hosting era, the VM era we all know; in Alibaba's terms, ECS. It provides elastic computing resources, but nothing changed substantially: the resources became elastic, while the deployment, management, and operation of microservices did not change much in nature.
In the third stage, the cloud-native era, cloud platforms and cloud services take over the complex operations, configuration, and management, providing microservices with a running environment and platform. Users now only need to care about the business system and how to implement it. Making complex technology ever simpler, so that users no longer perceive the complicated operations and the platform takes over the repetitive, hard-to-maintain work on their behalf, is also in line with the overall direction of computer technology.
Author:
Yichuan | Alibaba Cloud Cloud Native Team
Yichuan works on the research and development of Alibaba Cloud's Serverless App Engine, focusing on aPaaS, microservices, distributed systems, serverless tool chains, and more, and is committed to building the next-generation serverless platform, so that developers of traditional applications can enjoy the technical dividends of Serverless and K8s with zero transformation and at low cost.
Related links:
1) Community official website
http://www.serverless-devs.com/
2) Project warehouse
https://github.com/Serverless-Devs/Serverless-Devs
3) Serverless Desktop desktop client
https://serverlessdevs.resume.net.cn/zh-cn/desktop/index.html
4) Serverless application developer kit
http://serverless-dk.oss.devsapp.net/docs/tutorial-dk/intro/react
5)Serverless Devs CLI
https://serverlessdevs.resume.net.cn/zh-cn/cli/index.html
6) Serverless Hub Application Center
https://serverlesshub.resume.net.cn/#/hubs/special-view