
Authors: Wang Chen, Mu Huan, Xi Yang, Hongliang, Zhang Lei, Zhi Min
Editing & Typesetting: Wine Circle

In essence, a container is an isolation technology, and it solves the problems its predecessor, virtualization, left unsolved: slow startup of the runtime environment and low resource utilization. The two core technologies behind containers, Namespace and Cgroup, address exactly these two problems. Namespace, as a lightweight isolation mechanism, replaces the Hypervisor and GuestOS: the runtime environment that used to span two operating systems now runs on one, making it lighter and faster to start. Cgroup, as a resource-limiting mechanism, restricts a process to consuming only a portion of the machine's CPU and memory.

Of course, container technology is also popular because it provides a standardized deliverable for software development: the container image. Only on the basis of container images can continuous delivery truly land.

Many more reasons for adopting container technology could be listed, so we won't repeat them here.

Meanwhile, cloud computing solves elastic scaling at the infrastructure layer, but it does not solve the problem of rapidly deploying PaaS-layer applications in batches as the underlying resources scale. Thus, container orchestration systems came into being.

According to third-party survey data, containers and Kubernetes have become the mainstream choice of the cloud-native era, yet putting them into practice remains difficult. We have tried to summarize some common pain points and countermeasures, which may serve as a reference for companies adopting container technology.

Where does the difficulty lie?

The advancement of containers and Kubernetes is unquestionable, but when large numbers of enterprises began to embrace Kubernetes, the de facto standard in container orchestration, they found themselves in a dilemma. "K8s is a double-edged sword: it is the best container orchestration technology available, but it also carries considerable complexity and a high barrier to entry, and this often leads to common mistakes." Even Google, the creator and core promoter of Kubernetes, admits the problem.

In an interview, Alibaba senior technical expert Zhang Lei analyzed the nature of Kubernetes. He pointed out: "Kubernetes itself is a distributed system rather than a simple SDK or programming framework, which raises its complexity to that of a system-level distributed open source project. Moreover, Kubernetes was the first to popularize the idea of the declarative API in open source infrastructure, and on that basis proposed a series of usage paradigms such as container design patterns and the controller model. These advanced and forward-looking designs also mean there is inevitably a learning curve before the Kubernetes project can be absorbed by the wider public."

We have roughly summarized four major sources of Kubernetes complexity.

1. Cognitive complexity: Kubernetes departs from the familiar back-end development system, extending a brand-new set of theories and introducing a series of brand-new technical concepts such as Pod, Sidecar, Service, resource management, scheduling algorithms, and CRDs. These concepts are designed mainly for platform engineering teams rather than application developers, and they provide many powerful and flexible capabilities. However, this brings a steep learning curve that hurts the application developer's experience; in many cases, misunderstanding these concepts leads to incorrect operations and even production failures.

2. Development complexity: K8s orchestrates and manages containers declaratively, which requires authoring YAML files; for complex applications, this extra step hurts developer productivity and agility. In addition, K8s lacks a built-in programming model, so developers must rely on third-party libraries to handle dependencies between services, which reduces development efficiency and adds unnecessary DevOps overhead.
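To make the declarative model concrete, here is a minimal sketch (expressed as a Python dict for brevity; the image name and labels are illustrative placeholders) of the kind of Deployment manifest a developer must author as YAML. K8s then continuously drives the cluster toward this declared state rather than executing imperative steps:

```python
# A minimal Kubernetes Deployment manifest, expressed as a Python dict.
# In practice this structure is written as YAML and applied with `kubectl apply -f`.
def make_deployment(name: str, image: str, replicas: int) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,            # desired state: how many Pods
            "selector": {"matchLabels": labels},
            "template": {                    # Pod template stamped out per replica
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {"name": name, "image": image}
                    ]
                },
            },
        },
    }

manifest = make_deployment("web", "example/web:1.0", 3)
```

Note that the manifest says nothing about *how* to reach three replicas; it only states the end result, which is exactly the declarative idea discussed above.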

3. Migration complexity: Migrating existing applications to K8s is complicated, especially for non-microservice architectures. In many cases specific components, or even the entire architecture, must be refactored and the application rebuilt along cloud-native principles, removing state dependencies (e.g. writing to local directories, ordering assumptions), network dependencies (e.g. hard-coded IPs), quantity dependencies (e.g. fixed replica counts), and so on.

4. Operations complexity: K8s's declarative API subverts the traditional procedural operations model; the declarative API corresponds to a final-state-oriented operations model. As K8s clusters grow in scale, and especially when they are built by hand on open source K8s, operational difficulty also grows linearly: cluster management, application release, monitoring, logging, and other links all pose severe challenges to cluster stability.
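The "final-state-oriented" model mentioned above is implemented by controllers that repeatedly reconcile the actual state toward the desired state. The following is a toy sketch of that control loop (the function names and one-Pod-per-step behavior are illustrative simplifications, not real K8s API calls):

```python
# Toy controller loop: converge the actual replica count toward the desired
# count one step at a time, the way a K8s controller reconciles state.
def reconcile(desired: int, actual: int) -> int:
    """Return the next actual replica count after one reconcile step."""
    if actual < desired:
        return actual + 1      # scale up: create one Pod
    if actual > desired:
        return actual - 1      # scale down: delete one Pod
    return actual              # already converged

def converge(desired: int, actual: int) -> list:
    """Run reconcile steps until actual == desired; record the trajectory."""
    history = [actual]
    while actual != desired:
        actual = reconcile(desired, actual)
        history.append(actual)
    return history

trajectory = converge(desired=3, actual=0)  # [0, 1, 2, 3]
```

The operator never issues "start Pod A, then Pod B"; they only change `desired`, and the loop converges, which is precisely what makes this model feel alien to operators used to procedural runbooks.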

Is there another solution?

Technology always has two sides. Containers have revolutionized cloud computing infrastructure and become the new computing interface. Kubernetes, in turn, has built a unified infrastructure abstraction layer on top of them: it shields the platform team from "compute", "network", "storage", and the other infrastructure concepts we once had to care about, so that we can easily build any vertical business system we want on Kubernetes without worrying about infrastructure-layer details. This is the fundamental reason Kubernetes is called the Linux of the cloud computing industry and "the Platform for Platforms".

But is operating Kubernetes directly the only way to apply container technology? The answer is no. Over the course of container technology's evolution, many open source projects and commercial products have emerged that lower the barrier to container orchestration. Below we introduce them one by one, ordered by how much manual work they take off your hands.

Open source tools around the Kubernetes ecosystem

OAM/KubeVela is an open source project hosted in the CNCF that aims to reduce the complexity of developing and operating applications on K8s. It was originally initiated by Alibaba Cloud and Microsoft.

KubeVela, as the standard implementation of the Open Application Model (OAM), is infrastructure-agnostic, natively extensible, and, most importantly, completely application-centric. In KubeVela, the "application" is designed as a first-class citizen of the platform. The application team only needs to work with a few cross-platform, cross-environment upper-level abstractions (components, operational traits, and workflows) to deliver and manage applications, without attending to any infrastructure details or differences. Platform administrators, meanwhile, can at any time use IaC (infrastructure as code) to configure the component types and operational capability sets the platform supports, adapting it to any application-hosting scenario.
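As an illustration of what those upper-level abstractions look like, below is a sketch of a KubeVela Application, written as a Python dict for brevity. The component type `webservice` and trait type `scaler` follow KubeVela's documented conventions, but the application name, image, and replica values are made up:

```python
# Sketch of a KubeVela Application: the app team declares components and
# operational traits, not raw Deployments/Services. All values are illustrative.
app = {
    "apiVersion": "core.oam.dev/v1beta1",
    "kind": "Application",
    "metadata": {"name": "demo-app"},
    "spec": {
        "components": [
            {
                "name": "frontend",
                "type": "webservice",   # component type provided by the platform
                "properties": {"image": "example/frontend:1.0", "port": 8080},
                "traits": [
                    # operational trait: horizontal scaling, attached declaratively
                    {"type": "scaler", "properties": {"replicas": 2}},
                ],
            }
        ]
    },
}

trait_types = [t["type"] for c in app["spec"]["components"] for t in c.get("traits", [])]
```

Compared with the raw Deployment manifest earlier in this article, nothing here mentions Pods, selectors, or labels; those details belong to the platform team's component and trait definitions.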

KubeVela is built entirely on K8s, so it is naturally easy to integrate and universally applicable: it transparently exposes all the capabilities of K8s and its ecosystem rather than layering another opaque abstraction on top. KubeVela therefore suits technical teams that already have some K8s platform development and operations capability and want to keep extending their platform using the full power of K8s.

Containers have evolved from an isolation technology into an ecosystem. Open source tools like KubeVela, which greatly reduce the complexity of using K8s, will gradually come into their own, letting developers enjoy the efficiency and convenience of cloud native without having to become K8s experts.

Sealer is an open source solution for packaging, delivering, and running distributed applications, and it greatly simplifies the delivery complexity and consistency problems of container projects. The artifact sealer builds is called a "cluster image", with K8s embedded inside. A cluster image can be pushed to a registry and shared with other users, and very general distributed software can also be found and used directly in the official repository.

Delivery is another pain point in the container ecosystem: it faces complex dependencies and consistency problems, and for industrial-grade Kubernetes delivery projects in particular, delivery cycles grow long while quality requirements stay high. Sealer is well suited to software developers, ISVs, and similar vendors, and can shorten deployment time to the hour level.

Open and standardized enterprise-level Kubernetes service

Most cloud vendors provide Kubernetes-as-a-Service container platforms, such as AWS EKS and Alibaba Cloud's ACK. These greatly simplify cluster deployment, operations, networking, storage, security management, and more; their CNCF-certified K8s services can meet the workload requirements of almost all scenarios and offer rich extension and customization capabilities. In addition, most cloud vendors build on the open source Kubernetes framework and wrap it to different degrees to fit enterprises of different backgrounds and scenarios, offering both standard and Pro editions. For example, Alibaba Cloud's ACK Pro provides managed masters and fully managed node pools, deeply integrates IaaS capabilities, is more efficient, more secure, and more intelligent, and ships best practices and full-stack optimizations for various container-cluster sub-scenarios as built-in services.

Judging from the existing user base, this is the mainstream choice for most Internet companies adopting container technology.

For more information, see: "Alibaba Cloud Container Service Multiple Heavy Releases: A New Generation Platform with Efficient, Intelligent, Secure and Unbounded".

Kubernetes services evolving toward serverless

Traditional Kubernetes adopts a node-centric architecture: the node is the carrier of Pods, and the Kubernetes scheduler selects a suitable node from the worker node pool to run each Pod.
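A toy version of that node-centric placement decision is sketched below, simplified to a single CPU dimension (the real kube-scheduler runs much richer filtering and scoring phases; node names and numbers here are invented):

```python
# Toy node-centric scheduler: pick the worker node with the most free CPU
# that can still fit the Pod's request. Returns None if nothing fits.
def schedule(pod_cpu, nodes):
    """nodes maps node name -> free CPU (cores). Returns a node name or None."""
    candidates = {name: free for name, free in nodes.items() if free >= pod_cpu}
    if not candidates:
        return None  # unschedulable: no node has room
    # scoring: prefer the node with the most free CPU
    return max(candidates, key=candidates.get)

pool = {"node-a": 1.5, "node-b": 3.0, "node-c": 0.5}
placement = schedule(pod_cpu=2.0, nodes=pool)   # "node-b"
```

The `None` case is what forces capacity planning in traditional K8s: someone must add nodes before such Pods can run, which is exactly the burden serverless Kubernetes removes.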

For serverless Kubernetes, the key idea is to decouple the container runtime from any specific node environment. Users no longer need to attend to node operations and security, which lowers operational cost; container elasticity is greatly simplified, since application Pods are created on demand without capacity planning; and at runtime, serverless containers are backed by the elastic compute infrastructure of the entire cloud, guaranteeing elasticity in both cost and scale.

Many cloud vendors have pushed the integration of containers and serverless further, for example Alibaba Cloud's serverless container service ASK and Google GKE Autopilot. By removing node operations, these services free customers from the complexity of managing K8s nodes and clusters: container applications can be deployed directly without purchasing servers, yet can still be deployed through the K8s command line and API, fully leveraging K8s orchestration, and are billed on demand by the CPU and memory resources the application is configured with.
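That on-demand billing model can be sketched as simple arithmetic: cost is proportional to the configured CPU and memory multiplied by actual running time. The unit prices below are invented purely for illustration; real pricing varies by vendor and region:

```python
# Illustrative pay-per-use cost model for a serverless container:
# cost = (vCPU * cpu_price + GiB * mem_price) * seconds. Prices are invented.
CPU_PRICE_PER_CORE_SECOND = 0.00005   # hypothetical currency units
MEM_PRICE_PER_GIB_SECOND = 0.00001

def usage_cost(vcpu, mem_gib, seconds):
    """Bill only for the resources configured and the time actually run."""
    return (vcpu * CPU_PRICE_PER_CORE_SECOND +
            mem_gib * MEM_PRICE_PER_GIB_SECOND) * seconds

# A 2 vCPU / 4 GiB workload that runs only 10 minutes a day is billed for
# those 600 seconds, not for an always-on server.
daily = usage_cost(vcpu=2, mem_gib=4, seconds=600)
```

For bursty workloads the `seconds` term dominates the comparison with a reserved server, which is why the Job-type tasks described next fit this model so well.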

This type of service is very good at handling Job-like tasks, such as algorithm model training in the AI field, and offers a development experience largely consistent with standard K8s, making it an excellent complement to the container service ecosystem.

For more information, see: "Serverless Kubernetes: Ideal, Reality and Future".

A new generation of PaaS services powered by container and serverless technology

Demand in the enterprise market is always layered and diverse, which is closely tied to the distribution of technical talent: not every company can build a sufficiently strong technical team, especially outside first-tier cities such as Beijing, Shanghai, Guangzhou, and Shenzhen. And a new technology is always rolled out in stages, which creates market space for more product forms.

Although K8s provides full lifecycle management of container applications, it is too rich, too complex, and too flexible, which is both an advantage and, at times, a disadvantage. This is especially true for engineers who, as in the virtual machine era, are used to managing applications from the application's own perspective: even though AWS EKS and Alibaba Cloud ASK have reduced K8s's operational complexity to some extent, they still hope the barrier to using container technology can be lowered further in some way.

Containers and K8s do not have to be used as a bundle. In some newer PaaS services, such as Alibaba Cloud's Serverless Application Engine (SAE), the underlying virtualization technology is swapped for container technology, fully exploiting container isolation to improve startup time and resource utilization, while the application management layer retains the original microservice application management paradigm, so users do not need to learn the large and complex K8s to manage their applications. This kind of PaaS service usually ships with a complete microservice governance suite built in: customers need not worry about framework selection, let alone data isolation, distributed transactions, circuit-breaker design, rate limiting and degradation, or the custom development forced on them when community maintenance is limited.

Moreover, once the underlying compute resources are pooled, their inherently serverless nature means users no longer need to purchase and maintain servers separately; they simply configure the CPU and memory their applications require. Container, serverless, and PaaS are thus combined into one, uniting technological advancement, optimized resource utilization, and an unchanged development and operations experience. Compared with the other options in this article, this kind of solution is characterized by a PaaS experience that makes adopting the new technology smoother.

Most traditional-industry companies, Internet companies whose technical strength lies at the business layer, and startups that do not want back-end constraints to slow business iteration will tend toward PaaS-form products. Regardless of company profile, PaaS-category services have a delivery advantage in scenarios such as the following:

  • A new project is being launched, and you want to validate it quickly, at low trial-and-error cost and with limited staffing investment;
  • Business volume is rising rapidly, users keep increasing, stability is starting to slip, and version releases and online application management are becoming daunting, but the team's technical reserves cannot respond to the changes in time;
  • You have decided to upgrade a monolithic architecture to microservices, but the team lacks microservice experts, and the project evaluation found the upgrade risk to be relatively high.

For more information, see: "Break the boundary of Serverless landing".

A more extreme serverless service: FaaS

With the emergence of FaaS, business scenarios with bursty, elastic demand have a better option. More and more large and medium-sized enterprises are carving out the execution units with elastic scaling requirements from their traditional back ends and running them on serverless architectures.

This makes FaaS (Function as a Service) an alternative to containers and K8s for general-purpose computing power.

Like serverless services such as Google Cloud Run and AWS App Runner, FaaS products are becoming more and more open, with fewer and fewer operational restrictions. Beyond event-driven computing models, they now also suit web monoliths, jobs, and other scenarios, helping users maximize elasticity and further improve compute resource utilization.
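In FaaS the unit of deployment is just a handler function invoked once per event, with the platform scaling instances up and down automatically. A minimal sketch follows; the event shape and handler signature here are generic illustrations, not any specific vendor's API:

```python
# Minimal FaaS-style handler: the platform calls `handler(event)` per
# request/event; the developer ships only this function, no servers.
import json

def handler(event):
    """Parse a JSON request body and return an HTTP-like response dict."""
    body = json.loads(event.get("body", "{}"))
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Simulate one invocation the way a platform would.
resp = handler({"body": json.dumps({"name": "container"})})
```

Because each invocation is independent and short-lived, the platform can run zero instances when idle and thousands under burst, which is the elasticity property the battle-validation example below depends on.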

For example, Lilith in the gaming industry applies Function Compute to battle validation: verifying whether a battle uploaded by a player's client involved cheating. Battle validation generally has to be computed frame by frame, which is very CPU-intensive: if a 1v1 battle takes n milliseconds, a 5v5 battle takes roughly 5n milliseconds, demanding high elasticity. In addition, under the container architecture, the SLB in front of the Pods used round-robin and could not sense actual Pod load, causing uneven load distribution, and infinite loops in validation posed stability risks.

Function Compute's scheduling system routes each of Lilith's requests appropriately, and for the infinite-loop problem it provides a timeout-and-kill mechanism, sinking the scheduler's complexity into the infrastructure. In addition, after deep optimization of Function Compute's cold starts, latency has dropped sharply: from scheduling, through acquiring compute resources, to service startup now takes roughly one second or so.

The emergence of FaaS has also freed startups' full-stack engineers from the DevOps work of hosting mini-programs, websites, and other web monoliths. For example, Function Compute lowers the server-side maintenance barrier for front-end languages such as Node.js: anyone who can write JS can maintain a Node service.

For more information, see: "Alibaba Cloud Function Compute releases 7 major technological breakthroughs, crossing industry stumbling blocks".

The right choice is the best choice

The more you demand, the more you must invest: that truth never changes. Once we have decided to introduce container technology, before using K8s we should think about why we need K8s.

If we want to exploit the full capabilities of K8s and the team has some technical reserves, then KubeVela is an ideal open source choice, and sealer can also help reduce delivery complexity. If we want to hand over the work of wrapping K8s's upper layers to cloud vendors, to varying degrees, so as to adapt to different business scenarios more efficiently, then the commercial container services offered by cloud vendors are a good choice. And if containers and K8s cannot satisfy elastic business demands, FaaS is an option.

But if our applications are not that complex and we simply want to simplify application lifecycle management and the underlying infrastructure, guarantee high business availability, and focus on business development, then we may not need K8s to orchestrate container applications at all. After all, K8s derives from Google's Borg, which was built to manage Google's enormous fleet of container applications.

Reference articles:

  1. "The Past and Present of Cloud Computing", Liu Chao
  2. "Flexible and Efficient Cloud Native Cluster Management Experience: Using K8s to Manage K8s", Huaiyou, Linshi
  3. "Will Complexity Be the 'Fatal Wound' of Kubernetes?", Zhao Yuying
  4. "Simplifying Kubernetes For Developers", Rishidot Research
  5. "KubeVela Officially Open Source: A Highly Extensible Cloud Native Application Platform and Core Engine", OAM project maintainers
  6. "KubeVela 1.0: Opening the Future of Programmable Application Platforms", OAM project maintainers

Related links:

1. KubeVela project:
https://github.com/oam-dev/kubevela

2. sealer project:
https://github.com/alibaba/sealer


Alibaba Cloud Native (阿里云云原生)