Header image

Written by: Xi Yang
Review and proofreading: Tianyuan, Haizhu
Editing & Typesetting: Wen Yan

Cloud native is becoming an accelerator for enterprise business innovation and for solving challenges of scale.

The changes brought by cloud native are by no means limited to technical aspects such as infrastructure and application architecture; they also reshape R&D concepts, delivery processes, and the way IT organizations work, driving change in enterprise IT organization, processes, and culture. Behind the growing popularity of cloud-native architecture, the DevOps culture, together with the automation tools and platform capabilities that support its implementation, has played a key role.

The new collaboration interface between development and operations brought by cloud native

Compared with cloud native, DevOps is nothing new, and its practice has long permeated modern enterprise application architectures. DevOps emphasizes communication and rapid feedback between teams; by building automated continuous delivery (Continuous Delivery) and pipelined application release processes, it enables organizations to respond quickly to business needs, deliver products faster, and improve delivery quality. With the large-scale adoption of container technology in enterprises, capabilities such as the programmable infrastructure of cloud computing and the declarative API of Kubernetes have accelerated the convergence of development and operations roles.

The general trend of cloud native has made moving to the cloud standard practice for enterprises, and defining the next-generation R&D platform around cloud native has become inevitable. It has also forced further changes in IT organization: new platform engineering teams have begun to emerge. In this context, how to implement DevOps more efficiently in a cloud-native environment has become a new topic and a new demand.

1.png

Evolution trends of the next-generation DevOps platform

As the Kubernetes ecosystem gradually matures from the infrastructure layer up to application-layer capabilities, platform engineering teams can more easily build application platforms tailored to business scenarios and the actual needs of end users. At the same time, this brings challenges and friction to the application developers sitting on top of those platforms.

The Kubernetes ecosystem offers a rich pool of capabilities, but the community lacks a scalable and convenient way to introduce a consistent upper-level abstraction that models application delivery across the hybrid and distributed deployment environments typical of cloud-native architecture. Without such an abstraction over the application delivery process, the complexity of Kubernetes cannot be shielded from application developers.

The following figure shows a typical cloud-native DevOps pipeline. It starts with code development: the code is hosted on GitHub, and Jenkins is connected for unit testing, at which point the basic development work is done. Next comes building the container image, which involves configuration, orchestration, and so on. In the cloud-native world, Helm can be used to package the application, and the packaged application is then deployed to each environment. But the whole process faces many challenges.

2.png

First, different environments require different operations capabilities. Second, to create a database on the cloud during configuration, you have to open the cloud console and create it there, and you also need to configure load balancing. After the application is started, additional functions still need to be configured, including logs, policies, security protection, and so on. Clearly, the cloud resource experience and the DevOps platform experience are separated, and the process is full of detours through external platforms and consoles. This is very painful for newcomers.

The traditional DevOps model before containers relied on different processes and workflows. Container technology is built with DevOps in mind, and the abstractions containers provide change how we think about DevOps, because the rise of microservices changes how traditionally architected applications are developed. This means following the best practices of running containers on Kubernetes, and extending DevOps into GitOps and DevSecOps, so that DevOps under cloud native becomes more efficient, secure, stable, and reliable.

OAM (Open Application Model) attempts to provide a modeling language for cloud-native applications that separates the concerns of developers and operators, so that the complexity of Kubernetes does not have to be passed on to developers, while operators provide modular, portable, and extensible trait components to support all kinds of complex application delivery scenarios. This brings agility and platform independence to cloud-native application delivery. KubeVela, its complete implementation on Kubernetes, has been recognized by the industry as a core tool for building next-generation continuous delivery and DevOps practices.
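To make the separation of concerns concrete, here is a minimal, illustrative KubeVela Application (the webservice component type and the scaler trait are built into KubeVela, but the image, port, and replica values here are placeholders): the developer describes what to run as a component, while operators attach traits that describe how it runs.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: demo-app
spec:
  components:
    - name: frontend            # developer's concern: what to deliver
      type: webservice          # a component type provided by the platform team
      properties:
        image: nginx:1.21       # placeholder image
        port: 80
      traits:                   # operator's concern: how it runs
        - type: scaler
          properties:
            replicas: 3
```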

3.png

Recently, at the 2021 Yunqi Conference, Alibaba Cloud released AppStack, the application delivery platform of its Yunxiao DevOps product, aiming to further accelerate enterprise adoption of cloud-native DevOps at scale. According to the AppStack R&D team, the platform was designed from the start to fully support native Kubernetes and OAM/KubeVela, with no binding to and no intrusion into the application deployment architecture, so enterprises do not have to worry about migration or technical transformation costs. This also marks KubeVela as an important cornerstone of application delivery in the cloud-native era.

Building an application-centric delivery system based on KubeVela

4.png

With the rapid spread of cloud-native concepts, hybrid-environment deployment (hybrid cloud / multi-cloud / distributed cloud / edge) has become the inevitable choice for most enterprise applications, SaaS services, and continuous application delivery platforms. Cloud-native technology itself is also trending towards "consistent, cross-cloud, cross-environment application delivery".

5.png

KubeVela, an out-of-the-box application delivery and management platform for modern microservice architectures, has officially released version 1.1. This version focuses on the application delivery process in hybrid environments, bringing out-of-the-box capabilities such as multi-cluster delivery, delivery workflow definition, grayscale release, public cloud resource access, and a more user-friendly experience. It helps developers move directly from the early stage of "static configuration, templates, and glue code" to a next-generation, workflow-centric delivery experience that is "automated, declarative, based on a unified model, and naturally oriented to multiple environments".

Based on KubeVela, users can easily handle the following scenarios:

Multi-environment, multi-cluster application delivery

Multi-environment, multi-cluster delivery on Kubernetes has become a standard requirement. Starting from version 1.1, KubeVela not only implements multi-cluster application delivery, but can also work independently, managing multiple clusters directly, or integrate with multi-cluster management tools such as OCM and Karmada to perform more complex delivery actions. On top of the multi-cluster delivery policy, users can also control the order, preconditions, and other workflow steps of delivery to different clusters by defining a Workflow.
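As a minimal sketch of what this looks like (the env-binding policy and its placement fields follow the KubeVela 1.1-era documentation and may differ in later versions; the cluster names and image are placeholders), an application can declare which cluster each environment targets:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: multi-env-demo
spec:
  components:
    - name: server
      type: webservice
      properties:
        image: example-registry/server:v1   # placeholder image
        port: 8080
  policies:
    - name: multi-env-policy
      type: env-binding                     # binds the application to multiple environments
      properties:
        envs:
          - name: staging
            placement:                      # which cluster this environment lands on
              clusterSelector:
                name: cluster-staging
          - name: prod
            placement:
              clusterSelector:
                name: cluster-prod
```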

Define the delivery workflow (Workflow)

Workflow has many concrete usage scenarios. For example, in multi-environment application delivery, users can define the order of delivery to different environments and the preconditions for each. KubeVela's workflow is CD-oriented and declarative, so it can serve as a CD system that interfaces directly with CI systems (such as Jenkins), or it can be embedded in an existing CI/CD system as an enhancement and supplement; the adoption path is very flexible.

In the model, a Workflow is composed of a series of Steps. In the implementation, each Step is an independent capability module, and its type and parameters determine what that step does. In version 1.1, KubeVela's built-in Steps are already fairly rich and easy to extend, helping users connect existing platform capabilities and migrate seamlessly.
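Continuing the hypothetical multi-environment application above, a workflow is simply an ordered list of such steps; deploy2env and suspend are built-in step types from the 1.1-era documentation (names and fields may differ across versions):

```yaml
  workflow:
    steps:
      - name: deploy-staging      # deliver to the staging environment first
        type: deploy2env
        properties:
          policy: multi-env-policy
          env: staging
      - name: manual-approval     # pause until a human resumes the workflow
        type: suspend
      - name: deploy-prod         # then deliver to production
        type: deploy2env
        properties:
          policy: multi-env-policy
          env: prod
```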

Application-centric cloud resource delivery

KubeVela is designed from an "application-centric" perspective, helping developers manage cloud resources more conveniently and in a fully serverless way, instead of struggling with individual cloud products and consoles. In terms of implementation, KubeVela integrates Terraform as its cloud resource orchestration tool and can support the deployment, binding, and management of hundreds of different types of cloud services from various cloud vendors under a unified application model.

In terms of use, KubeVela currently divides cloud resources into the following three categories:

  • As components: databases, middleware, SaaS services, and so on. For example, the alibaba-rds service in KubeVela falls into this category (see the sketch after this list)
  • As operational (O&M) traits: services such as log analysis, monitoring dashboards, alerting, and so on
  • As application runtime infrastructure: managed Kubernetes clusters, SLB load balancing, NAS file storage services, and so on
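As a minimal sketch of the component category (the alibaba-rds component type comes from the Terraform-based Alibaba Cloud addon; the property names follow the KubeVela documentation, and the concrete values here are placeholders), a cloud database can be declared alongside the workload that uses it:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: app-with-cloud-db
spec:
  components:
    - name: sample-db
      type: alibaba-rds                 # Terraform-backed cloud resource component
      properties:
        instance_name: sample-db
        account_name: oam_demo          # placeholder account name
        password: ChangeMe_123          # placeholder; use a Secret in real use
        writeConnectionSecretToRef:
          name: db-conn                 # connection info is written into this Secret
    - name: web
      type: webservice
      properties:
        image: example-registry/web:v1  # placeholder image
        port: 80
```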

GitOps continuous delivery practices that are easier to implement

As a declarative application delivery control plane, KubeVela can naturally be used for GitOps (on its own, or together with tools such as ArgoCD), providing more end-to-end capabilities and enhancements for GitOps scenarios and helping the GitOps philosophy land in the enterprise in a more developer-friendly way that solves practical problems. These capabilities include the following (a brief sketch of the pattern follows the list):

  • Define application delivery workflow (CD pipeline)
  • Handle various dependencies and topologies in the deployment process
  • Provide a unified upper-level abstraction on top of the semantics of various existing GitOps tools to simplify the application delivery and management process
  • Unified declaration, deployment and service binding of cloud services
  • Provide out-of-the-box delivery strategies (canary, blue-green release, etc.)
  • Provide out-of-the-box hybrid environment/multi-cluster deployment strategies (placement rules, cluster filtering rules, cross-environment promotion, etc.)
  • Provide Kustomize-style patches in multi-environment delivery to describe deployment differences, without users having to learn any details of Kustomize itself
  • ……
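As a minimal sketch of the GitOps pattern (assuming the FluxCD/GitOps addon is installed; the kustomize component type and its fields are approximate and depend on the addon version, and the repository URL and path are placeholders), the desired state of an application can simply point at a Git repository, which KubeVela keeps continuously synchronized:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: gitops-demo
spec:
  components:
    - name: config-from-git
      type: kustomize               # component type provided by the GitOps (FluxCD) addon
      properties:
        repoType: git
        url: https://github.com/example-org/example-config   # placeholder repository
        branch: main
        path: ./apps/demo           # directory in the repository holding the manifests
        pullInterval: 1m            # how often to reconcile against Git
```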

KubeVela 1.2 will be released soon

Continuing to build an enterprise-grade application operating system that is naturally oriented to hybrid environments, and letting developers enjoy the process of delivering applications, is the goal and vision of the KubeVela project. In the upcoming version 1.2, KubeVela will bring an application-centric console UI to support convenient enterprise application assembly, distribution, and delivery, providing developers with an even simpler application delivery experience while covering edge application delivery and more usage scenarios.

6.png

KubeVela 1.2 will be released at KubeCon China in December 2021. Stay tuned to the KubeVela community and Alibaba Cloud's cloud-native updates!

You can learn more about KubeVela and the OAM project in the following ways:

1) Project code base:
github.com/oam-dev/kubevela
Welcome to Star/Watch/Fork!

2) Project official homepage and documentation:
kubevela.io/
Starting from version 1.1, documentation is available in both Chinese and English, and developers are welcome to contribute translations in more languages.

3) Project DingTalk Group: 23310022; Slack: CNCF #kubevela Channel

4) Join the WeChat group: scan the QR code below to add the maintainer's WeChat account, and note that you would like to join the KubeVela user group:
7.png

