Introduction: This article reviews the concept and design ideas behind unitized deployment (UnitedDeployment). In edge computing scenarios, compute nodes are clearly distributed geographically, and the same application may need to be deployed on compute nodes in different regions.
Author | Zhang Jie (Bing Yu)
Source | Alibaba Cloud Native Official Account
Background
Before the main text begins, let's review the concept and design ideas behind unitized deployment. In edge computing scenarios, compute nodes have an obvious geographical distribution, and the same application may need to be deployed on compute nodes in different regions. Taking Deployment as an example, as shown in the figure below, the traditional approach is to first give compute nodes in the same region the same label, and then create multiple Deployments, each selecting a different label via its NodeSelector, so that the same application is deployed to each region that needs it.
However, as the geographical distribution grows, operations and maintenance become increasingly complicated, which manifests in the following ways:
- When the image version is upgraded, the image version configuration of a large number of related Deployments must be modified one by one.
- A custom Deployment naming convention is needed to indicate that these Deployments belong to the same application.
- There is no higher-level view for unified management and operation of these Deployments. Operational complexity grows linearly with the number of applications and regions.
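To make the maintenance burden concrete, here is a hypothetical Python sketch (the region names and manifests below are made up for illustration, not taken from a real cluster): with one plain Deployment per region, every image upgrade has to touch every manifest.

```python
# Hypothetical illustration: one plain Deployment manifest per region,
# each selecting nodes by a region label.
regions = ["beijing", "hangzhou", "shanghai"]

deployments = [
    {
        "metadata": {"name": f"test-{r}"},
        "spec": {
            "template": {
                "spec": {
                    "nodeSelector": {"region": r},
                    "containers": [{"name": "nginx", "image": "nginx:1.18.0"}],
                }
            }
        },
    }
    for r in regions
]

# Upgrading the image means editing every Deployment -- O(regions) changes,
# which is exactly the maintenance cost described above.
for d in deployments:
    d["spec"]["template"]["spec"]["containers"][0]["image"] = "nginx:1.19.3"
```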
To address these requirements and problems, the unitized deployment (UnitedDeployment) provided by OpenYurt's yurt-app-manager component manages these sub-Deployments through a higher-level abstraction: it creates, updates, and deletes them automatically, greatly reducing operational complexity.
yurt-app-manager component:
https://github.com/openyurtio/yurt-app-manager
As shown below:
UnitedDeployment provides a higher-level abstraction over these workloads. A UnitedDeployment contains two main configurations: a WorkloadTemplate and Pools. The workloadTemplate can be in either Deployment or StatefulSet format. Pools is a list; each entry configures one pool with its own name, replicas, and nodeSelector. A nodeSelector selects a group of machines, so in the edge scenario a Pool can simply be thought of as representing a group of machines in a certain region. With WorkloadTemplate + Pools, we can easily distribute a Deployment or StatefulSet application to different regions.
The following is a specific UnitedDeployment example:
```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: test
  namespace: default
spec:
  selector:
    matchLabels:
      app: test
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: test
      spec:
        selector:
          matchLabels:
            app: test
        template:
          metadata:
            labels:
              app: test
          spec:
            containers:
              - image: nginx:1.18.0
                imagePullPolicy: Always
                name: nginx
  topology:
    pools:
      - name: beijing
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - beijing
        replicas: 1
      - name: hangzhou
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - hangzhou
        replicas: 2
```
The specific logic of the UnitedDeployment controller is as follows:
The user defines a UnitedDeployment CR, which defines a DeploymentTemplate and two Pools.
- The DeploymentTemplate is in Deployment format. The image used in this example is nginx:1.18.0.
- Pool1 is named beijing, with replicas=1 and nodeSelector apps.openyurt.io/nodepool=beijing. This means the UnitedDeployment controller will create a child Deployment with replicas 1 and nodeSelector apps.openyurt.io/nodepool=beijing, with all other configuration inherited from the DeploymentTemplate.
- Pool2 is named hangzhou, with replicas=2 and nodeSelector apps.openyurt.io/nodepool=hangzhou. The controller will create a child Deployment with replicas 2 and nodeSelector apps.openyurt.io/nodepool=hangzhou, again inheriting the rest of its configuration from the DeploymentTemplate.
After the UnitedDeployment controller detects that a UnitedDeployment CR named test has been created, it first generates a Deployment template object from the DeploymentTemplate configuration, and then, based on the configurations of Pool1 and Pool2 plus that template object, generates two Deployment resource objects with the name prefixes test-beijing- and test-hangzhou-. These two Deployments each have their own nodeSelector and replicas configuration. In this way, with workloadTemplate + Pools, workloads are distributed to different regions without users having to maintain a large number of Deployment resources.
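The derivation above can be sketched in a few lines of Python. This is a simplified model for intuition only, not the real yurt-app-manager code (for instance, the real controller translates nodeSelectorTerm into node affinity on the pod spec, and its suffix generation differs):

```python
import copy
import random
import string


def render_pools(ud):
    """Sketch: derive one child Deployment per pool from the shared
    deploymentTemplate of a UnitedDeployment-like dict."""
    template = ud["spec"]["workloadTemplate"]["deploymentTemplate"]
    children = []
    for pool in ud["spec"]["topology"]["pools"]:
        child = copy.deepcopy(template)
        # Child names follow the {UnitedDeployment name}-{pool name}- prefix
        # plus a random suffix, e.g. test-beijing-rk8g8.
        suffix = "".join(
            random.choices(string.ascii_lowercase + string.digits, k=5)
        )
        child.setdefault("metadata", {})["name"] = (
            f"{ud['metadata']['name']}-{pool['name']}-{suffix}"
        )
        # Per-pool replicas and node selection override the shared template;
        # everything else is inherited as-is.
        child["spec"]["replicas"] = pool["replicas"]
        child["spec"]["template"]["spec"]["nodeSelectorTerm"] = pool[
            "nodeSelectorTerm"
        ]
        children.append(child)
    return children
```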
Problems solved by UnitedDeployment
UnitedDeployment automatically maintains multiple Deployment or StatefulSet resources through a single unitized deployment instance. Each child resource follows a unified naming convention, while Name, NodeSelector, and Replicas can still be configured differently per pool. This greatly reduces users' operational complexity in edge scenarios.
New requirements
UnitedDeployment meets most user needs, but during its promotion, customer adoption, and discussions with community members, we gradually found that in some special scenarios its features are still insufficient, for example:
- When upgrading an application's image, the user plans to verify it in one node pool first, and only roll it out to all node pools after verification succeeds.
- To speed up image pulls, users may build private image registries in different node pools, so the image name of the same application differs between node pools.
- The number of servers, their specifications, and the traffic pressure differ between node pools, so the CPU, memory, and other pod configurations of the same application differ across node pools.
- The same application may use different ConfigMap resources in different node pools.
These requirements prompted us to give each Pool in UnitedDeployment its own customization capability, allowing users to make pool-specific configurations, such as the image or pod requests and limits, according to the actual situation of each node pool. To provide maximum flexibility, after discussion we decided to add a Patch field to the Pool, allowing users to write their own patch content. The patch must follow the Kubernetes strategic merge patch specification, and its behavior is similar to the familiar kubectl patch.
A patch is added to a pool as in the following example:
```yaml
pools:
  - name: beijing
    nodeSelectorTerm:
      matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
            - beijing
    replicas: 1
    patch:
      spec:
        template:
          spec:
            containers:
              - image: nginx:1.19.3
                name: nginx
```
The content of the patch must follow the Kubernetes strategic merge patch specification. If you have used kubectl patch, you will find it easy to write; for details, see the Kubernetes documentation on updating API objects in place with kubectl patch.
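To build intuition for why the patch above only changes the image, here is a minimal Python sketch of the strategic-merge semantics relevant to this example: maps merge recursively, and the containers list merges by its name key instead of being replaced wholesale. The real Kubernetes implementation, driven by patchMergeKey annotations on the API types, covers far more cases than this sketch.

```python
def strategic_merge(base, patch):
    """Minimal sketch of strategic-merge-patch semantics for this example.

    Dicts merge key by key; the 'containers' list merges by each entry's
    'name' (its merge key in the Kubernetes pod spec); any other value in
    the patch simply replaces the base value.
    """
    if isinstance(base, dict) and isinstance(patch, dict):
        out = dict(base)
        for key, value in patch.items():
            if (
                key == "containers"
                and isinstance(value, list)
                and isinstance(base.get(key), list)
            ):
                # Merge containers by 'name' so untouched fields survive.
                merged = {c["name"]: c for c in base[key]}
                for p in value:
                    merged[p["name"]] = strategic_merge(
                        merged.get(p["name"], {}), p
                    )
                out[key] = list(merged.values())
            else:
                out[key] = strategic_merge(base.get(key), value)
        return out
    return patch
```

Applied to the example, patching only `image` on the container named nginx leaves the inherited `imagePullPolicy` intact, which is exactly the behavior users see from the Pool patch.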
Next we demonstrate the use of UnitedDeployment patch.
Feature demonstration
1. Environment preparation
- Prepare a K8s or OpenYurt cluster with at least two nodes, where one node has the label apps.openyurt.io/nodepool=beijing and the other has the label apps.openyurt.io/nodepool=hangzhou.
- The yurt-app-manager component needs to be installed in the cluster.
yurt-app-manager component:
https://github.com/openyurtio/yurt-app-manager
2. Create a UnitedDeployment instance
```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: test
  namespace: default
spec:
  selector:
    matchLabels:
      app: test
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: test
      spec:
        selector:
          matchLabels:
            app: test
        template:
          metadata:
            labels:
              app: test
          spec:
            containers:
              - image: nginx:1.18.0
                imagePullPolicy: Always
                name: nginx
  topology:
    pools:
      - name: beijing
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - beijing
        replicas: 1
      - name: hangzhou
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - hangzhou
        replicas: 2
EOF
```
The workloadTemplate in this example uses a Deployment template, and the container named nginx uses the image nginx:1.18.0. The topology defines two pools, beijing and hangzhou, with 1 and 2 replicas respectively.
3. View the Deployment created by UnitedDeployment
```shell
# kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
test-beijing-rk8g8    1/1     1            1           6m4s
test-hangzhou-kfhvj   2/2     2            2           6m4s
```
You can see that the yurt-app-manager controller has created two Deployments, corresponding to the beijing and hangzhou pools. Deployment names follow the prefix convention {UnitedDeployment name}-{pool name}-. Examining the two Deployments shows that their Replicas and NodeSelector come from the corresponding Pool, while all other configuration is inherited from the workloadTemplate.
4. View the corresponding Pod created
```shell
# kubectl get pod
NAME                                   READY   STATUS    RESTARTS   AGE
test-beijing-rk8g8-5df688fbc5-ssffj    1/1     Running   0          3m36s
test-hangzhou-kfhvj-86d7c64899-2fqdj   1/1     Running   0          3m36s
test-hangzhou-kfhvj-86d7c64899-8vxqk   1/1     Running   0          3m36s
```
You can see that one pod was created with the name prefix test-beijing, and two pods with the name prefix test-hangzhou.
5. Use patch capabilities for differentiated configuration
Use the kubectl edit ud test command to add a patch field to the beijing pool. The patch changes the image of the container named nginx to nginx:1.19.3.
The format is as follows:
```yaml
- name: beijing
  nodeSelectorTerm:
    matchExpressions:
      - key: apps.openyurt.io/nodepool
        operator: In
        values:
          - beijing
  replicas: 1
  patch:
    spec:
      template:
        spec:
          containers:
            - image: nginx:1.19.3
              name: nginx
```
6. View the Deployment configuration
Checking the Deployment prefixed test-beijing again, you can see that its container image has changed to nginx:1.19.3.
```shell
kubectl get deployments test-beijing-rk8g8 -o yaml
```
Summary
Through UnitedDeployment's workloadTemplate + Pools form, workloads can be quickly distributed to different regions by inheriting from a shared template. With the addition of the per-Pool patch capability, more flexible, differentiated configuration is possible on top of the inherited template, which covers most customers' special needs in edge scenarios.
If you have any questions about OpenYurt, search for group number 31993519 in DingTalk to join the exchange group.
Copyright Statement: content of this article is contributed spontaneously by Alibaba Cloud real-name registered users. The copyright belongs to the original author. The Alibaba Cloud Developer Community does not own its copyright, and does not assume corresponding legal responsibilities. For specific rules, please refer to the "Alibaba Cloud Developer Community User Service Agreement" and the "Alibaba Cloud Developer Community Intellectual Property Protection Guidelines". If you find suspected plagiarism in this community, fill in the infringement complaint form to report it. Once verified, the community will immediately delete the suspected infringing content.