Author: vivo Internet Server Team - Zhang Rong

1. Background

As more of vivo's business migrates to K8s, we need to deploy K8s across multiple data centers, and managing many large-scale K8s clusters efficiently and reliably is a key challenge. Each Kubernetes node requires the installation and configuration of the OS, Docker, etcd, the K8s components, and the CNI network plugin; maintaining these dependencies by hand is cumbersome and error-prone.

In the past, clusters were deployed and scaled mainly through hand-run ansible orchestration tasks: an operator would log in to a terminal, edit the cluster's inventory and vars, and execute the ansible playbooks. The main difficulties in cluster operation and maintenance were as follows:

  • Operations and maintenance are performed manually from a terminal, which leads to operator errors and configuration drift between clusters.
  • The deployment scripts are not versioned, which makes cluster upgrades and configuration changes hard to track.
  • Without test cases and CI verification, it takes a lot of time to validate a deployment script before it goes live.
  • Ansible tasks are not split into modules; they should be broken down so that roles such as K8s, etcd, and addons can be managed and executed independently.
  • Components are deployed mainly from binaries, so we have to maintain a cluster management system ourselves; the deployment process is cumbersome and inefficient.
  • Component parameter management is chaotic: parameters are passed on the command line, a single K8s component can take more than 100 flags, and the flags change with every major version.

This article shares the Kubernetes-Operator we developed. It adopts K8s' declarative API design, so cluster administrators interact only with Kubernetes-Operator CR resources, which simplifies operations and reduces their risk. A single cluster administrator can maintain thousands of K8s nodes.

2. Cluster Deployment Practice

2.1 Introduction to Cluster Deployment

Cluster deployment is built on ansible-defined tasks for the OS, Docker, etcd, K8s, and addons; a hedged playbook sketch follows the process list below.

The main process is as follows:

  1. Bootstrap OS
  2. Preinstall step
  3. Install Docker
  4. Install etcd
  5. Install Kubernetes Master
  6. Install Kubernetes node
  7. Configure network plugin
  8. Install Addons
  9. Postinstall setup
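To make each module independently operable, the entry playbook can expose every role behind its own tag and host group. The sketch below is purely illustrative, assuming hypothetical role and group names rather than our actual repository layout:

```yaml
# cluster.yml -- hypothetical modular entry point; role and group names are
# illustrative. Each module can then be run on its own, e.g.:
#   ansible-playbook cluster.yml --tags etcd
- hosts: etcd
  become: true
  roles:
    - { role: etcd, tags: ['etcd'] }

- hosts: kube-master
  become: true
  roles:
    - { role: docker, tags: ['docker'] }
    - { role: kubernetes/master, tags: ['master'] }

- hosts: kube-node
  become: true
  roles:
    - { role: docker, tags: ['docker'] }
    - { role: kubernetes/node, tags: ['node'] }
    - { role: network-plugin, tags: ['network'] }
```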

The above is the key process for one-click cluster deployment. When K8s clusters run in multiple data centers, changes to online clusters, such as patching component security vulnerabilities, launching new features, or upgrading components, must be made carefully. We therefore split the whole deployment into individual modules, so that a small change never requires executing the full ansible script, which would raise the maintenance risk. For the modular management and maintenance of Docker, etcd, K8s, the network plugin, and addons, each module has a separate ansible entry point, providing more fine-grained operations and covering most of the cluster's life-cycle management. This also simplifies the kubernetes-operator API design: the operator simply selects the corresponding yml entry for the operation to be performed.

Cluster deployment optimization operations are as follows:

(1) K8s component parameters are managed through versioned configuration files using the API provided by ComponentConfig[1]; a minimal example follows the list below.

  • [Maintainability] Command-line configuration becomes difficult to manage once a component has more than 50 parameters.
  • [Upgradability] Versioned configuration makes upgrades easier to manage, because parameters rarely change within a major community version.
  • [Programmability] Component configuration objects (JSON/YAML) can be templated and patched. With the dynamic kubelet configuration option enabled, modified parameters take effect automatically without restarting the service.
  • [Configurability] Many types of configuration cannot be expressed in key-value form.
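As a concrete illustration, here is a minimal KubeletConfiguration file of the kind ComponentConfig enables; the values are illustrative, not our production settings:

```yaml
# /var/lib/kubelet/config.yaml -- illustrative values only.
# The kubelet then starts with a single flag: --config=/var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
maxPods: 200
evictionHard:
  memory.available: "500Mi"
  nodefs.available: "10%"
```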

(2) Planned switch to kubeadm deployment

  • Use kubeadm to manage the K8s cluster life cycle and reduce the cost of maintaining the cluster ourselves (a hedged configuration sketch follows this list).
  • Use kubeadm's certificate management, such as uploading certificates to a secret to avoid copying them between hosts, and its certificate renewal function.
  • Use kubeadm's kubeconfig phase to generate the admin kubeconfig file.
  • Use other kubeadm functions, including image management, uploading the cluster configuration via upload-config, and automatically labeling and tainting control-plane nodes.
  • Install the coredns and kube-proxy addons.
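Below is a hedged sketch of what such a kubeadm configuration might look like; the version, endpoints, and registry address are illustrative assumptions:

```yaml
# kubeadm.yaml -- illustrative; applied with:
#   kubeadm init --config kubeadm.yaml --upload-certs
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.6
controlPlaneEndpoint: "apiserver-lb.example.com:6443"   # assumed LB address
imageRepository: registry.example.com/k8s               # assumed private registry
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
etcd:
  external:                      # reuse the etcd cluster deployed by ansible
    endpoints:
      - https://10.0.0.1:2379
    caFile: /etc/ssl/etcd/ssl/ca.pem
    certFile: /etc/ssl/etcd/ssl/client.pem
    keyFile: /etc/ssl/etcd/ssl/client-key.pem
```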

(3) Ansible usage specification

  • Use ansible's own modules to handle deployment logic (see the sketch after this list).
  • Avoid using hostvars.
  • Avoid using delegate_to.
  • Run changes in --limit mode to restrict them to specific hosts.
  • And so on.
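A hypothetical task snippet illustrating these conventions; the file names and the "restart kubelet" handler (defined elsewhere) are assumptions:

```yaml
# Prefer ansible's own modules over raw shell, so runs stay idempotent
# and check-mode friendly.
- name: Render the kubelet config from a versioned template
  template:
    src: kubelet-config.v1beta1.yaml.j2
    dest: /var/lib/kubelet/config.yaml
    mode: "0644"
  notify: restart kubelet

- name: Ensure kubelet is enabled and running
  systemd:
    name: kubelet
    state: started
    enabled: true

# Scope a change to specific hosts instead of the whole inventory:
#   ansible-playbook cluster.yml --tags node --limit new-node-01
```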

2.2 CI Matrix Test

Cluster deployment requires a large number of scenario tests and simulations to ensure that changes to the online environment are reliable and stable.

Some test cases of the CI matrix are as follows.

(1) Syntax and lint checks:

  • ansible-lint
  • shellcheck
  • yamllint
  • syntax-check
  • pep8

(2) Cluster deployment tests:

  • Deploy the cluster
  • Scale control nodes, compute nodes, and etcd in and out
  • Upgrade the cluster
  • Change etcd, Docker, K8s, and addons parameters, etc.

(3) Performance and functional testing:

  • Check that kube-apiserver is working
  • Check that the network between nodes is normal
  • Check that the compute nodes are healthy
  • K8s e2e tests
  • K8s conformance tests
  • Other tests

The CI process is built using open source software such as GitLab, gitlab-runner[2], ansible, and kubevirt[3].

The detailed deployment steps are as follows:

  1. Deploy gitlab-runner on the K8s cluster and connect it to the GitLab repository.
  2. Deploy the Containerized-Data-Importer (CDI)[4] component in the K8s cluster to import virtual machine image files into pvc storage.
  3. Deploy kubevirt on the K8s cluster to create virtual machines.
  4. Write gitlab-ci.yaml[5] in the code repository and plan the cluster test matrix (a hedged fragment follows this list).
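A hedged .gitlab-ci.yml fragment showing the idea; the job names, playbooks, and matrix values are illustrative, and parallel:matrix requires a reasonably recent GitLab version:

```yaml
stages:
  - lint
  - deploy-test

ansible-lint:
  stage: lint
  script:
    - ansible-lint playbooks/
    - yamllint .

deploy-cluster:
  stage: deploy-test
  parallel:
    matrix:                      # fan one job out over OS/version combinations
      - OS: [centos7, ubuntu1804]
        K8S_VERSION: [v1.17.9, v1.18.6]
  script:
    - ansible-playbook create-vms.yml -e os=$OS
    - ansible-playbook cluster.yml -e kube_version=$K8S_VERSION
    - ansible-playbook destroy-vms.yml
```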

[Figure: CI pipeline for virtual machine creation and cluster deployment]

As shown above, when a developer submits a PR in GitLab, a series of actions is triggered; the figure mainly shows virtual machine creation and cluster deployment. gitlab-runners for syntax checking and performance testing are also deployed in our cluster, and CI jobs are created through these runners to execute the CI process.

The specific CI process is as follows:

  1. A developer submits a PR.
  2. CI is triggered to run the ansible syntax checks automatically.
  3. An ansible script creates the namespace, pvc, and kubevirt virtual machine template, so that the virtual machines run on K8s. Ansible's K8s module[6] is used here to manage the creation and destruction of these resources (a hedged task sketch follows this list).
  4. An ansible script deploys the K8s cluster.
  5. After the cluster is deployed, functional verification and performance tests are run.
  6. The kubevirt, pvc, and other resources are destroyed, i.e., the virtual machines are deleted and their resources released.
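The sketch below illustrates step 3 under assumptions: the invocation uses ansible's k8s module and kubevirt's VirtualMachine API, but the names, sizes, and the ci_pipeline_id variable are made up for illustration:

```yaml
- name: Create a test VM for this CI pipeline
  k8s:
    state: present
    definition:
      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        name: "ci-vm-{{ ci_pipeline_id }}"
        namespace: "ci-{{ ci_pipeline_id }}"
      spec:
        running: true
        template:
          spec:
            domain:
              resources:
                requests:
                  memory: 4Gi
                  cpu: "2"
              devices:
                disks:
                  - name: rootdisk
                    disk:
                      bus: virtio
            volumes:
              - name: rootdisk
                persistentVolumeClaim:
                  claimName: "os-image-{{ ci_pipeline_id }}"  # PVC imported by CDI
```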

[Figure: multiple PRs running CI jobs in parallel on the K8s cluster]

As shown in the figure above, when developers submit multiple PRs, multiple jobs are created in the K8s cluster, and each job executes the CI tests above without affecting the others. This K8s-on-K8s architecture is made possible mainly by kubevirt.

The main capabilities of kubevirt are as follows:

  • It provides standard K8s APIs, so the life cycle of these resources can be managed through ansible's K8s module.
  • It reuses K8s' scheduling capabilities to manage and control resources.
  • It reuses K8s' network capabilities, with isolation by namespace, so the networks of individual test clusters do not affect each other.

3. Kubernetes-Operator Practice

3.1 Introduction to Operators

An operator is an application-specific controller that extends the K8s API to create, configure, and manage instances of complex applications on behalf of K8s users. It builds on the K8s concepts of resources and controllers, adds domain- or application-specific knowledge, and automates the life cycle of the applications it manages.

The functions of an operator can be summarized as follows:

  1. It is a kubernetes controller.
  2. It deploys or manages an application, such as a database or etcd.
  3. It performs user-defined application life-cycle management:
  • deploy
  • upgrade
  • scale out
  • backup
  • self-healing
  • and more

3.2 Introduction to Kubernetes-Operator CR

[Figure: Kubernetes-Operator custom CR resources and their relationships]

kubernetes-operator uses many custom CR resources and controllers; here is a brief introduction to their functions.

[ClusterDeployment]: The only CR configured by the administrator; MachineSet, Machine, and Cluster are its sub-resources or associated resources. ClusterDeployment is the entry point for all configuration parameters, defining all configuration such as etcd, K8s, lb, the cluster version, the network, and addons (a hedged example CR appears at the end of this subsection).

[MachineSet]: A set of cluster roles, containing the configuration and execution status of control nodes, compute nodes, and etcd.

[Machine]: The details of each machine, including the role it belongs to, the node's own information, and its execution status.

[Cluster]: Corresponds to ClusterDeployment. Its status is defined as a subresource to reduce the trigger pressure on clusterDeployment, and it is mainly used to store the state of the ansible executor's script runs.

[ansible executor]: Consists mainly of K8s' native job, configMap, and Secret resources plus a self-developed job controller. The job executes the ansible scripts; because a K8s job's status is either success or failure, the job controller can easily observe whether an ansible run succeeded or failed, and the run's details can be viewed through the log of the pod behind the job. The configmap stores the inventory and variables that ansible depends on at run time and is mounted onto the job; the secret stores the key for logging in to the hosts and is also mounted onto the job (an illustrative manifest follows).
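To make the pattern concrete, here is an illustrative job manifest of this shape; the image, paths, and resource names are assumptions, not our actual manifests:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: cluster-demo-scale-up
spec:
  backoffLimit: 0              # a failed run should surface as a failed job
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: ansible
          image: registry.example.com/ansible-executor:latest
          command: ["ansible-playbook", "-i", "/etc/ansible/inventory",
                    "-e", "@/etc/ansible/vars.yml", "scale-up.yml"]
          volumeMounts:
            - name: inventory
              mountPath: /etc/ansible      # inventory and vars from the configmap
            - name: ssh-key
              mountPath: /root/.ssh        # host login key from the secret
      volumes:
        - name: inventory
          configMap:
            name: cluster-demo-inventory
        - name: ssh-key
          secret:
            secretName: cluster-demo-ssh-key
            defaultMode: 0400
```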

[Extended controller]: Additional controllers used to extend cluster-management functions. They are customized when deploying kubernetes-operator, and you can choose the extended controllers you need. For example, the addons controller is responsible for installing and managing addon plugins; clusterinstall generates the ansible executors; remoteMachineSet is used for multi-cluster management, synchronizing machine status between the metadata cluster and the business clusters. There are also controllers for docking with public clouds, dns, lb, and so on.
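To tie the CR descriptions together, here is a hypothetical ClusterDeployment; the group/version and every field name are guesses based on the descriptions above, not the real CRD schema:

```yaml
apiVersion: kubernetesoperator.vivo.com/v1   # assumed group/version
kind: ClusterDeployment
metadata:
  name: cluster-demo
spec:
  clusterVersion: v1.18.6
  etcd:
    replicas: 3
  lb:
    vip: 10.0.0.100            # assumed apiserver load-balancer address
  network:
    plugin: calico
    podCIDR: 10.244.0.0/16
  addons:
    - coredns
    - metrics-server
  machineSets:
    - role: master
      replicas: 3
    - role: node
      replicas: 10
```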

3.3 Kubernetes-Operator Architecture

[Figure: Kubernetes-Operator overall architecture]

Vivo's applications are distributed across multiple K8s clusters in its data centers, which provide key features such as centralized multi-cloud management, unified scheduling, high availability, and failure recovery. We built a metadata cluster and a PaaS platform on it to manage the business K8s clusters. kubernetes-operator is deployed in the metadata cluster, while the machine controller runs separately to manage physical resources.

Some examples of scenarios are as follows:

Scenario one:

When a large number of applications are migrated to kubernetes, administrators assess whether the cluster needs to be expanded. First, the physical resources are approved through the PaaS platform, which generates the corresponding machine CR resources; at this point the physical machines sit in the standby pool and the machine CRs are in the idle state. When the administrator creates a ClusterDeployment, its MachineSet associates with the idle machines and claims them. From the machine resources we obtain the IP addresses of the machines to be operated on, generate the corresponding inventory and variables, create a configmap, and mount it onto a job that executes the scale-out ansible script. If the job succeeds, the machine state is updated to deployed. Meanwhile, the controller that synchronizes nodes across clusters checks whether the newly added node is ready; once it is, the machine is updated to the Ready state, completing the whole scale-out process.

Scenario two:

When one of the business clusters fails and cannot provide services, failure recovery is triggered and resources are scheduled in a unified way. The business strategy is to distribute applications across multiple business clusters and to configure a standby cluster to which no instances are allocated; the standby cluster may not actually exist yet.

There are two situations as follows:

  1. The other business clusters can absorb the services of the faulty cluster, and kubernetes-operator does not need to do anything.
  2. The other business clusters cannot absorb the services of the faulty cluster. The container platform estimates the required resources and calls kubernetes-operator to create a cluster: it creates a clusterDeployment, selects physical machines from the standby pool, obtains the IP addresses of the machines to be operated on, generates the corresponding inventory and variables, creates a configmap, and mounts it onto a job that executes the cluster-installation ansible script. Once the cluster is deployed and healthy, business migration begins.

3.4 Kubernetes-Operator execution process

[Figure: Kubernetes-Operator execution flow]

  1. The cluster administrator or the container platform triggers the creation of a ClusterDeployment CR that defines the operation to perform on the cluster.
  2. The ClusterDeployment controller detects the change.
  3. It creates the machineSet and the associated machine resources.
  4. The ClusterInstall controller detects the changes to ClusterDeployment and Machineset, collects the machine resources, creates the configmap and job, and specifies the ansible yml entry point for the operation, such as scale-out, upgrade, or installation.
  5. The scheduler detects the pod resource created by the job and schedules it.
  6. The scheduler calls the K8s client to update the pod's binding.
  7. The kubelet observes the pod's scheduling result, creates the pod, and starts executing the ansible playbook.
  8. The job controller observes the job's execution status and updates the ClusterDeployment status. Under the default policy, the job controller also cleans up the configmap and job resources.
  9. The NodeHealthy controller watches whether the K8s nodes are ready and synchronizes the machines' states.
  10. The addons controller watches whether the cluster is ready; once it is, it installs or upgrades the related addons.

4. Summary

In vivo's large-scale K8s cluster operation and maintenance practice, everything from optimizing the underlying deployment tools to running a large CI test matrix serves to keep online cluster operations safe and stable. We adopt the K8s-hosting-K8s approach to manage clusters automatically (K8s as a service): when the operator finds that the current cluster state is inconsistent with the target state, it initiates the corresponding operation process to drive the whole cluster to the target state.

At present, vivo's applications are mainly distributed across multiple K8s clusters in self-built data centers. As applications keep growing and business scenarios become more complex, we will need K8s clusters that span self-built machine rooms and public clouds to run cloud-native applications. This requires Kubernetes-Operator to integrate with public cloud infrastructure, apiserver load balancing, networking, dns, the Cloud Provider, and so on. It will be continuously improved to further reduce the difficulty of operating and maintaining K8s clusters.

