Author: Rick
Jenkins integrates well with Kubernetes: both the controller and the build nodes (agents) can run on Kubernetes as Pods. Users familiar with Jenkins know that it supports several kinds of build nodes, such as statically and dynamically configured ones, and several ways of connecting a node to the controller, including JNLP and SSH. Users who have fully embraced container technology mostly provision build nodes by connecting to a Kubernetes cluster and dynamically starting and destroying Pods. As the types and number of build nodes grow, maintaining these Kubernetes-based nodes effectively gradually becomes a problem. In this article, I will introduce a configuration-as-code-based approach to managing and maintaining build nodes.
Configuration as Code (CasC for short) is an excellent idea: it frees Jenkins users from opening the UI again and again to modify the system configuration. The advantage of modifying the configuration through the UI is that the description next to each item makes its meaning relatively easy to understand. The disadvantages, however, are just as obvious: the configuration is hard to reuse, since even an identical configuration must be applied manually all over again in other environments; changes cannot be tracked; and errors cannot be rolled back quickly. With CasC, we can store the Jenkins system configuration in a Git repository and manage it with GitOps tools (e.g. Argo CD), finally making changes to the Jenkins system configuration a controllable and convenient job.
However, as the Jenkins configuration grows more complex, the corresponding YAML configuration file can also become large and hard to maintain.
Returning to the core problem we hope to solve, the expected solution is: maintain only the PodTemplates, and the Jenkins build nodes are maintained as a result. To get there, we need to resolve the mismatch between the PodTemplate in the Jenkins configuration and the built-in PodTemplate resource in Kubernetes, and work out how to reload the Jenkins configuration dynamically.
To solve these problems, only a single Deployment needs to be deployed. This component watches the Kubernetes built-in PodTemplate resources, writes them into the Jenkins system configuration (the CasC YAML file), and then calls the Jenkins API to reload the configuration. To take full advantage of Kubernetes, we store the CasC configuration in a ConfigMap and mount it into Jenkins as a volume.
The following are the experimental steps (this article presents the core ideas and key steps; the complete files can be found in the code repository linked at the end of the article):
Prepare a Kubernetes cluster, making sure you have sufficient access rights and that the cluster's existing workloads will not be affected. A lightweight cluster that is easy to set up for development and testing, such as Minikube, kind, or K3s, is recommended.
First, store the Jenkins system configuration in a ConfigMap in CasC YAML format, for example:
```yaml
apiVersion: v1
data:
  jenkins_user.yaml: |
    jenkins:
      mode: EXCLUSIVE
      numExecutors: 0
      scmCheckoutRetryCount: 2
      disableRememberMe: true
      clouds:
        - kubernetes:
            name: "kubernetes"
            serverUrl: "https://kubernetes.default"
            skipTlsVerify: true
kind: ConfigMap
metadata:
  name: jenkins-casc-config
  namespace: kubesphere-devops-system
```
Then, mount the above ConfigMap into the Jenkins workload. Note that the Jenkins instance used in this experiment must have the following plugins installed: kubernetes, kubernetes-credentials-provider, and configuration-as-code. A reference configuration follows:
```yaml
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/linuxsuren/jenkins:lts
          env:
            - name: CASC_JENKINS_CONFIG
              # load config files from the directory mounted from the ConfigMap
              value: "/var/jenkins_home/casc_configs/"
          volumeMounts:
            - mountPath: /var/jenkins_home/casc_configs
              name: casc-config # mount from a volume
      volumes:
        - configMap:
            defaultMode: 420
            name: jenkins-casc-config # claim a ConfigMap volume; all the CasC YAML content will be here
          name: casc-config
```
Next comes the core Kubernetes controller. Create the corresponding Deployment with a configuration like the following:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-agent
  namespace: kubesphere-devops-system
spec:
  template:
    spec:
      containers:
        - image: kubespheredev/devops-controller:dev-v3.2.1-rc.3-6726130
          name: controller
          args:
            - --enabled-controllers
            # only enable the necessary features of this controller
            - all=false,jenkinsagent=true,jenkinsconfig=true
```
The controller watches all PodTemplate resources carrying the label jenkins.agent.pod, converts them into Jenkins-style PodTemplates, and loads them into the system configuration. Typically, this takes effect with a delay of 3-5 seconds.
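As a concrete illustration, a Kubernetes built-in PodTemplate for a Go build node might look like the sketch below. Only the `jenkins.agent.pod` label key comes from the controller's watch rule described above; the label value, the `jenkins/label` pod label, and the container details are assumptions for illustration.

```yaml
apiVersion: v1
kind: PodTemplate
metadata:
  name: golang
  labels:
    jenkins.agent.pod: "true" # label key the controller watches; the value is an assumption
template:
  metadata:
    labels:
      jenkins/label: golang # hypothetical agent label for pipelines to reference
  spec:
    containers:
      - name: golang
        image: golang:1.18
        command: ["sleep"]
        args: ["infinity"]
```

After applying it with `kubectl apply`, the controller should pick it up and the corresponding node type should appear in the Jenkins system configuration a few seconds later.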
Once you have completed all the above steps and verified that the relevant components have started correctly, you can try adding a Kubernetes built-in PodTemplate. Then, you can create a pipeline to test the corresponding nodes.
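For that final test, a minimal declarative pipeline could look like the following sketch. The `golang` agent label is an assumption and must match the label declared in a PodTemplate that the controller has loaded.

```groovy
// Minimal test pipeline; the 'golang' label is hypothetical and must match
// the label of a PodTemplate picked up by the controller.
pipeline {
    agent {
        label 'golang'
    }
    stages {
        stage('Verify node') {
            steps {
                // any command proving the dynamically created Pod is usable
                sh 'go version'
            }
        }
    }
}
```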
References
- Sample configuration file
- Core controller
This article was published via OpenWrite, a multi-channel blog publishing platform.