This article takes a practical look at how to combine two widely used tools, GitLab and Jenkins, to automatically deploy projects to K8s. The explanation centers on the production architecture our company currently uses, as shown in the figure:
The tools and techniques covered in this article include:
- GitLab: commonly used source code management system;
- Jenkins (Jenkins Pipeline): a commonly used automated build and deployment tool; Pipeline organizes the build and deployment steps into a pipeline;
- Docker (dockerfile): the container engine; every application ultimately runs in a Docker container, and the dockerfile is the file that defines how its image is built;
- Kubernetes: Google's open source container orchestration management system.
Environment Background:
- GitLab is already used for source code management, with the source split into branches by environment, e.g. dev (development), test (test), pre (pre-release) and master (production);
- Jenkins service has been built;
- A Docker registry is available for storing images (you can build your own with Docker Registry or Harbor, or use a cloud service; this article uses Alibaba Cloud Container Registry);
- A K8s cluster has been deployed.
Expected effect:
- Deploy applications per environment so that development, test, pre-release and production are isolated. Development, test and pre-release run in the same K8s cluster under different namespaces, while production runs on Alibaba Cloud's ACK container service;
- Keep the configuration as generic as possible, so that setting up automatic deployment for a new project only means changing a few properties in a few configuration files;
- Development, test and pre-release can be set to trigger build and deployment automatically on code push (the exact configuration depends on the actual situation); production uses a separate ACK cluster and a separate Jenkins system for deployment;
- The overall interaction flow chart is as follows:
Project configuration files
First we need to add some necessary configuration files to the project root, as shown in the figure.
They include:
- a dockerfile, used to build the Docker image;
- a docker_build.sh script, used to tag the Docker image and push it to the image registry;
- the project YAML file, the main file for deploying the project to the K8s cluster.
dockerfile
Add a file named dockerfile to the project root directory to define how the Docker image is built. Taking a Java project as an example:
# Base image
FROM xxxxxxxxxxxxxxxxxxxxxxxxxx.cr.aliyuncs.com/billion_basic/alpine-java:latest
# Copy the packaged application into the image
COPY target/JAR_NAME /application/
# Declare the working directory, otherwise dependency packages (if any) won't be found
WORKDIR /application
# Declare a volume for the logs
VOLUME /application/logs
# Startup command; also sets the time zone
ENTRYPOINT ["java","-Duser.timezone=Asia/Shanghai","-Djava.security.egd=file:/dev/./urandom"]
CMD ["-jar","-Dspring.profiles.active=SPRING_ENV","-Xms512m","-Xmx1024m","/application/JAR_NAME"]
docker_build.sh
Create a deploy folder in the project root directory; it stores each environment's configuration files for the project. The docker_build.sh script packages the project as a Docker image, tags it and pushes it to the image registry. Example for a Java project:
#!/bin/bash
# Module name
PROJECT_NAME=$1
# Workspace directory
WORKSPACE="/home/jenkins/workspace"
# Module directory
PROJECT_PATH=$WORKSPACE/pro_$PROJECT_NAME
# Directory containing the jar
JAR_PATH=$PROJECT_PATH/target
# Jar name
JAR_NAME=$PROJECT_NAME.jar
# dockerfile path
dockerFILE_PATH="$PROJECT_PATH/dockerfile"
# sed -i "s/VAR_CONTAINER_PORT1/$PROJECT_PORT/g" $PROJECT_PATH/dockerfile
sed -i "s/JAR_NAME/$JAR_NAME/g" $PROJECT_PATH/dockerfile
sed -i "s/SPRING_ENV/k8s/g" $PROJECT_PATH/dockerfile
cd $PROJECT_PATH
# Log in to the Alibaba Cloud registry
docker login xxxxxxxxxxxxxxxxxxxxxxxxxx.cr.aliyuncs.com -u 百瓶网 -p xxxxxxxxxxxxxxxxxxxxxxxxxx
# Build the module image
docker build -t $PROJECT_NAME .
docker tag $PROJECT_NAME xxxxxxxxxxxxxxxxxxxxxxxxxx.cr.aliyuncs.com/billion_pro/pro_$PROJECT_NAME:$BUILD_NUMBER
# Push to the Alibaba Cloud registry
docker push xxxxxxxxxxxxxxxxxxxxxxxxxx.cr.aliyuncs.com/billion_pro/pro_$PROJECT_NAME:$BUILD_NUMBER
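The script only reads its first argument; $BUILD_NUMBER is normally injected by Jenkins. A sketch of exercising it by hand outside Jenkins (the pro_ file prefix follows the per-environment naming used later):
# Manual dry run of the build script (sketch; BUILD_NUMBER faked)
export BUILD_NUMBER=1
chmod +x deploy/pro_docker_build.sh
./deploy/pro_docker_build.sh billionbottle-wx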
project.yaml
The project.yaml file defines everything needed to deploy the project to the K8s cluster: the project name, PV, PVC, namespace, replica count, image address, service port, liveness/readiness probes, resource requests, volume mounts and the Service:
# ------------------- PersistentVolume ------------------- #
apiVersion: v1
kind: PersistentVolume
metadata:
  # Project name
  name: pv-billionbottle-wx
  namespace: billion-pro
  labels:
    alicloud-pvname: pv-billionbottle-wx
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: nasplugin.csi.alibabacloud.com
    volumeHandle: pv-billionbottle-wx
    volumeAttributes:
      server: "xxxxxxxxxxxxx.nas.aliyuncs.com"
      path: "/k8s/java"
  mountOptions:
    - nolock,tcp,noresvport
    - vers=3
---
# ------------------- PersistentVolumeClaim ------------------- #
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-billionbottle-wx
  namespace: billion-pro
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      alicloud-pvname: pv-billionbottle-wx
---
# ------------------- Deployment ------------------- #
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: billionbottle-wx
  name: billionbottle-wx
  # Namespace
  namespace: billion-pro
spec:
  # Replica count
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: billionbottle-wx
  template:
    metadata:
      labels:
        k8s-app: billionbottle-wx
    spec:
      serviceAccountName: default
      imagePullSecrets:
        - name: registrykey-k8s
      containers:
        - name: billionbottle-wx
          # Image address (substituted by the Pipeline)
          image: $IMAGE_NAME
          imagePullPolicy: IfNotPresent
          # Health self-check (liveness probe)
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 60
            periodSeconds: 60
            successThreshold: 1
            tcpSocket:
              port: 8020
            timeoutSeconds: 1
          ports:
            # Service port
            - containerPort: 8020
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 60
            periodSeconds: 60
            successThreshold: 1
            tcpSocket:
              port: 8020
            timeoutSeconds: 1
          # Resource requests and limits
          resources:
            requests:
              memory: "1024Mi"
              cpu: "300m"
            limits:
              memory: "1024Mi"
              cpu: "300m"
          # Volume mounts
          volumeMounts:
            - name: pv-billionbottle-key
              mountPath: "/home/billionbottle/key"
            - name: pvc-billionbottle-wx
              mountPath: "/billionbottle/logs"
      volumes:
        - name: pv-billionbottle-key
          persistentVolumeClaim:
            claimName: pvc-billionbottle-key
        - name: pvc-billionbottle-wx
          persistentVolumeClaim:
            claimName: pvc-billionbottle-wx
---
# ------------------- Service ------------------- #
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: billionbottle-wx
  name: billionbottle-wx
  namespace: billion-pro
spec:
  ports:
    - port: 8020
      targetPort: 8020
  type: ClusterIP
  selector:
    k8s-app: billionbottle-wx
The image path is defined through the Pipeline, which substitutes the $IMAGE_NAME variable, and the container port is specified here rather than in the dockerfile, so the dockerfile template can be reused across environments and usually never needs modification. ENV entries can also be added here so the application reads its configuration directly from a ConfigMap. The Service type is changed from the default NodePort to ClusterIP to ensure that services only communicate inside the cluster. To deploy a different project you only need to change the environment variables, project name and a few other settings in docker_build.sh and project.yaml; the dockerfile in the root directory is reused in every environment.
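Put together, the deploy step boils down to substituting the image name and applying the manifest. A minimal sketch of what the Pipeline's "K8s deploy" stage effectively runs (the registry host and build number here are placeholders):
# Substitute the image produced by this build, then apply the manifest (sketch)
IMAGE_NAME="xxxxxxxxxxxxxxx.cn-shenzhen.cr.aliyuncs.com/billion_pro/pro_billionbottle-wx:42"
sed -i "s#\$IMAGE_NAME#${IMAGE_NAME}#" deploy/pro_billionbottle-wx.yaml
kubectl apply -f deploy/pro_billionbottle-wx.yaml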
When deploying, the K8s cluster needs to pull the image from the Docker registry, so we must first create the registry access credentials (imagePullSecrets) in K8s.
# Log in to the Docker registry; this generates /root/.docker/config.json
docker login --username=your-username registry.cn-xxxxx.aliyuncs.com
# Create the namespace billion-pro (here the namespace is named after the project's environment branch)
kubectl create namespace billion-pro
# Create a secret in the namespace billion-pro
kubectl create secret generic registrykey-k8s \
  --from-file=.dockerconfigjson=/root/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson \
  --namespace=billion-pro
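Before the first deployment it is worth confirming that the namespace and secret exist; a quick sketch:
# Verify the namespace and the image-pull secret (sketch)
kubectl get namespace billion-pro
kubectl -n billion-pro get secret registrykey-k8s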
Jenkinsfile (Pipeline)
Jenkinsfile is the Jenkins Pipeline configuration file and follows the Groovy scripting syntax. For building and deploying a Java project, the Jenkinsfile Pipeline script is as follows:
#!/usr/bin/env groovy
def env = "pro"
def registry = "xxxxxxxxxxxxxxx.cn-shenzhen.cr.aliyuncs.com"
def git_address = "http://xxxxxxxxx/billionbottle/billionbottle-wx.git"
def git_auth = "1eb0be9b-ffbd-457c-bcbf-4183d9d9fc35"
def project_name = "billionbottle-wx"
def k8sauth = "8dd4e736-c8a4-45cf-bec0-b30631d36783"
def image_name = "${registry}/billion_pro/pro_${project_name}:${BUILD_NUMBER}"

pipeline {
    environment {
        BRANCH = sh(returnStdout: true, script: 'echo $branch').trim()
    }
    agent {
        node {
            label 'master'
        }
    }
    stages {
        stage('Git') {
            steps {
                git branch: "${BRANCH}", credentialsId: "${git_auth}", url: "${git_address}"
            }
        }
        stage('maven build') {
            steps {
                sh "mvn clean package -U -DskipTests"
            }
        }
        stage('docker build') {
            steps {
                sh "chmod 755 ./deploy/${env}_docker_build.sh && ./deploy/${env}_docker_build.sh ${project_name} ${env}"
            }
        }
        stage('K8s deploy') {
            steps {
                sh "pwd && sed -i 's#\$IMAGE_NAME#${image_name}#' deploy/${env}_${project_name}.yaml"
                kubernetesDeploy configs: "deploy/${env}_${project_name}.yaml", kubeconfigId: "${k8sauth}"
            }
        }
    }
}
The Jenkinsfile's Pipeline script defines the entire automated build and deployment flow:
- Code Analyze: static code analysis with a tool such as SonarQube could be done here; it is skipped in this article;
- Maven Build: run Maven directly to build and package the project, or start a Maven container and mount the host's local Maven repository into it so dependency packages are not re-downloaded on every build (see the sketch after this list);
- docker Build: build the Docker image and push it to the image registry; images for different environments are distinguished by tag prefix, such as dev_ for development, test_ for test, pre_ for pre-release and pro_ for production;
- K8s Deploy: deploy the project (or update an existing one) with Jenkins' built-in plug-in; different environments use different parameter configurations, and the K8s cluster's access credentials can be configured directly with a kubeconfig.
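For the containerized Maven option mentioned above, a minimal sketch (the maven:3.8-jdk-8 image tag is an assumption; pick one matching your JDK):
# Run the Maven build in a container, mounting the host's local repository
# so dependencies are cached between builds (sketch; image tag assumed)
docker run --rm \
  -v "$PWD":/usr/src/app \
  -v "$HOME/.m2":/root/.m2 \
  -w /usr/src/app \
  maven:3.8-jdk-8 \
  mvn clean package -U -DskipTests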
Jenkins configuration
Jenkins task configuration
Create a Pipeline task in Jenkins, as shown in the figure:
Configure the build trigger and set the target branch to the master branch, as shown in the figure:
Configure the pipeline: select "Pipeline script" and configure the Pipeline script file, the project's Git address, the credentials for pulling the source code, and so on, as shown in the figure:
The credentials referenced in the figure above need to be configured in Jenkins beforehand, as shown below:
Save to finish the Jenkins configuration for the project's production environment. Other environments are configured the same way; just make sure each environment is tied to its corresponding branch.
K8s cluster features
K8s is a container-based cluster orchestration engine, with capabilities such as cluster scaling, rolling update and rollback, elastic scaling, self-healing and service discovery. Based on the actual state of our production environment, this section focuses on a few commonly used features; for details on everything else, see the official Kubernetes website.
Kubernetes Architecture Diagram
From a macro perspective, the overall architecture of Kubernetes comprises the Master, the Nodes and Etcd.
The Master is the control node responsible for managing the entire kubernetes cluster. It contains the Api Server, Scheduler and Controller components, all of which interact with Etcd to store data.
- Api Server: It mainly provides a unified entry for resource operations, which shields the direct interaction with Etcd. The functions include security, registration and discovery;
- Scheduler: Responsible for scheduling pods to Node according to certain scheduling rules;
- Controller: The resource control center to ensure that the resource is in the expected state.
A Node is a worker node. It provides the computing power of the cluster and is where containers actually run; it hosts the running containers, the kubelet and kube-proxy.
- kubelet: manages the container life cycle and, together with cAdvisor, handles monitoring, health checks and the periodic reporting of node status;
- kube-proxy: provides service discovery and load balancing inside the cluster via Services, watching for service/endpoints changes and refreshing the load balancing accordingly.
Container Orchestration
There are many orchestration-related control resources in Kubernetes, such as deployment for orchestrating stateless applications, statefulset for orchestrating stateful applications, daemonset for orchestrating daemon processes, and job/cronjob for orchestrating offline tasks.
Let's take our production deployments as an example. Deployment, ReplicaSet and Pod are in a layer-by-layer control relationship: in short, the ReplicaSet controls the number of pods, and the Deployment controls the version attribute of the ReplicaSet. This design pattern also provides the basis for the two most fundamental orchestration actions: horizontal scaling (quantity control) and update/rollback (version control).
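You can observe this layered relationship directly with kubectl, using the label from project.yaml (a sketch):
# Deployment -> ReplicaSet -> Pod, all carrying the same label (sketch)
kubectl -n billion-pro get deployment,replicaset,pods -l k8s-app=billionbottle-wx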
Horizontal scaling
Horizontal scaling is very easy to understand: we only need to change the number of pod replicas controlled by the ReplicaSet, for example from 2 to 3, and the scale-out is complete; scaling in is the reverse.
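With kubectl this is a one-liner against the Deployment from project.yaml (a sketch):
# Scale out from 2 to 3 replicas, then back in (sketch)
kubectl -n billion-pro scale deployment billionbottle-wx --replicas=3
kubectl -n billion-pro scale deployment billionbottle-wx --replicas=2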
Rolling Update
Rolling update is the default deployment strategy in K8s. It replaces pods of the previous application version with pods of the new version one by one, without any cluster downtime, as shown in the figure:
In practice we can configure RollingUpdateStrategy to control a rolling update, and two options let us fine-tune the process (a sketch of setting them follows the list):
- maxSurge: the number of pods that can be created above the desired replica count during an update; it can be an absolute number or a percentage of the replica count, and defaults to 25%;
- maxUnavailable: the number of pods that may be unavailable during the update; it can be an absolute number or a percentage of the replica count, and defaults to 25%.
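A sketch of setting these two options on the existing Deployment with a strategic merge patch, and of watching or undoing a rollout (the values shown are just the defaults):
# Tune the rolling update strategy (sketch; 25%/25% are the defaults)
kubectl -n billion-pro patch deployment billionbottle-wx -p '
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
'
# Follow the rollout, and roll back if something goes wrong
kubectl -n billion-pro rollout status deployment/billionbottle-wx
kubectl -n billion-pro rollout undo deployment/billionbottle-wx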
Service
Before getting into microservices, we need to understand a very important resource object: the Service.
In microservice terms, a pod corresponds to an instance and a Service corresponds to a microservice. In the course of service invocation, Services solve two problems:
- pod IPs are not fixed, and making network calls against ever-changing IPs is impractical;
- service calls need to be load balanced across pods.
A Service selects the appropriate pods through its label selector and builds an endpoints object, i.e. a pod load-balancing list. In practice we usually attach a label such as app=xxx to all pod instances of a microservice and, at the same time, create a Service for that microservice with the label selector app=xxx.
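You can inspect the endpoints list that a Service builds from its label selector (a sketch against the Service defined earlier):
# One endpoints entry per matching pod (sketch)
kubectl -n billion-pro get endpoints billionbottle-wx
kubectl -n billion-pro get pods -l k8s-app=billionbottle-wx -o wide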
Networking in K8s
K8s network communication is built on three basic connectivity requirements:
- nodes and pods can communicate with each other;
- pods on the same node can communicate with each other;
- pods on different nodes can communicate with each other.
Simply put, pods communicate with each other through the cni0/docker0 bridge, and a node reaches its pods through the same bridge.
Cross-node pod communication can be implemented in many ways, including the common flannel vxlan/host-gw modes. Flannel learns the network information of other nodes through etcd and maintains a routing table on the local node, which ultimately allows pods on different hosts to communicate.
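A quick way to watch cross-node pod communication in action (a sketch; the pod name and IP are placeholders you read off the first command, and ping must exist in the image):
# List pod IPs and their nodes, then ping from one pod to another across nodes (sketch)
kubectl -n billion-pro get pods -o wide
kubectl -n billion-pro exec <pod-on-node-a> -- ping -c 3 <ip-of-pod-on-node-b>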
Summary
So far we have covered the basic components in our production architecture, how they run, and how microservices run in Kubernetes. Other pieces, such as the configuration center and monitoring/alerting, have not been introduced in detail yet; we will strive to write that part up as soon as possible.