Introduction to KubeSphere
Kubernetes is a complex container orchestration platform with a steep learning curve. KubeSphere productizes and abstracts the underlying Kubernetes into a cloud-native operating system. Put simply: just as Kubernetes shields the differences between underlying container runtimes, KubeSphere shields the differences between underlying Kubernetes clusters, addressing both the high barrier to entry of K8s and the pain points of the cloud-native tooling ecosystem. With a few clicks in its visual interface you can schedule Pods to different cluster nodes without writing any YAML.
The figure above shows the functional architecture of KubeSphere. As you can see, KubeSphere covers many application scenarios, such as microservices, DevOps, application management, observability, and security. Each scenario's ecosystem contains components designed to be friendly to developers and operators, and all components are pluggable, so users are free to choose which ones to enable.
Starting from v4.0, KubeSphere provides a pluggable front-end and back-end architecture and framework. Any third-party partner or ISV can develop the functional plug-ins it needs on top of the open KubeSphere 4.0 framework, and these plug-ins offer a UI experience fully consistent with KubeSphere, forming a more powerful application ecosystem. The relationship is like that between macOS and the App Store: any enterprise or team can publish its own plug-ins to the store, flexibly meet the needs of different users, and cooperate with KubeSphere in the community for mutual benefit.
Introduction to GitLab
Jihu GitLab (极狐 GitLab) is an integrated DevOps platform that can loosely be understood as the Chinese distribution of GitLab. It is a product of JiHu (GitLab) Inc., a company established under the "Sino-foreign joint venture 3.0" model, which operates independently in China and provides domestic users with a localized DevOps platform and support services.
Jihu GitLab is open source; anyone can take part in its development, and the code is hosted on the Jihulab SaaS: https://jihulab.com/gitlab-cn/gitlab . Its integrated DevOps capabilities cover the entire software development life cycle (from planning to operations), and it has built-in security features for building a DevSecOps system with out-of-the-box security capabilities.
More importantly, GitLab supports both self-managed (private) deployment and SaaS. Private deployment supports multiple installation methods, including cloud-native installation. Combining KubeSphere and GitLab therefore makes it possible to build a continuous delivery system suited to the cloud-native era.
Install GitLab and Runner on KubeSphere
Deploying GitLab on KubeSphere is currently very convenient: a one-click deployment from KubeSphere's App Store is all it takes.
The App Store and application lifecycle management are distinctive features of KubeSphere, which provides users with a Helm-based app store for managing application life cycles. Since version 3.2.0, KubeSphere has supported "dynamically loading the App Store": partners can apply to integrate an application's Helm Chart into the KubeSphere App Store, and once the relevant pull requests are merged, the store can load the application dynamically, no longer constrained by the KubeSphere release cycle. GitLab has already published its Helm Chart to the KubeSphere App Store through dynamic loading.
Install Jihu GitLab
Simply select jh-gitlab in the red box in the figure below to start deployment.
The next step is to modify some parameters, namely the values in the Helm Chart.
Modify them according to your actual situation. My private environment does not need Ingress and can be reached directly through the Cluster IP, so I set all domain names to the Service name. You should also disable the Runner installation here and install the Runner separately later. Other parameters can be adjusted as you see fit; for example, I disabled Cert-Manager and Ingress-NGINX.
Check out the created workload:
After the deployment is complete, you can access GitLab through the set domain name.
The default username is root, and the initial password can be obtained with the following command (on macOS, use base64 -D instead of base64 -d):
$ kubectl -n jh-gitlab get secret jh-gitlab-gitlab-initial-root-password -o go-template --template='{{.data.password}}' | base64 -d
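The decode step is needed because Kubernetes stores Secret values base64-encoded (not encrypted). A self-contained sketch, using a made-up password rather than the real Secret, of what the pipe above does:

```shell
# Illustration with a made-up password -- Kubernetes stores Secret
# values base64-encoded, so they must be decoded after retrieval.
encoded=$(printf '%s' 'MyS3cretPassw0rd' | base64)
echo "$encoded"                      # the form stored in the Secret
printf '%s' "$encoded" | base64 -d   # the usable plaintext password
```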
After logging in, first create a project; it will be used in the following sections to install the Runner and run the demo.
Install Jihu GitLab Runner
GitLab Runner is only one component of GitLab, so it cannot be installed through the App Store. Besides the App Store, KubeSphere can also install applications from application repositories and application templates. An application repository is a looser version of the App Store: while the App Store is shared by all users, an application repository is a personal app store that requires no approval and exists only in your own cluster. Simply import the URL of an application's Helm Chart repository to make its contents available as applications. Here we import GitLab's Helm Chart repository.
Then click "Create" under "Apps".
Then select "From Application Template", and select GitLab in the drop-down box of the pop-up panel.
Select gitlab-runner, then set the application name, continue to click Next, and start modifying the deployment parameters.
Among them, gitlabUrl is the address of GitLab's web interface, and runnerRegistrationToken can be set to the registration token of the Specific Runner of the project that will use this Runner, for example:
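As a sketch, these two values in the Helm values might look like the following (the token is a placeholder; take the real one from the project's Settings → CI/CD → Runners page):

```yaml
# Placeholder values -- substitute your own GitLab address and
# the project's Specific Runner registration token.
gitlabUrl: 'https://jihulab.com/'
runnerRegistrationToken: '<your-runner-registration-token>'
```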
You also need to enable RBAC:

rbac:
  create: true
  rules: []
  clusterWideAccess: false
  podSecurityPolicy:
    enabled: false
    resourceNames:
      - gitlab-runner
Other parameters can be adjusted according to your actual situation. After modifying them, click Next to start the installation. Once installation completes, you can enter the Pod to check the registration status:
$ kubectl -n jh-gitlab get pod -l release=gitlab-runner
NAME READY STATUS RESTARTS AGE
gitlab-runner-gitlab-runner-c7c999dfc-wgg56 1/1 Running 0 61s
$ kubectl -n jh-gitlab exec -it gitlab-runner-gitlab-runner-c7c999dfc-wgg56 -- bash
Defaulted container "gitlab-runner-gitlab-runner" out of: gitlab-runner-gitlab-runner, configure (init)
bash-5.0$ gitlab-runner list
Runtime platform arch=amd64 os=linux pid=106 revision=f761588f version=14.10.1
Listing configured runners ConfigFile=/home/gitlab-runner/.gitlab-runner/config.toml
gitlab-runner-gitlab-runner-c7c999dfc-wgg56 Executor=kubernetes Token=dSz6WoJzpD5bjkDhP5xN URL=https://jihulab.com/
bash-5.0$
You can see that the gitlab-runner command is built into the Pod, and that there is a successfully registered Runner instance gitlab-runner-gitlab-runner-c7c999dfc-wgg56, which is the Runner just installed from the application template. You can also see the registered Runner in the GitLab web UI.
CI/CD Demo
Next, we demonstrate the working principle of GitLab's CI/CD pipeline through a simple pipeline example. Before demonstrating the pipeline, let's understand a few basic concepts.
A Jihu GitLab pipeline consists of two core components:
- Job: describes the task to be performed;
- Stage: defines the order in which Jobs are executed.
A pipeline (Pipeline) is the set of Jobs run across the Stages and can contain multiple phases: compile, test, deploy, and so on. Any commit or Merge Request can trigger a pipeline.
The Runner is responsible for executing Jobs. A Runner itself is a daemon process running on some machine, similar to a Jenkins agent. From the Runner's own point of view there are no runner types; it simply registers with the specified GitLab instance using a token and a URL.
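To make the Stage/Job relationship concrete, here is a minimal illustrative .gitlab-ci.yml (not the demo pipeline used later; the job names are made up):

```yaml
# Minimal illustration: two Stages executed in order, each with one Job.
stages:
  - test
  - build

unit-test:          # a Job in the "test" stage
  stage: test
  script:
    - echo "running tests"

compile:            # a Job in the "build" stage; runs after "test" succeeds
  stage: build
  script:
    - echo "building the binary"
```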
After talking about the basic concepts, let's go directly to the example repository.
This sample application is an HTTP Server written in Go. The code is very simple, so I won't explain it.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello this is kubesphere")
}

func main() {
	http.HandleFunc("/ks", handler)
	log.Fatal(http.ListenAndServe(":9999", nil))
}
The pipeline definition file is .gitlab-ci.yml, and its content is as follows:
stages:
  - build
  - deploy

build:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  stage: build
  tags:
    - kubernetes
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:1.0.0"

deploy:
  image: bitnami/kubectl:latest
  stage: deploy
  tags:
    - kubernetes
  variables:
    KUBERNETES_SERVICE_ACCOUNT_OVERWRITE: jh-gitlab
  only:
    - main
  script:
    - kubectl -n jh-gitlab apply -f deployment.yaml
Since recent versions of Kubernetes have dropped Docker as a runtime in favor of Containerd, we cannot use Docker to build images here. Instead we can use Kaniko, an open-source tool from Google for building container images. Unlike Docker, Kaniko does not depend on the Docker daemon: it executes the Dockerfile instructions line by line entirely in userspace, which makes it possible to build images in environments where no Docker daemon is available, such as a standard Kubernetes cluster.
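The first script line of the build job writes Kaniko's registry credentials into a Docker-style config.json. As an illustration with placeholder credentials (in a real run GitLab CI injects the CI_REGISTRY* variables automatically), the generated file looks like this:

```shell
# Placeholder credentials -- in a real pipeline GitLab CI injects
# CI_REGISTRY, CI_REGISTRY_USER and CI_REGISTRY_PASSWORD automatically.
CI_REGISTRY="registry.example.com"
CI_REGISTRY_USER="ci-user"
CI_REGISTRY_PASSWORD="ci-pass"

# "user:password" base64-encoded, the format config.json's auth field expects.
auth=$(printf "%s:%s" "$CI_REGISTRY_USER" "$CI_REGISTRY_PASSWORD" | base64 | tr -d '\n')
printf '{"auths":{"%s":{"auth":"%s"}}}\n' "$CI_REGISTRY" "$auth"
# -> {"auths":{"registry.example.com":{"auth":"Y2ktdXNlcjpjaS1wYXNz"}}}
```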
This pipeline contains two stages: build and deploy. Note that in the deploy stage, Pods in Kubernetes have no permission to create workloads by default, so we need to create a new ServiceAccount and bind it to a ClusterRole that has the required permissions.
# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jh-gitlab
  namespace: jh-gitlab
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: jh-gitlab
    namespace: jh-gitlab
$ kubectl apply -f rbac.yaml
Finally, let the Runner use this new ServiceAccount by setting:

variables:
  KUBERNETES_SERVICE_ACCOUNT_OVERWRITE: jh-gitlab
But this alone is not enough: by default the pipeline has no permission to override the Runner's ServiceAccount. We also need to edit the Runner's Deployment manifest to grant it that permission.
Pay attention to the highlighted part on the right side of the figure: it allows Pods in the jh-gitlab namespace to override their ServiceAccount. After editing, click Update.
Here is my complete configuration for your reference:
imagePullPolicy: IfNotPresent
gitlabUrl: 'https://jihulab.com/'
runnerRegistrationToken: GR1348941fycCAhY3_LnRFPqy3DL4
terminationGracePeriodSeconds: 3600
concurrent: 10
checkInterval: 30
sessionServer:
  enabled: false
rbac:
  create: true
  rules: []
  clusterWideAccess: false
  podSecurityPolicy:
    enabled: false
    resourceNames:
      - gitlab-runner
metrics:
  enabled: false
  portName: metrics
  port: 9252
  serviceMonitor:
    enabled: false
service:
  enabled: false
  type: ClusterIP
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "{{.Release.Namespace}}"
        image = "ubuntu:16.04"
  privileged: true
  tags: "kubernetes"
  cache: {}
  builds: {}
  services: {}
  helpers: {}
securityContext:
  runAsUser: 100
  fsGroup: 65533
resources: {}
affinity: {}
nodeSelector: {}
tolerations: []
envVars:
  - name: KUBERNETES_SERVICE_ACCOUNT_OVERWRITE_ALLOWED
    value: jh-gitlab
hostAliases: []
podAnnotations: {}
podLabels: {}
secrets: []
configMaps: {}
Now modify any file in the repository (the README, for instance) to trigger the pipeline.
You can see that the pipeline was triggered and executed successfully. Finally, let's check whether the built image was deployed.
From the KubeSphere UI, you can see that the application has been deployed successfully. Now test whether the HTTP Server works properly:
$ kubectl -n jh-gitlab get pod -l app=cicd-demo -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cicd-demo-86d7fb797c-428xs 1/1 Running 0 4m40s 10.233.65.81 k3s-worker02 <none> <none>
$ curl http://10.233.65.81:9999/ks
Hello this is kubesphere
Summary
This article introduced KubeSphere and GitLab and their respective strengths, and discussed how to combine them to build a continuous delivery system for the cloud-native era. Finally, a simple pipeline example demonstrated how GitLab pipelines work.
As the example shows, the CD part (the deploy stage) is still cumbersome: extra tooling (kubectl) must be installed and configured, Kubernetes has to authorize it, and if Kubernetes runs on a cloud platform, the platform must authorize it as well. Most importantly, this approach cannot sense the state of the deployment: once the deployment finishes, it has no way of knowing whether the workload is actually serving.
Is there a way to solve these problems? In the next article, I will show you how GitOps addresses them.