Author: Lao Z, operations and maintenance architect at the Shandong branch of China Telecom Digital Intelligence Technology Co., Ltd., cloud-native enthusiast, currently focused on cloud-native operations. His cloud-native technology stack covers Kubernetes, KubeSphere, DevOps, OpenStack, Ansible, and more.

KubeKey is an open-source, lightweight tool for deploying K8s clusters.

It provides a flexible, fast, and convenient way to install Kubernetes/K3s alone, or K8s/K3s together with KubeSphere and other cloud-native add-ons. It is also an effective tool for scaling and upgrading clusters.

KubeKey v2.1.0 introduced the concepts of manifest and artifact, giving users a solution for deploying K8s clusters offline.

A manifest is a text file that describes the K8s cluster information and defines what the artifact should contain.

In the past, users had to prepare the deployment tools, image tar packages, and other related binaries themselves, and the K8s versions and images each user needed were different. Now, with KubeKey, a user only needs a manifest file defining what the offline cluster environment requires, and then exports an artifact from that manifest to complete the preparation. For the offline deployment itself, only KubeKey and the artifact are needed to quickly and easily set up an image registry and a K8s cluster in the target environment.

KubeKey can generate a manifest file in two ways: run ./kk create manifest against an existing cluster to capture its contents, or write the manifest by hand.

The advantage of the first method is that it builds a 1:1 copy of the running environment, but it requires a cluster to be deployed in advance, which is not flexible and not always possible.

Therefore, this article follows the official offline-deployment documentation and writes the manifest file by hand to install and deploy in an offline environment.

Knowledge points of this article

  • Rating: entry level
  • Understand the concept of manifest and artifact
  • Learn how to write a manifest
  • Make an artifact from the manifest
  • Offline deployment of KubeSphere and Kubernetes

Demo server configuration

| Host name | IP | CPU (cores) | Memory (GB) | System disk (GB) | Data disk (GB) | Purpose |
| --- | --- | --- | --- | --- | --- | --- |
| zdevops-master | 192.168.9.9 | 2 | 4 | 40 | 200 | Ansible ops control node |
| ks-k8s-master-0 | 192.168.9.91 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker/Ceph |
| ks-k8s-master-1 | 192.168.9.92 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker/Ceph |
| ks-k8s-master-2 | 192.168.9.93 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker/Ceph |
| es-node-0 | 192.168.9.95 | 2 | 8 | 40 | 200 | ElasticSearch |
| es-node-1 | 192.168.9.96 | 2 | 8 | 40 | 200 | ElasticSearch |
| es-node-2 | 192.168.9.97 | 2 | 8 | 40 | 200 | ElasticSearch |
| harbor | 192.168.9.89 | 2 | 8 | 40 | 200 | Harbor |
| Total (8 hosts) | | 22 | 84 | 320 | 2200 | |

Software versions used in the demo environment

  • OS: CentOS-7.9-x86_64
  • KubeSphere: 3.3.0
  • Kubernetes: 1.24.1
  • KubeKey: v2.2.1
  • Ansible: 2.8.20
  • Harbor: 2.5.1

Build the offline deployment resources

Download KubeKey

 # Run on the zdevops-master ops server

# Use the CN download zone (when access to GitHub is restricted)
$ export KKZONE=cn

# Download KubeKey
$ mkdir /data/kubekey
$ cd /data/kubekey/
$ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.1 sh -

Get manifest template

Refer to https://github.com/kubesphere/kubekey/blob/master/docs/manifest-example.md

There are two reference examples, a simple version and a full version; the simple version is sufficient here.

Get the ks-installer images-list

 $ wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt

The image list in this article uses the public images that KubeSphere components publish to the Docker Hub registry. In China, it is recommended to change the prefix uniformly to registry.cn-beijing.aliyuncs.com/kubesphereio .

The full, modified image list is shown in the manifest file below.

Note that of the images under example-images , only busybox is kept; the others are not used in this article.
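The bulk prefix replacement can be scripted. Below is a minimal sketch, assuming each non-comment line of images-list.txt is a full image reference; the two sample entries are illustrative, the real file comes from the ks-installer release:

```shell
# Create a tiny sample list (illustrative entries only)
cat > images-list.txt <<'EOF'
##kubesphere-images
docker.io/kubesphere/ks-installer:v3.3.0
docker.io/calico/cni:v3.20.0
EOF

# Drop comment lines, then rewrite everything before the image name
# to the Aliyun mirror namespace
sed -e '/^#/d' \
    -e 's#^.*/#registry.cn-beijing.aliyuncs.com/kubesphereio/#' \
    images-list.txt > images-list-aliyun.txt

cat images-list-aliyun.txt
```

Review the rewritten list by hand before pasting it into the manifest; image names that collide across namespaces need individual attention.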

Get OS dependencies

 $ wget https://github.com/kubesphere/kubekey/releases/download/v2.2.1/centos7-rpms-amd64.iso

Put the ISO file in the /data/kubekey directory of the server where the offline image bundle is built.

Generate manifest file

Based on the files and information above, generate the final manifest.yaml .

Name it ks-v3.3.0-manifest.yaml :

 apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    osImage: CentOS Linux 7 (Core)
    repository:
      iso:
        localPath: "/data/kubekey/centos7-rpms-amd64.iso"
        url:
  kubernetesDistributions:
  - type: kubernetes
    version: v1.24.1
  components:
    helm: 
      version: v3.6.3
    cni: 
      version: v0.9.1
    etcd: 
      version: v3.4.13
    containerRuntimes:
    - type: containerd
      version: 1.6.4
    crictl: 
      version: v1.24.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.4.1
    docker-compose:
      version: v2.2.2
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:2.10.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:2.10.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.3.0-2.319.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/inbound-agent:4.10-2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd:v2.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd-applicationset:v0.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/dex:v2.30.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:6.2.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.38.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.38
  - registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/scope:1.13.0
  registry:
    auths: {}

Manifest modification instructions

  • Enable the harbor and docker-compose configuration items; they will be used later when pushing images to a self-built Harbor registry through KubeKey.
  • The image list in a default-generated manifest is pulled from docker.io ; here the prefix is replaced with registry.cn-beijing.aliyuncs.com/kubesphereio .
  • If the exported artifact should include operating system dependency packages (such as conntrack and chrony), either configure the ISO download address in .repository.iso.url of the operatingSystems element, or set .repository.iso.localPath to the local path of a pre-downloaded ISO package and leave the url item empty.
  • The ISO files can be downloaded from https://github.com/kubesphere/kubekey/releases/tag/v2.2.1 .
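For the URL-based alternative mentioned above, the repository block would look like this (using the same release download address as earlier in this article, with localPath left empty instead of url):

```yaml
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    osImage: CentOS Linux 7 (Core)
    repository:
      iso:
        localPath:
        url: "https://github.com/kubesphere/kubekey/releases/download/v2.2.1/centos7-rpms-amd64.iso"
```

With this setting, KubeKey downloads the ISO while exporting the artifact instead of reading it from a local path.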

Export artifact

 $ export KKZONE=cn

$ ./kk artifact export -m ks-v3.3.0-manifest.yaml -o kubesphere-v3.3.0-artifact.tar.gz

Artifact Description

  • An artifact is a tgz package, exported according to the contents of the specified manifest file, that contains the image tar packages and related binaries.
  • An artifact can be specified in KubeKey commands to initialize an image registry, create a cluster, add nodes, or upgrade a cluster. KubeKey automatically unpacks the artifact and uses the unpacked files directly when executing the command.

Export KubeKey

 $ tar zcvf kubekey-v2.2.1.tar.gz kk kubekey-v2.2.1-linux-amd64.tar.gz
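Before copying the bundle into the offline environment, it is worth listing the archive to confirm it contains both the kk binary and the original release tarball. The sketch below uses empty placeholder files in a scratch directory purely to illustrate the commands:

```shell
# Illustration only: stand-in files in a scratch directory
mkdir -p /tmp/kk-bundle-demo
cd /tmp/kk-bundle-demo
touch kk kubekey-v2.2.1-linux-amd64.tar.gz

# Package the deployment tool, then list the archive contents to verify
tar zcf kubekey-v2.2.1.tar.gz kk kubekey-v2.2.1-linux-amd64.tar.gz
tar -tzf kubekey-v2.2.1.tar.gz
```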

K8s server initialization configuration

This section performs the initial configuration of the K8s server in the offline environment.

Ansible hosts configuration

 [k8s]
ks-k8s-master-0 ansible_ssh_host=192.168.9.91  host_name=ks-k8s-master-0
ks-k8s-master-1 ansible_ssh_host=192.168.9.92  host_name=ks-k8s-master-1
ks-k8s-master-2 ansible_ssh_host=192.168.9.93  host_name=ks-k8s-master-2

[es]
es-node-0 ansible_ssh_host=192.168.9.95 host_name=es-node-0
es-node-1 ansible_ssh_host=192.168.9.96 host_name=es-node-1
es-node-2 ansible_ssh_host=192.168.9.97 host_name=es-node-2

harbor ansible_ssh_host=192.168.9.89 host_name=harbor

[servers:children]
k8s
es

[servers:vars]
ansible_connection=paramiko
ansible_ssh_user=root
ansible_ssh_pass=F@ywwpTj4bJtYwzpwCqD
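Note that the inventory above stores the root password in plaintext. As an optional hardening step (a sketch, not part of the original setup), the password can be moved into an Ansible Vault variable; `vault_ssh_pass` is a name chosen here for illustration:

```ini
# inventories/dev/hosts — reference a vaulted variable instead of a plaintext password
[servers:vars]
ansible_connection=paramiko
ansible_ssh_user=root
ansible_ssh_pass={{ vault_ssh_pass }}
```

Store `vault_ssh_pass` in an encrypted file created with `ansible-vault create inventories/dev/group_vars/servers/vault.yml`, and add `--ask-vault-pass` (or `--vault-password-file`) to the ansible commands in the following sections.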

Check server connectivity

 # Use Ansible to check server connectivity

$ cd /data/ansible/ansible-zdevops/inventories/dev/
$ source /opt/ansible2.8/bin/activate
$ ansible -m ping all

Initialize server configuration

 # Use ansible-playbook to initialize the server configuration

$ ansible-playbook ../../playbooks/init-base.yaml -l k8s

Mount the data disks

  • Mount the first data disk
 # Use ansible-playbook to initialize the host data disks
# Note: -e data_disk_path="/data" specifies the mount directory, used to store Docker container data

$ ansible-playbook ../../playbooks/init-disk.yaml -e data_disk_path="/data" -l k8s
  • Verify the mount
 # Use Ansible to verify that the data disk is formatted and mounted
$ ansible harbor -m shell -a 'df -h'

# Use Ansible to verify that the data disk is configured to mount automatically

$ ansible harbor -m shell -a 'tail -1  /etc/fstab'

Install K8s system dependencies

 # Use ansible-playbook to install the Kubernetes system dependency packages
# The playbook has a switch to enable GlusterFS storage, on by default; set the parameter to False if it is not needed

$ ansible-playbook ../../playbooks/deploy-kubesphere.yaml -e k8s_storage_glusterfs=false -l k8s

Install the cluster offline

Transfer offline deployment resources to deployment nodes

Upload the following offline deployment resources to the /data/kubekey directory of the deployment node (usually the first master node).

  • KubeKey: kubekey-v2.2.1.tar.gz
  • Artifact: kubesphere-v3.3.0-artifact.tar.gz

Run the following to extract KubeKey.

 $ cd /data/kubekey
$ tar xvf kubekey-v2.2.1.tar.gz

Create an offline cluster configuration file

  • Create configuration file
 $ ./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.24.1 -f config-sample.yaml
  • Modify the configuration file
 $ vim config-sample.yaml

Modification notes

  • Modify the node information according to the actual offline environment configuration.
  • Add the relevant information of the registry according to the actual situation.
 apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ks-k8s-master-0, address: 192.168.9.91, internalAddress: 192.168.9.91, user: root, password: "F@ywwpTj4bJtYwzpwCqD"}
  - {name: ks-k8s-master-1, address: 192.168.9.92, internalAddress: 192.168.9.92, user: root, password: "F@ywwpTj4bJtYwzpwCqD"}
  - {name: ks-k8s-master-2, address: 192.168.9.93, internalAddress: 192.168.9.93, user: root, password: "F@ywwpTj4bJtYwzpwCqD"}
  roleGroups:
    etcd:
    - ks-k8s-master-0
    - ks-k8s-master-1
    - ks-k8s-master-2
    control-plane: 
    - ks-k8s-master-0
    - ks-k8s-master-1
    - ks-k8s-master-2
    worker:
    - ks-k8s-master-0
    - ks-k8s-master-1
    - ks-k8s-master-2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    internalLoadbalancer: haproxy

    domain: lb.zdevops.com.cn
    address: ""
    port: 6443
  kubernetes:
    version: v1.24.1
    clusterName: zdevops.com.cn
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: "harbor"
    auths:
      "registry.zdevops.com.cn":
         username: admin
         password: Harbor12345
    privateRegistry: "registry.zdevops.com.cn"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []

# The remaining content is unchanged and omitted here

Create projects in Harbor

This article uses a pre-deployed Harbor to store the images. For the deployment process, refer to the author's earlier notes on installing Harbor for K8s based on KubeSphere.

The kk tool can also deploy Harbor automatically; see the official offline deployment documentation for details.

  • Download the create project script template
 $ curl -O https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/create_project_harbor.sh
  • Modify the project script according to the actual situation
 #!/usr/bin/env bash

# Harbor registry address
url="https://registry.zdevops.com.cn"

# Harbor registry user
user="admin"

# Harbor registry user password
passwd="Harbor12345"

# List of projects to create. Normally only kubesphereio is needed;
# two more are listed here to keep the variable list extensible.
harbor_projects=(library
    kubesphereio
    kubesphere
)

for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}"
done
  • Execute the script to create the project
 $ sh create_project_harbor.sh

Push offline images to Harbor repository

Push the offline images prepared in advance to the Harbor registry. This step is optional, because the images are pushed again when the cluster is created, but pushing them first is recommended to improve the odds of a successful one-shot deployment.

 $ ./kk artifact image push -f config-sample.yaml -a  kubesphere-v3.3.0-artifact.tar.gz

Create a cluster and install OS dependencies

 $ ./kk create cluster -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz --with-packages

Parameter Description

  • config-sample.yaml : the configuration file for the offline cluster.
  • kubesphere-v3.3.0-artifact.tar.gz : the artifact package exported from the manifest.
  • --with-packages : required if the operating system dependencies should be installed.

View cluster status

 $ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

When the installation completes successfully, you will see output like the following:

 **************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.9.91:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2022-06-30 14:30:19
#####################################################

Log in to the web console

Access the KubeSphere web console at http://{IP}:30880 with the default account and password admin/P@88w0rd for the subsequent configuration work.

Summary

Thank you for reading to the end. By now you should have gained the following skills:

  • Understand the concept of manifest and artifact
  • Know how to obtain the manifest and image resources
  • Write a manifest file by hand
  • Make an artifact from the manifest
  • Offline deployment of KubeSphere and Kubernetes
  • Harbor mirror repository automatically creates projects
  • Tips for using Ansible

So far, we have completed a minimal deployment of KubeSphere and a K8s cluster. This is just the beginning; there are still many configuration and usage tips to cover, so stay tuned...

This article is published by OpenWrite , a multi-post blog platform!
