📚 Introduction to Kubernetes (K8S)

Course content

  • What Kubernetes is, when you need it, and its architecture
  • How to install a Kubernetes cluster in three different ways: minikube, a managed cloud platform, and bare metal (3 servers)
  • How to deploy an application to the cluster and expose its service port, shown with a demo project
  • How to deploy a stateful application such as a database, and how to persist its data
  • How to use configuration files (ConfigMap) and secrets in the cluster
  • How to quickly install third-party applications with the Helm "app store"
  • How to use Ingress to provide services to the outside world

Goal: after completing the course, you will have a comprehensive understanding of Kubernetes and be able to handle all kinds of cluster deployments with ease. This courseware is meant to be studied together with the accompanying videos.

What is Kubernetes (K8S)

Kubernetes is an open-source tool developed by Google that provides cluster deployment and management for containerized applications.
The name Kubernetes comes from the Greek word for "helmsman" or "pilot". The abbreviation K8s comes from the 8 letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014.

Main features:

  • High availability: no downtime, automatic disaster recovery
  • Gray (canary) updates that don't interrupt normal business operation
  • One-command rollback to historical versions
  • Convenient scaling (application replicas, adding/removing machines) with built-in load balancing
  • A mature, complete ecosystem
Prerequisites for the course

  • Familiar with the basic use of Docker; if you don't know Docker yet, watch the video to get started quickly
  • Familiar with the Linux operating system

Different application deployment scenarios


Traditional deployment method:

Applications are deployed directly on physical machines, and resource allocation is hard to control. When a bug makes one application consume most of the machine's resources, other applications can no longer run normally, and applications cannot be isolated from each other.

Virtual machine deployment

Multiple virtual machines run on one physical machine. Each VM is a complete, independent system, which incurs a large performance overhead.

Container deployment

All containers share the host operating system. Containers are lightweight compared with VMs: low performance overhead, resource isolation, and CPU/memory can be allocated on demand.

When do you need Kubernetes

When your application runs on only one machine, docker + docker-compose is enough and keeps things simple.
When your application needs to run on 3 or 4 machines, you can still set up the runtime environment on each machine separately and add a load balancer.
But as traffic keeps growing and the machine count climbs to dozens, hundreds, or thousands, every machine addition, software update, and version rollback becomes troublesome and painful; your time is wasted on repetitive work with no technical value.

This is where Kubernetes comes into play: it lets you manage clusters of even millions of machines with ease, "destroying the enemy fleet amid talk and laughter", with everything under control, and that million-a-year salary just around the corner.

Kubernetes gives you centralized management of cluster machines and applications: adding machines, upgrading versions, and rolling back are each a single command, and zero-downtime gray (canary) updates ensure high availability, high performance, and high scalability.

Kubernetes cluster architecture


master

The master node is the control plane. It does not need high performance and does not run workloads; usually one is enough, but you can also run multiple master nodes to improve cluster availability.

worker

Worker nodes can be virtual machines or physical machines. This is where workloads run, so these machines need better performance; there are usually many of them, and machines can be added to expand the cluster. Each worker node is managed by the master node.

Important concept: the Pod

The Pod is the smallest unit that K8S schedules and manages. A Pod can contain one or more containers, and each Pod has its own virtual IP. A worker node can run multiple Pods; the master node automatically schedules each Pod onto a node based on load.


Kubernetes components

  • kube-apiserver: the API server, exposes the Kubernetes API
  • etcd: a key-value database, used as the backing store for all cluster data
  • kube-scheduler: decides which node each Pod is scheduled to run on
  • kube-controller-manager: runs the cluster controllers
  • cloud-controller-manager: interacts with cloud service providers

If you want more detail on how K8S is composed, which programs run on the master and worker nodes, and what each one does, see the detailed introduction on the official website.

💽 Install Kubernetes cluster

Installation methods

  • minikube
    A K8S cluster simulator: a single-node cluster for testing only, with master and worker on the same node
  • Managed Kubernetes on a cloud platform
    Visual setup: you can create a cluster in just a few simple steps.
    Advantages: simple installation, complete ecosystem; load balancers, storage, etc. are all provided and need only simple configuration
  • Bare-metal installation
    Requires at least two machines (one master node, one worker node); you install the Kubernetes components yourself, and configuration is a bit more troublesome.
    You can rent servers from cloud vendors billed by the hour, at low cost, and destroy them when you're done.
    Disadvantages: troublesome configuration; lacks ecosystem support such as load balancers and cloud storage.

minikube

The installation is very simple and supports various platforms and installation methods

Docker needs to be installed in advance
 # Start the cluster
minikube start
# View the nodes. kubectl is the command-line tool for interacting with a K8S cluster
kubectl get node
# Stop the cluster
minikube stop
# Tear down the cluster
minikube delete --all
# Install the cluster's visual web UI dashboard
minikube dashboard

Cloud platform setup

  • Tencent Cloud: TKE (search "container" in the console)
  • Alibaba Cloud: log in to the console and search for Kubernetes under products

Bare Metal

The master node requires these components:
  • docker (another container runtime also works)
  • kubectl: the command-line tool for interacting with the cluster
  • kubeadm: the cluster initialization tool

The worker nodes require these components:
  • docker (another container runtime also works)
  • kubelet: manages Pods and containers, keeping them healthy and stable
  • kube-proxy: a network proxy, responsible for network-related work

Start the installation

You can also try this project, which builds a K8S bare-metal cluster quickly with scripts. For better understanding, though, you should build it manually first.
 # Set the corresponding hostname on each node
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
 # Update the hosts file on all nodes
vim /etc/hosts
172.16.32.2 node1
172.16.32.6 node2
172.16.0.4 master
 # Disable SELinux on all nodes
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
 # Make sure the firewall is off on all nodes
systemctl stop firewalld
systemctl disable firewalld

Add installation source (all nodes)

 # Add the K8S install source
cat <<EOF > kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
mv kubernetes.repo /etc/yum.repos.d/

# Add the Docker install source
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install required components (all nodes)
yum install -y kubelet-1.22.4 kubectl-1.22.4 kubeadm-1.22.4 docker-ce

Note: according to student feedback, versions 1.24 and above report errors and differ from the tutorial, so it is recommended to install with the version numbers pinned as above, matching the instructor's versions.

Start kubelet and docker, and enable them at boot (all nodes)

 systemctl enable kubelet
systemctl start kubelet
systemctl enable docker
systemctl start docker

Modify docker configuration (all nodes)

 # Kubernetes officially recommends that docker etc. use systemd as the cgroup driver; otherwise kubelet fails to start
cat <<EOF > daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://ud6340vz.mirror.aliyuncs.com"]
}
EOF
mv daemon.json /etc/docker/

# Restart for the changes to take effect
systemctl daemon-reload
systemctl restart docker

Initialize the cluster with kubeadm (run only on the master node)

 # Initialize the cluster control plane
# If it fails, reset with: kubeadm reset
kubeadm init --image-repository=registry.aliyuncs.com/google_containers

# Remember to save the kubeadm join xxx command that is printed
# If you forget it, regenerate it with: kubeadm token create --print-join-command

# Copy the credentials file so kubectl has permission to access the cluster
# If other nodes need to access the cluster, copy this file from the master node to them
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

# Creating the ~/.kube/config file on other machines also gives kubectl access to the cluster
If you are interested in knowing what kubeadm init does, you can check the documentation

Add worker nodes to the cluster (run only on worker nodes)

 kubeadm join 172.16.32.10:6443 --token xxx --discovery-token-ca-cert-hash xxx

Install the network plugin, otherwise the nodes stay in the NotReady state (run on the master node)

 # Networks in mainland China very likely cannot reach this resource; look online for a domestic mirror to install flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

View the nodes from the master node (other nodes can also do this if kubectl is installed and configured).
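
Once flannel is running, a quick sanity check on the master should show every node in the Ready state:

 # all three nodes should eventually report STATUS Ready
kubectl get nodes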


🏭Deploy the application to the cluster

Deploying applications with YAML files

Running directly from the command line:

kubectl run testapp --image=ccr.ccs.tencentyun.com/k8s-tutorial/test-k8s:v1

Pod
 apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  # Define the containers; there can be more than one
  containers:
    - name: test-k8s # container name
      image: ccr.ccs.tencentyun.com/k8s-tutorial/test-k8s:v1 # image
Deployment
 apiVersion: apps/v1
kind: Deployment
metadata:
  # deployment name
  name: test-k8s
spec:
  replicas: 2
  # Used to find the associated Pods; all labels must match
  selector:
    matchLabels:
      app: test-k8s
  # Define the Pod-related data
  template:
    metadata:
      labels:
        app: test-k8s
    spec:
      # Define the containers; there can be more than one
      containers:
      - name: test-k8s # container name
        image: ccr.ccs.tencentyun.com/k8s-tutorial/test-k8s:v1 # image
A Deployment is associated with its Pods through labels.
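
You can verify the association by listing Pods with the same label selector the Deployment uses:

 # -l filters by label, matching the selector defined above
kubectl get pods -l app=test-k8s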


Deploy application demo

Deploy a Node.js web application; source code: GitHub

 # Deploy the application
kubectl apply -f app.yaml
# View the deployment
kubectl get deployment
# View the pods
kubectl get pod -o wide
# View pod details
kubectl describe pod pod-name
# View logs
kubectl logs pod-name
# Open a terminal in the Pod's container; -c container-name selects which container
kubectl exec -it pod-name -- bash
# Scale the replicas up or down
kubectl scale deployment test-k8s --replicas=5
# Forward a port from inside the cluster to the machine running kubectl
kubectl port-forward pod-name 8090:8080
# View rollout history
kubectl rollout history deployment test-k8s
# Roll back to the previous version
kubectl rollout undo deployment test-k8s
# Roll back to a specific revision
kubectl rollout undo deployment test-k8s --to-revision=2
# Delete the deployment
kubectl delete deployment test-k8s
Pod error resolution

If you run kubectl describe pod pod-name and find the following error in Events:

 networkPlugin cni failed to set up pod "test-k8s-68bb74d654-mc6b9_default" network: open /run/flannel/subnet.env: no such file or directory

Create the file /run/flannel/subnet.env on each node with the following content, then wait a moment after configuring it:

 FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
More commands
 # View all resources
kubectl get all
# Redeploy (restart)
kubectl rollout restart deployment test-k8s
# Change the image from the command line; --record logs this command in the rollout history
kubectl set image deployment test-k8s test-k8s=ccr.ccs.tencentyun.com/k8s-tutorial/test-k8s:v2-with-error --record
# Pause the deployment; while paused, changes to the deployment do not take effect until it is resumed
kubectl rollout pause deployment test-k8s
# Resume
kubectl rollout resume deployment test-k8s
# Output to a file
kubectl get deployment test-k8s -o yaml >> app2.yaml
# Delete all resources
kubectl delete all --all

More about Deployment is in the official documentation.

Assign a Pod to a particular node: nodeSelector
Limit the total amount of CPU and memory: documentation

 apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
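
The CPU/memory limits linked above are declared per container. A minimal sketch, with illustrative numbers:

 apiVersion: v1
kind: Pod
metadata:
  name: nginx-limited
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:        # the scheduler reserves at least this much
        cpu: "250m"    # 0.25 of a CPU core
        memory: "64Mi"
      limits:          # the container is throttled (CPU) or killed (memory) beyond this
        cpu: "500m"
        memory: "128Mi"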

Workload Classification

  • Deployment
    For stateless applications; all Pods are equivalent and replaceable
  • StatefulSet
    For stateful applications such as databases
  • DaemonSet
    Runs one Pod on every node; useful for node monitoring, node log collection, etc.
  • Job & CronJob
    A Job expresses a one-off task; a CronJob runs repeatedly on a schedule (see the sketch below)
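
As an illustration, a minimal CronJob sketch; the schedule and image are arbitrary examples:

 apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"   # standard cron syntax: every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "date; echo hello"]
          restartPolicy: OnFailure  # Jobs require OnFailure or Never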

Documentation

Existing Problems

  • Only one Pod is reached at a time; there is no load balancing that automatically forwards traffic to different Pods
  • Access still requires port forwarding
  • When a Pod dies and is recreated, its IP changes, and so does its name

In the next section we explain how to solve it.

🎭Service

Features

  • A Service is associated with its Pods through labels
  • A Service's lifecycle is not bound to the Pods; its IP does not change when a Pod dies and is recreated
  • Provides load balancing, automatically forwarding traffic to different Pods
  • Can provide access ports outside the cluster
  • Can be reached inside the cluster by service name

Create Service

Create a Service and associate it with the Pods labeled app: test-k8s.
service.yaml

 apiVersion: v1
kind: Service
metadata:
  name: test-k8s
spec:
  selector:
    app: test-k8s
  type: ClusterIP
  ports:
    - port: 8080        # this Service's port
      targetPort: 8080  # container port

Apply the configuration: kubectl apply -f service.yaml
View services: kubectl get svc
View service details: kubectl describe svc test-k8s. You will find that Endpoints lists the IPs of the Pods; the Service forwards traffic to them.
The default Service type is ClusterIP, which is only reachable inside the cluster. We can access it from within a Pod:
kubectl exec -it pod-name -- bash
curl http://test-k8s:8080

To access it from outside the cluster, you can use port forwarding (only suitable for temporary testing):
kubectl port-forward service/test-k8s 8888:8080

If you use minikube, you can achieve the same with: minikube service test-k8s

Exposing services externally

Above we accessed services in the cluster through port forwarding. To expose cluster services directly, use the NodePort or LoadBalancer Service types.

 apiVersion: v1
kind: Service
metadata:
  name: test-k8s
spec:
  selector:
    app: test-k8s
  # Default ClusterIP: reachable inside the cluster; NodePort: reachable on every node; LoadBalancer: load-balanced mode (requires a load balancer)
  type: NodePort
  ports:
    - port: 8080        # this Service's port
      targetPort: 8080  # container port
      nodePort: 31000   # node port, fixed range 30000 ~ 32767

Apply the configuration: kubectl apply -f service.yaml
On a node, curl http://localhost:31000/hello/easydoc now reaches the application, with load balancing; the information on the page shows requests being forwarded to different Pods:

 hello easydoc

IP 172.17.0.8, hostname: test-k8s-68bb74d654-962lh
If you use minikube: because it is a simulated cluster, your computer is not a node; the nodes are simulated by minikube, so you cannot access the service directly from your computer.

LoadBalancer can also expose services externally, but it requires load balancer support because it needs to allocate a new external IP for the service; without one, the status stays pending forever. It is rarely used this way; later we cover the more capable Ingress, which replaces it.

Multiple ports

When there are multiple ports, each one must be given a name; documentation

 apiVersion: v1
kind: Service
metadata:
  name: test-k8s
spec:
  selector:
    app: test-k8s
  type: NodePort
  ports:
    - port: 8080        # this Service's port
      name: test-k8s    # required when there are multiple ports
      targetPort: 8080  # container port
      nodePort: 31000   # node port, fixed range 30000 ~ 32767
    - port: 8090
      name: test-other
      targetPort: 8090
      nodePort: 32000

Summary

ClusterIP

The default; only reachable within the cluster

NodePort

Exposes a port on every node, giving external traffic an entry point into the cluster. The port range is fixed to 30000 ~ 32767

LoadBalancer

Requires a load balancer (usually provided by a cloud vendor; on bare metal you can install MetalLB for testing)
An additional external IP is allocated for the service
Load balancers supported by K8S: Load Balancer

Headless

Suitable for databases
When clusterIP is set to None, the Service becomes headless and no IP is allocated; its specific usage is covered later
Official website documentation

🥙StatefulSet

What is a StatefulSet

StatefulSets are used to manage stateful applications such as databases.
The applications we deployed earlier neither store data nor keep state, so their replicas can be scaled at will; every replica is identical and replaceable.
Stateful applications such as databases and Redis cannot scale replicas arbitrarily.
A StatefulSet gives each of its Pods a fixed name.

Deploy MongoDB as a StatefulSet

 apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongo
          image: mongo:4.4
          # IfNotPresent: pull from the registry only if the image is not present locally; Always: always pull; Never: only use the local image, error if missing
          imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  selector:
    app: mongodb
  type: ClusterIP
  # headless
  clusterIP: None
  ports:
    - port: 27017
      targetPort: 27017

kubectl apply -f mongo.yaml

StatefulSet Features

  • The Service's CLUSTER-IP is empty (headless), and the Pod names are fixed.
  • Pods are created and destroyed in order: created in sequence, destroyed in reverse.
  • Rebuilding a Pod does not change its name, but the IP does change, so don't connect by IP directly

The Endpoints entries additionally carry a hostname.

When you connect by the Service name, requests are randomly forwarded to one of the Pods. To connect to a specific Pod, use pod-name.service-name.
Run a temporary Pod to test the database connection:
kubectl run mongodb-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb:4.4.10-debian-10-r20 --command -- bash

Web application connects to Mongodb

Inside the cluster, services are reached by service name. To always connect to the first replica, specify: mongodb-0.mongodb
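
For example, a hedged sketch of what the web app's connection string could look like (the database name appdb is a placeholder):

 # resolves inside the cluster and always hits the first replica
mongodb://mongodb-0.mongodb:27017/appdb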

Problem

After a Pod is rebuilt, the contents of the database are lost.
In the next section, we explain how to solve this problem.

🍤Data persistence

Introduction

The Kubernetes cluster does not handle data storage for you; we can mount a disk for the database to keep its data safe.
You can choose cloud storage, a local disk, or NFS.

  • Local disk: you can mount a directory on a node, but this requires pinning the Pod to that node
  • Cloud storage: not tied to any node, unaffected by the cluster, safe and stable; it must be provided by a cloud vendor and is not available on bare-metal clusters
  • NFS: not tied to any node, unaffected by the cluster

hostPath mount example

Mounts a directory on the node into the Pod; however, this approach is discouraged, documentation
Configuration is simple, but you must manually pin the Pod to a fixed node.
For single-node testing only; not usable in multi-node clusters.
minikube provides hostPath storage, documentation

 apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  serviceName: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongo
          image: mongo:4.4
          # IfNotPresent: pull from the registry only if the image is not present locally; Always: always pull; Never: only use the local image, error if missing
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /data/db # mount path inside the container
              name: mongo-data    # volume name; must match the name defined below
      volumes:
        - name: mongo-data              # volume name
          hostPath:
            path: /data/mongo-data      # path on the node
            type: DirectoryOrCreate     # points to a directory, created automatically if it doesn't exist
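
A quick way to check that the data survives Pod rebuilds, assuming the Pod runs on the node that owns /data/mongo-data:

 # Delete the Pod; the StatefulSet recreates it automatically
kubectl delete pod mongodb-0
# On that node, the database files should still be there
ls /data/mongo-data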

Higher-level abstractions


Storage Class (SC)

Divides storage volumes into different classes, e.g. SSD, ordinary disk, local disk, to be consumed on demand. Documentation

 apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"
  fsType: ext4

Persistent Volume (PV)

Describes the details of a volume, such as disk size and access mode. Documentation, types, Local example

 apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodata
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem  # Filesystem or Block
  accessModes:
    - ReadWriteOnce       # the volume can be mounted read-write by a single node
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /root/data
  nodeAffinity:
    required:
      # Restrict the volume to a particular node by hostname
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node2

Persistent Volume Claim (PVC)

A declaration of storage requirements; think of it as an application form, and the system finds a suitable PV that satisfies it.
PVs can also be created automatically from PVCs.

 apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodata
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: "local-storage"
  resources:
    requests:
      storage: 2Gi
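
After applying, you can check that the claim has bound to a volume (with volumeBindingMode: WaitForFirstConsumer, binding only happens once a Pod uses the claim):

 # STATUS shows Bound once the PVC is matched to a PV
kubectl get pv,pvc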

Why so many layers of abstraction

  • Better division of labor: operations staff are responsible for providing the storage; developers don't need to care about disk details, they only need to file a claim.
  • Convenient for cloud vendors to offer different storage types; the configuration details don't concern developers, who again only need the claim.
  • Dynamic creation: once the developer files the claim, the provisioner can create the required storage volume automatically on demand.

Tencent Cloud Example


Local disk example

Dynamic creation is not supported for local disks; the volume must be created in advance.

 apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongo
          image: mongo:5.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /data/db
              name: mongo-data
      volumes:
        - name: mongo-data
          persistentVolumeClaim:
             claimName: mongodata
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  clusterIP: None
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    app: mongodb
  type: ClusterIP
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodata
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem  # Filesystem or Block
  accessModes:
    - ReadWriteOnce       # the volume can be mounted read-write by a single node
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /root/data
  nodeAffinity:
    required:
      # Restrict the volume to a particular node by hostname
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodata
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: "local-storage"
  resources:
    requests:
      storage: 2Gi

Problem

The database connection address is currently hard-coded in the application, and the database password also needs to be configured.
In the next section, we explain how to solve this.

📑ConfigMap & Secret

ConfigMap

The database connection address, which may change according to the deployment environment, should not be hard-coded in the code.
Kubernetes provides ConfigMap for configuring such variables conveniently. Documentation

configmap.yaml

 apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  mongoHost: mongodb-0.mongodb
 # Apply
kubectl apply -f configmap.yaml
# View
kubectl get configmap mongo-config -o yaml
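
To consume the value inside a container, reference it as an environment variable, mirroring the Secret usage shown later (MONGO_HOST is an illustrative variable name):

 env:
  - name: MONGO_HOST
    valueFrom:
      configMapKeyRef:
        name: mongo-config
        key: mongoHost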


Secret

Important data such as passwords and tokens can be put in a Secret. Documentation, Configuring Certificates

Note that the data must be Base64-encoded. Base64 tools

secret.yaml

 apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
# Opaque: arbitrary user-defined data; more types: https://kubernetes.io/zh/docs/concepts/configuration/secret/#secret-types
type: Opaque
data:
  # values must be base64-encoded: https://tools.fun/base64.html
  mongo-username: bW9uZ291c2Vy
  mongo-password: bW9uZ29wYXNz
 # Apply
kubectl apply -f secret.yaml
# View
kubectl get secret mongo-secret -o yaml


Usage

Use as an environment variable
 apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongo
          image: mongo:4.4
          # IfNotPresent: pull from the registry only if the image is not present locally; Always: always pull; Never: only use the local image, error if missing
          imagePullPolicy: IfNotPresent
          env:
          - name: MONGO_INITDB_ROOT_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongo-secret
                key: mongo-username
          - name: MONGO_INITDB_ROOT_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongo-secret
                key: mongo-password
          # To expose all of the Secret's data as container environment variables,
          # with the Secret's key names as the variable names, use envFrom instead:
          # envFrom:
          # - secretRef:
          #     name: mongo-secret
Mount as files (better for certificate files)

After mounting, files are generated in the container at the given path: one file per key, with the value as the file's content. Documentation

 apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret

🍓Helm & Namespaces

Introduction

Helm is similar to npm, pip, or Docker Hub: think of it as a software repository that lets us easily and quickly install third-party software into the cluster.
Using Helm, we can easily set up a MongoDB / MySQL replica cluster; the YAML files have already been written by others for us to use directly. Official website, application center

Install Helm

installation documentation
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Install the MongoDB example

 # Install
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-mongo bitnami/mongodb

# Specify the password and architecture
helm install my-mongo bitnami/mongodb --set architecture="replicaset",auth.rootPassword="mongopass"

# Uninstall
helm ls
helm delete my-mongo

# View the password
kubectl get secret my-mongo-mongodb -o json
kubectl get secret my-mongo-mongodb -o yaml > secret.yaml

# Temporarily run a Debian container that includes the mongo client
kubectl run mongodb-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb:4.4.10-debian-10-r20 --command -- bash

# Connect to mongodb
mongo --host "my-mongo-mongodb" -u root -p mongopass

# You can also forward the service port from the cluster to the host to access mongodb
kubectl port-forward svc/my-mongo-mongodb 27017:27017

Namespaces

If multiple applications are deployed in one cluster with everything lumped together, management becomes difficult and names can collide.
We can use namespaces to divide applications into separate spaces, the same idea as namespaces in code: purely a way of partitioning.

 # Create a namespace
kubectl create namespace testapp
# Deploy an application into the specified namespace
kubectl apply -f app.yml --namespace testapp
# Query
kubectl get pod --namespace kube-system
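
A resource can also be pinned to a namespace in its manifest instead of on the command line; a small illustrative fragment:

 metadata:
  name: test-pod
  namespace: testapp  # lands in testapp without needing --namespace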

You can quickly switch namespaces with kubens

 # Switch namespace
kubens kube-system
# Go back to the previous namespace
kubens -
# Switch cluster
kubectx minikube


✈️Ingress

Introduction

Ingress provides a unified entry point for external access to the cluster, avoiding exposing cluster ports directly to the outside world.
It works much like Nginx, forwarding requests to different services based on domain name and path.
HTTPS can be configured.

What's the difference from LoadBalancer?
A LoadBalancer exposes ports externally, which is less safe;
it cannot forward traffic to different services by domain name and path, so multiple services require multiple LoadBalancers;
and it is single-purpose: HTTPS cannot be configured.

Usage

To use Ingress, you need a load balancer plus an Ingress controller.
On a bare-metal cluster you must install a load-balancing plugin yourself; you can install MetalLB.
With a cloud provider this is configured for you automatically; otherwise your external IP will stay in the "pending" state and be unusable.

Documentation: Ingress
Deploy Ingress Controller in Minikube: nginx
Helm installation: Nginx

 apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-example
spec:
  ingressClassName: nginx
  rules:
  - host: tools.fun
    http:
      paths:
      - path: /easydoc
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 4200
      - path: /svnbucket
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 8080
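
After writing the manifest, apply it and check that the Ingress gets an address (assuming an nginx Ingress controller is installed):

 kubectl apply -f ingress.yaml
# ADDRESS stays empty until the controller / load balancer assigns one
kubectl get ingress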

Tencent Cloud Configuration Ingress Demo


🎉Other supplements

Kubernetes can manage large numbers of containerized applications, conveniently scale the cluster, and roll back versions at any time.
Kubernetes is only complete with cloud vendor support; fortunately, all the major cloud vendors already provide managed K8S cluster services, with a very complete and convenient ecosystem.
A cluster we build ourselves is called bare metal. It is great for testing and learning; you can build one out of retired computers to play with.

Managing the cluster with a web UI

If managing the cluster from the command line feels too troublesome, you can use Helm to quickly set up kubernetes-dashboard, which gives you a web interface for performing operations and management visually.
With minikube it is even simpler: a single command, minikube dashboard, is all you need.

Better practices for databases

For a stateful application such as a database, it is often better to use the database service offered directly by your cloud vendor, which runs more stably and has complete data backups.

Build a cluster with scripts

Some users on GitHub have scripted the work needed for a bare-metal setup; one script initializes the whole cluster for you: kainstall

Building a K8S cluster over the public network

User-contributed: reference document

