Abstract: The main purpose of many seemingly "cumbersome" designs in Kubernetes is to provide developers with more "scalability" and to bring more "stability" and "security" to users.
This article is shared from the Huawei Cloud community post "How to build a complex MySQL database in a Kubernetes cluster?" by zuozewei.
Preface
In real production environments, for the sake of stability and high availability, operations teams generally do not deploy MySQL databases inside a Kubernetes cluster; they usually use a cloud vendor's database service or build the database on high-performance machines (such as bare metal servers).
For test and development environments, however, we can deploy MySQL into our own Kubernetes clusters, which greatly improves operation and maintenance efficiency and also helps accumulate experience with Kubernetes.
Easy deployment
As shown below, we only need to set the root user password (the environment variable MYSQL_ROOT_PASSWORD) to easily stand up a MySQL database using the official MySQL image.
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: mysql-min
name: mysql-min
spec:
replicas: 1
selector:
matchLabels:
app: mysql-min
template:
metadata:
labels:
app: mysql-min
spec:
containers:
- image: centos/mysql-57-centos7:latest
name: mysql-min
imagePullPolicy: IfNotPresent
env:
- name: MYSQL_ROOT_PASSWORD
value: admin@123
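Assuming the manifest above is saved as mysql-min-deploy.yaml (the file name here is only an example), it can be applied and checked as follows:
# Apply the Deployment and wait for the Pod to become ready
kubectl apply -f mysql-min-deploy.yaml
kubectl rollout status deployment/mysql-min
kubectl get pod -l app=mysql-min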
Create a Service so that the database can be accessed from both inside and outside the cluster; access from outside the cluster goes through port 30336, set by nodePort.
apiVersion: v1
kind: Service
metadata:
labels:
app: mysql-min
release: mysql-min
name: mysql-min
namespace: default
spec:
ports:
- name: mysql
port: 3306
protocol: TCP
nodePort: 30336
targetPort: mysql
selector:
app: mysql-min
#sessionAffinity currently supports two settings: "None" and "ClientIP":
#None: requests are distributed to the Pods below in round-robin fashion.
#ClientIP: requests from the same client IP are pinned to the same Pod.
sessionAffinity: None
type: NodePort
#status:
# loadBalancer: {}
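Assuming the Service manifest is saved as mysql-min-svc.yaml (again, an example file name), apply it and confirm the assigned NodePort:
kubectl apply -f mysql-min-svc.yaml
# The PORT(S) column should show 3306:30336/TCP
kubectl get svc mysql-min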
Next, access the database and verify that it is functioning properly:
# kubectl get pod # name of the current Pod
NAME READY STATUS RESTARTS AGE
mysql-min-5b5668c448-t44ml 1/1 Running 0 3h
# Access directly from inside the Pod
# kubectl exec -it mysql-min-5b5668c448-t44ml -- mysql -uroot -padmin@123
mysql> select 1;
+---+
| 1 |
+---+
| 1 |
+---+
# Access through the mysql Service from inside the cluster:
# kubectl exec -it mysql-min-5b5668c448-t44ml -- mysql -uroot -padmin@123 -hmysql
mysql> select now();
+---------------------+
| now() |
+---------------------+
| 2021-03-13 07:19:14 |
+---------------------+
# From outside the cluster, the database can be accessed through any K8S node:
# mysql -uroot -padmin@123 -hworker-1 -P30336
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
Extended deployment
Persistent storage
To ensure that the data still exists after MySQL restarts, we need to configure persistent storage for it. My experimental environment here uses a Local Persistent Volume, which means that Kubernetes directly uses a local disk directory on the host, rather than a remote storage service, to provide the "persistent" container Volume. The benefit is obvious: since this Volume uses a local disk directly, especially an SSD, its read and write performance is much better than most remote storage. This requirement is very common for private Kubernetes clusters deployed on local physical servers.
It is worth pointing out that, compared with a normal PV, once the node goes down and cannot be recovered, the data in a local storage volume may be lost. This requires the application that uses it to be able to back up and restore its data, so the data can be regularly backed up to other locations.
It is not difficult to imagine that the design of Local Persistent Volume faces two main difficulties.
The first difficulty is how to abstract the local disk into a PV.
You might ask: isn't a Local Persistent Volume just equivalent to hostPath plus nodeAffinity?
For example, a Pod could declare to use a PV of type local, and this PV could actually be a Volume of type hostPath. If the directory corresponding to this hostPath has already been created on node A in advance, then wouldn't adding nodeAffinity=nodeA to the Pod be enough to use this Volume?
In fact, you should never use a directory on the host as a PV, because the storage behavior of such a local directory is completely uncontrollable: the disk it sits on may be filled up by applications at any time, which could even bring down the entire host. Moreover, there is not even the most basic I/O isolation between different local directories.
Therefore, the storage medium corresponding to a local storage volume must be an additional disk or block device mounted on the host ("additional" meaning it is not the main disk used by the host's root filesystem). We can call this principle "one PV, one disk".
The second difficulty is: how does the scheduler ensure that a Pod is always correctly scheduled to the node where its requested local Volume resides?
The reason this is a problem is that, for a conventional PV, Kubernetes first schedules the Pod to a node, then makes the Volume directory on that machine "persistent" through "two-stage processing", and finally completes the bind mount between the Volume directory and the container.
However, for Local PV, the available disks (or block devices) on the node must be prepared in advance by the operation and maintenance personnel. Their mounting conditions on different nodes can be completely different, and some nodes may even have no such disks.
Therefore, at this time, the scheduler must be able to know the relationship between all nodes and the disks corresponding to the Local Persistent Volume, and then schedule Pods based on this information.
This principle can be called "consider volume distribution when scheduling". In the Kubernetes scheduler, there is a filter condition called VolumeBindingChecker that is specifically responsible for this matter. In Kubernetes v1.11, this filter condition has been enabled by default.
Based on the above description, before starting to use Local Persistent Volume, you first need to configure the disk or block device in the cluster. On the public cloud, this operation is equivalent to attaching an additional disk to the virtual machine. For example, GCE's Local SSD type disk is a typical example.
In the private environment we deployed, you have two ways to complete this step.
- The first one, of course, is to mount and format a usable local disk to your host, which is also the most common operation;
- Second, for the experimental environment, you can actually mount several RAM Disks (memory disks) on the host computer to simulate local disks.
Next, I will use the second method to practice on the Kubernetes cluster we deployed earlier. First, create a mount point on the host named node-1, such as /mnt/disks; then, use several RAM Disks to simulate local disks, as shown below:
# Run on node-1
$ mkdir /mnt/disks
$ for vol in vol1 vol2 vol3; do
mkdir /mnt/disks/$vol
mount -t tmpfs $vol /mnt/disks/$vol
done
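Before defining PVs for these simulated disks, it is worth quickly confirming on node-1 that the tmpfs mounts are actually in place, for example:
# Run on node-1: each vol* directory should appear as a tmpfs mount
mount | grep /mnt/disks
df -h /mnt/disks/vol1 /mnt/disks/vol2 /mnt/disks/vol3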
It should be noted that if you want other nodes to also support Local Persistent Volume, you need to perform the above operations for them as well, and ensure that the names of these disks (vol1, vol2, etc.) are not repeated. Next, we can define the corresponding PVs for these local disks, as shown below:
apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-min-pv-local
namespace: default
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
storageClassName: "mysql-min-storageclass-local"
persistentVolumeReclaimPolicy: Retain
#Indicates that local storage is used
local:
path: /mnt/disks/vol1
#When using a local PV, nodeAffinity must be defined; the Kubernetes scheduler uses the PV's nodeAffinity information to ensure that the Pod is scheduled to a Node that has the corresponding local volume.
#Before creating a local PV, you need to make sure the corresponding StorageClass has been created.
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
# Hostname of the node the Pod must be scheduled to; the local-pv resource is set up on this host.
- node-1
As you can see, in the definition of this PV: the local field specifies that it is a Local Persistent Volume; and the path field specifies the path of the local disk corresponding to this PV, namely: /mnt/disks/vol1.
Of course, this also means that if a Pod wants to use this PV, it must run on node-1. Therefore, the PV definition needs a nodeAffinity field that specifies the node name node-1. This way, when scheduling a Pod, the scheduler knows the correspondence between PVs and nodes and can make the right choice. This is exactly the main way Kubernetes implements "considering volume distribution when scheduling".
Next, create a StorageClass to describe this PV, as shown below:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: mysql-min-storageclass-local
#Specifies the provisioner of the storage class, e.g. aws, nfs, etc.; see the official documentation for the exact values.
#A storage class has a provisioner field that determines which volume plugin is used to provision PVs. This field is required.
#Since this demo uses local storage, kubernetes.io/no-provisioner is used here.
provisioner: kubernetes.io/no-provisioner
#The volumeBindingMode parameter delays PVC binding until a pod is scheduled.
volumeBindingMode: WaitForFirstConsumer
This StorageClass is named mysql-min-storageclass-local. It should be noted that in its provisioner field we specify no-provisioner. This is because Local Persistent Volume does not currently support Dynamic Provisioning, so it cannot automatically create a corresponding PV when the user creates a PVC. In other words, we cannot omit the step of creating the PV beforehand.
At the same time, this StorageClass also defines a property of volumeBindingMode=WaitForFirstConsumer. It is a very important feature in Local Persistent Volume, namely: delayed binding.
Through this delayed binding mechanism, the binding of the PVC to a PV, which would normally happen immediately, is postponed until the Pod is first scheduled by the scheduler, thus ensuring that the binding result does not interfere with normal Pod scheduling.
Next, we only need to define a very ordinary PVC, and the Pod can use the Local Persistent Volume defined above, as shown below:
apiVersion: v1
items:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
#When the PVC protection alpha feature is enabled, a PVC that is still in use by a pod is not deleted immediately when the user deletes it; its deletion is postponed until it is no longer used by any pod.
#While the PVC is in the Terminating state it is protected, and its Finalizers list contains kubernetes.io/pvc-protection:
finalizers:
- kubernetes.io/pvc-protection
labels:
app: mysql-min
release: mysql-min
name: mysql-min
namespace: default
spec:
#A PV has three access modes (accessModes):
#ReadWriteOnce (RWO): the most basic mode, read-write, but can only be mounted by a single Pod.
#ReadOnlyMany (ROX): can be mounted read-only by multiple Pods.
#ReadWriteMany (RWX): can be mounted read-write and shared by multiple Pods.
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: mysql-min-storageclass-local
#Indicates that a local disk is used; in real production NFS is generally used.
volumeMode: Filesystem
volumeName: mysql-min-pv-local
# status:
# accessModes:
# - ReadWriteOnce
# capacity:
# storage: 1Gi
kind: List
As you can see, there is nothing special about this PVC. The only thing to note is that it declares storageClassName: mysql-min-storageclass-local. Therefore, when the Volume Controller of Kubernetes sees this PVC, it will not immediately bind it to a PV.
Finally, we create the Local Persistent Volume resources from the files above:
kubectl apply -f mysql-min-pv-local.yaml
kubectl apply -f mysql-min-storageclass-local.yaml
kubectl apply -f mysql-min-pvc.yaml
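After applying these three files, the delayed-binding behavior described above can be observed directly: because of WaitForFirstConsumer, the PVC is expected to remain Pending until a Pod that uses it is scheduled.
kubectl get sc mysql-min-storageclass-local
kubectl get pv mysql-min-pv-local
# Before the Deployment below mounts it, the claim should still show Pending
kubectl get pvc mysql-min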
Then, adjust the Deployment to mount the volume:
spec:
containers:
- image: centos/mysql-57-centos7:latest
...
volumeMounts:
- name: data
mountPath: /var/lib/mysql
volumes:
- name: data
persistentVolumeClaim:
claimName: mysql-min
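To verify that the data really survives a Pod restart, a simple check is to create a test database, delete the Pod, and confirm the database is still there after the Deployment has recreated it (the pod names below are placeholders):
# Create a test database
kubectl exec -it <mysql-pod> -- mysql -uroot -padmin@123 -e "CREATE DATABASE persist_test;"
# Delete the Pod; the Deployment recreates it on node-1, where the local PV lives
kubectl delete pod <mysql-pod>
# Once the new Pod is Running, the database should still exist
kubectl exec -it <new-mysql-pod> -- mysql -uroot -padmin@123 -e "SHOW DATABASES LIKE 'persist_test';"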
Custom configuration file
By creating a ConfigMap and mounting it into the container, we can customize the MySQL configuration file. As shown below, the ConfigMap named mysql-config contains a my.cnf file:
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-config
data:
my.cnf: |
[mysqld]
default_storage_engine=innodb
skip_external_locking
lower_case_table_names=1
skip_host_cache
skip_name_resolve
max_connections=2000
innodb_buffer_pool_size=8589934592
init_connect='SET collation_connection = utf8_unicode_ci'
init_connect='SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
query_cache_type=0
innodb_flush_log_at_trx_commit = 0
sync_binlog = 0
query_cache_size = 104857600
slow_query_log =1
slow_query_log_file=/var/lib/mysql/slow-query.log
log-error=/var/lib/mysql/mysql.err
long_query_time = 0.02
table_open_cache_instances=16
table_open_cache = 6000
skip-grant-tables
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
Mount the configmap into the container:
spec:
...
containers:
- image: centos/mysql-57-centos7:latest
...
volumeMounts:
- name: mysql-config
mountPath: /etc/my.cnf.d/my.cnf
subPath: my.cnf
...
volumes:
- name: mysql-config
configMap:
name: mysql-config
...
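One way to confirm that my.cnf has been picked up after the Pod is recreated is to query a variable that the file sets, for example max_connections (the ConfigMap file name and pod name below are placeholders):
kubectl apply -f mysql-config.yaml
# With the ConfigMap above, this should report 2000
kubectl exec -it <mysql-pod> -- mysql -uroot -padmin@123 -e "SHOW VARIABLES LIKE 'max_connections';"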
Set the container time zone
The simplest and most convenient approach is to map the host's time zone file into the container.
spec:
...
containers:
- image: centos/mysql-57-centos7:latest
...
volumeMounts:
- name: localtime
readOnly: true
mountPath: /etc/localtime
...
volumes:
- name: localtime
hostPath:
type: File
path: /etc/localtime
...
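A quick way to check that the container now follows the host's time zone is to compare the time reported inside the container with the time on the host (pod name is a placeholder):
# Time inside the container
kubectl exec -it <mysql-pod> -- date
# Time on the node where the Pod runs
date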
Encrypt sensitive data
Sensitive data such as user passwords is stored in a Secret (base64-encoded) and then referenced by the Deployment through volume mounts or environment variables. In this example, we create the root user and a regular user, and store their passwords in a Secret:
apiVersion: v1
data:
#Store the passwords of all MySQL database users in the secret for unified management
mysql-password: YWRtaW4=
mysql-root-password: OVplTmswRGdoSA==
kind: Secret
metadata:
labels:
app: mysql-min
release: mysql-min
name: mysql-min
namespace: default
#There are three types of Secret:
#Opaque: a base64-encoded Secret used to store passwords, keys, etc.; the original data can be recovered with base64 --decode, so the protection is very weak.
#kubernetes.io/dockerconfigjson: used to store authentication information for a private docker registry.
#kubernetes.io/service-account-token: referenced by a serviceaccount. Kubernetes creates the corresponding secret automatically when a serviceaccount is created; if a Pod uses the serviceaccount, the secret is automatically mounted into the Pod at /run/secrets/kubernetes.io/serviceaccount.
type: Opaque
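The values under data are base64-encoded rather than truly encrypted; for example, mysql-password above (YWRtaW4=) is simply the encoding of the string admin. The plaintext below is only an example; you can generate and check such values as follows:
# Encode a plaintext password for the Secret
echo -n 'admin' | base64
# Decode an existing value to check it
echo 'YWRtaW4=' | base64 --decode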
After the Secret is created, we remove the user's plaintext password from the Deployment and reference the Secret data through environment variables instead; see the YAML snippet below.
The passwords of the root user and the MYSQL_USER user are obtained from the Secret via secretKeyRef.
spec:
  ...
  containers:
  - image: centos/mysql-57-centos7:latest
    name: mysql-min
    imagePullPolicy: IfNotPresent
    env:
    #passwords are stored in the secret
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          key: mysql-root-password
          name: mysql-min
    - name: MYSQL_PASSWORD
      valueFrom:
        secretKeyRef:
          key: mysql-password
          name: mysql-min
    - name: MYSQL_USER
      value: zuozewei
Container health check
The kubelet uses the livenessProbe to determine whether a container is unhealthy and decide whether to restart it, while a Service uses the readinessProbe to determine whether the container is ready to serve traffic, thereby ensuring service availability.
The livenessProbe configured in this example is the same as the readinessProbe, that is, if it fails to query the database three times in a row, it is defined as an exception. The detailed usage of livenessProbe and readinessProbe is beyond the scope of this article. You can refer to K8S official documents:
- Configure Liveness and Readiness Probes
- Pod Lifecycle
spec:
containers:
- image: centos/mysql-57-centos7:latest
...
#The kubelet uses the liveness probe to determine when to restart a container. For example, when an application is running but unable to make progress, the liveness probe detects the deadlock and restarts the container, so the application can keep running despite the bug.
livenessProbe:
exec:
command:
- /bin/sh
- "-c"
- MYSQL_PWD="${MYSQL_ROOT_PASSWORD}" mysql -h 127.0.0.1 -u root -e "SELECT 1"
failureThreshold: 3 #After a successful probe, the minimum number of consecutive failures before the probe is considered failed. Defaults to 3; minimum is 1.
initialDelaySeconds: 30 #Number of seconds to wait after the container starts before running the first probe.
periodSeconds: 10 #How often to run the probe. Defaults to 10 seconds; minimum is 1 second.
successThreshold: 1 #After a failure, the minimum number of consecutive successes before the probe is considered successful. Defaults to 1; must be 1 for liveness; minimum is 1.
timeoutSeconds: 5 #Probe timeout. Defaults to 1 second; minimum is 1 second.
#The kubelet uses the readiness probe to determine whether a container is ready to accept traffic. A Pod is considered ready only when all of its containers are ready. This signal controls which Pods are used as backends for a Service: Pods that are not ready are removed from the Service's load balancer.
readinessProbe:
exec:
command:
- /bin/sh
- "-c"
- MYSQL_PWD="${MYSQL_ROOT_PASSWORD}" mysql -h 127.0.0.1 -u root -e "SELECT 1"
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
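If a probe keeps failing, the Pod's events usually explain why; a quick way to inspect this (pod name is a placeholder) is:
# Probe failures appear as Unhealthy events
kubectl describe pod <mysql-pod> | grep -A 5 -i unhealthy
# Restarts triggered by a failing liveness probe show up in the RESTARTS column
kubectl get pod -l app=mysql-min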
Container initialization
Some container initialization work is well suited to an initContainer. The initContainer here ensures that the PV disk is bound successfully before the Pod starts; at the same time, it deletes the lost+found directory from the MySQL data directory so that it is not mistaken for a database.
#Init containers support all the fields and features of application containers, including resource limits, volumes, and security settings. However, resource requests and limits are handled slightly differently for init containers, and init containers do not support readiness probes, because they must run to completion before the Pod can become ready.
#If multiple init containers are specified for a Pod, they run sequentially, one at a time; each must succeed before the next can run. When all init containers have finished, Kubernetes initializes the Pod and runs the application containers as usual.
#The initContainer for mysql here ensures that the PV disk is bound successfully before the Pod starts.
initContainers:
- command:
- rm
- -fr
- /var/lib/mysql/lost+found
image: busybox:1.29.3
imagePullPolicy: IfNotPresent
name: remove-lost-found
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/mysql
name: data
restartPolicy: Always
#The scheduler is the Kubernetes scheduler; its main task is to assign the defined pods to nodes in the cluster.
schedulerName: default-scheduler
securityContext: {}
#If your Pod usually needs more than 30 seconds to shut down, make sure to increase the graceful termination grace period by setting terminationGracePeriodSeconds in the Pod YAML.
#If the container is still running after the grace period, a SIGKILL is sent and it is forcibly removed; at the same time, the related Kubernetes objects are cleaned up.
terminationGracePeriodSeconds: 30
#Define the data volume PVC that matches the PV.
volumes:
- name: data
persistentVolumeClaim:
claimName: mysql-min
- name: mysql-config
configMap:
name: mysql-config
- name: localtime
hostPath:
type: File
path: /etc/localtime
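To confirm that the init container actually ran before the MySQL container started, you can inspect its status and logs (pod name is a placeholder):
# State of the init container as recorded in the Pod status
kubectl get pod <mysql-pod> -o jsonpath='{.status.initContainerStatuses[0].state}'
# Logs of the init container, if it produced any output
kubectl logs <mysql-pod> -c remove-lost-found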
Complete Deployment
After the multi-step adjustments above, the complete Deployment of the MySQL database looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
generation: 1
labels:
app: mysql-min
release: mysql-min
name: mysql-min
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: mysql-min
strategy:
rollingUpdate:
maxSurge: 1 #During a rolling update, at most 1 extra pod is started first
maxUnavailable: 1 #Maximum number of unavailable pods allowed during a rolling update
type: RollingUpdate #Rolling update
template:
metadata:
labels:
app: mysql-min
spec:
containers:
- env:
#passwords are stored in the secret
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
key: mysql-root-password
name: mysql-min
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
key: mysql-password
name: mysql-min
- name: MYSQL_USER
value: apollo
image: centos/mysql-57-centos7:latest
imagePullPolicy: IfNotPresent
#The kubelet uses the liveness probe to determine when to restart a container. For example, when an application is running but unable to make progress, the liveness probe detects the deadlock and restarts the container, so the application can keep running despite the bug.
livenessProbe:
exec:
command:
- /bin/sh
- "-c"
- MYSQL_PWD="${MYSQL_ROOT_PASSWORD}" mysql -h 127.0.0.1 -u root -e "SELECT 1"
failureThreshold: 3 #After a successful probe, the minimum number of consecutive failures before the probe is considered failed. Defaults to 3; minimum is 1.
initialDelaySeconds: 30 #Number of seconds to wait after the container starts before running the first probe.
periodSeconds: 10 #How often to run the probe. Defaults to 10 seconds; minimum is 1 second.
successThreshold: 1 #After a failure, the minimum number of consecutive successes before the probe is considered successful. Defaults to 1; must be 1 for liveness; minimum is 1.
timeoutSeconds: 5 #Probe timeout. Defaults to 1 second; minimum is 1 second.
name: mysql-min
ports:
- containerPort: 3306
name: mysql
protocol: TCP
#The kubelet uses the readiness probe to determine whether a container is ready to accept traffic. A Pod is considered ready only when all of its containers are ready. This signal controls which Pods are used as backends for a Service: Pods that are not ready are removed from the Service's load balancer.
readinessProbe:
exec:
command:
- /bin/sh
- "-c"
- MYSQL_PWD="${MYSQL_ROOT_PASSWORD}" mysql -h 127.0.0.1 -u root -e "SELECT 1"
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 100m
memory: 256Mi
#To achieve a reasonably high level of usability, especially for active application development, fast failure debugging is important. Besides general log collection, Kubernetes can speed up debugging by surfacing the cause of major errors, which can be viewed to some extent via kubectl or the UI. You can specify a terminationMessagePath where the container writes its "death rattle", e.g. failure messages, stack traces, etc. The default path is /dev/termination-log.
terminationMessagePath: /dev/termination-log
# This field defaults to "File", meaning the termination message is read only from the termination message file. By setting terminationMessagePolicy to "FallbackToLogsOnError", you tell Kubernetes to use the last chunk of the container's log output as the termination message when the container exits with an error and the termination message file is empty. Log output is limited to 2048 bytes or 80 lines, whichever is smaller.
terminationMessagePolicy: File
#The data disk directory to use; this directory is also referenced in the initContainer.
volumeMounts:
- mountPath: /var/lib/mysql
name: data
- name: mysql-config
mountPath: /etc/my.cnf.d/my.cnf
subPath: my.cnf
- name: localtime
readOnly: true
mountPath: /etc/localtime
dnsPolicy: ClusterFirst
#Init containers support all the fields and features of application containers, including resource limits, volumes, and security settings. However, resource requests and limits are handled slightly differently for init containers, and init containers do not support readiness probes, because they must run to completion before the Pod can become ready.
#If multiple init containers are specified for a Pod, they run sequentially, one at a time; each must succeed before the next can run. When all init containers have finished, Kubernetes initializes the Pod and runs the application containers as usual.
#The initContainer for mysql here ensures that the PV disk is bound successfully before the Pod starts.
initContainers:
- command:
- rm
- -fr
- /var/lib/mysql/lost+found
image: busybox:1.29.3
imagePullPolicy: IfNotPresent
name: remove-lost-found
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/mysql
name: data
restartPolicy: Always
#The scheduler is the Kubernetes scheduler; its main task is to assign the defined pods to nodes in the cluster.
schedulerName: default-scheduler
securityContext: {}
#If your Pod usually needs more than 30 seconds to shut down, make sure to increase the graceful termination grace period by setting terminationGracePeriodSeconds in the Pod YAML.
#If the container is still running after the grace period, a SIGKILL is sent and it is forcibly removed; at the same time, the related Kubernetes objects are cleaned up.
terminationGracePeriodSeconds: 30
#Define the data volume PVC that matches the PV.
volumes:
- name: data
persistentVolumeClaim:
claimName: mysql-min
- name: mysql-config
configMap:
name: mysql-config
- name: localtime
hostPath:
type: File
path: /etc/localtime
After creating this Deployment, we have the following components:
# kubectl get all,pvc,cm,secret -l app=mysql-min
# MySQL pod:
NAME READY STATUS RESTARTS AGE
pod/mysql-min-f9c9b7b5-q9br4 1/1 Running 6 14d
# Service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mysql-min NodePort 10.96.184.130 <none> 3306:30336/TCP 16d
# MySQL Deployment:
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mysql-min 1/1 1 1 16d
# The ReplicaSets are generated automatically and managed by the Deployment
NAME DESIRED CURRENT READY AGE
replicaset.apps/mysql-min-587cf9fd48 0 0 0 16d
replicaset.apps/mysql-min-589bf8cdc5 0 0 0 16d
replicaset.apps/mysql-min-6b7447c7dd 0 0 0 14d
replicaset.apps/mysql-min-6cc9887459 0 0 0 16d
replicaset.apps/mysql-min-7759579d77 0 0 0 16d
replicaset.apps/mysql-min-84d4d6bd56 0 0 0 15d
replicaset.apps/mysql-min-f9c9b7b5 1 1 1 14d
# Pvc:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/mysql-min Bound mysql-min-pv-local 5Gi RWO mysql-min-storageclass-local 16d
# Secret:
NAME TYPE DATA AGE
secret/mysql-min Opaque 2 16d
Regular automatic backup
Considering data security, we regularly back up the database. In the K8S cluster, we can configure CronJob to implement automatic backup jobs. First, create a persistent storage for backup:
apiVersion: v1
items:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
#When the PVC protection alpha feature is enabled, a PVC that is still in use by a pod is not deleted immediately when the user deletes it; its deletion is postponed until it is no longer used by any pod.
#While the PVC is in the Terminating state it is protected, and its Finalizers list contains kubernetes.io/pvc-protection:
finalizers:
- kubernetes.io/pvc-protection
labels:
app: mysql-min
release: mysql-min
name: mysql-min-backup
namespace: default
spec:
#A PV has three access modes (accessModes):
#ReadWriteOnce (RWO): the most basic mode, read-write, but can only be mounted by a single Pod.
#ReadOnlyMany (ROX): can be mounted read-only by multiple Pods.
#ReadWriteMany (RWX): can be mounted read-write and shared by multiple Pods.
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: mysql-min-storageclass-nfs
#Indicates that a local disk is used; in real production NFS is generally used.
volumeMode: Filesystem
volumeName: mysql-min-pv-local
# status:
# accessModes:
# - ReadWriteOnce
# capacity:
# storage: 1Gi
kind: List
Then configure the actual automated job: as shown below, mysqldump backs up the mall database every day at midnight.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: mysql-backup
spec:
schedule: "0 0 * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: mysql-min-backup
imagePullPolicy: IfNotPresent
image: centos/mysql-57-centos7:latest
env:
#passwords are stored in the secret
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
key: mysql-root-password
name: mysql-min
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
key: mysql-password
name: mysql-min
- name: MYSQL_HOST
value: mysql-min
command:
- /bin/sh
- -c
- |
set -ex
mysqldump --host=$MYSQL_HOST --user=root \
--password=$MYSQL_ROOT_PASSWORD \
--routines --databases mall --single-transaction \
> /mysql-min-backup/mysql-`date +"%Y%m%d"`.sql
volumeMounts:
- name: mysql-min-backup
mountPath: /mysql-min-backup
restartPolicy: OnFailure
volumes:
- name: mysql-min-backup
persistentVolumeClaim:
claimName: mysql-min-backup
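Rather than waiting until midnight, you can trigger a one-off run from the CronJob to test the backup flow (the job name is arbitrary):
# Create a Job from the CronJob template and check its output
kubectl create job mysql-backup-manual-test --from=cronjob/mysql-backup
kubectl get job mysql-backup-manual-test
kubectl logs job/mysql-backup-manual-test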
Summary
The main purpose of many seemingly "cumbersome" designs in Kubernetes is to provide developers with more "scalability" and to bring more "stability" and a greater "sense of security" to users. The strength of these two capabilities is an important criterion for judging open-source infrastructure projects. In this example, multiple Kubernetes features were combined to build a single-instance database complex enough for production use.
The source code of this article: https://github.com/zuozewei/blog-example/tree/master/Kubernetes/k8s-mysql-pv-local
Reference materials:
[1]: "In-depth analysis of Kubernetes"
Click to follow to learn about Huawei Cloud's fresh technology for the first time~