
Hello everyone, I am Xiao Cai, a Cai striving to become more than just a rookie in the Internet industry. I can go easy or go hard: a like is the easy way, reading for free is the hard way!
~ Remember to give it a like, a favorite, and a share after reading!

This article mainly introduces the use of data storage in k8s.

If necessary, you can refer back to it as you follow along.

If it helps, don't forget to leave a like ❥

The WeChat public account is up and running; students who haven't followed it yet, please remember to do so!

Our k8s journey has come this far: we have already covered Namespace, Pod, and PodController, which is more than half of the resource usage methods. In this article we continue and learn how k8s handles data storage!

A Pod is the smallest control unit in kubernetes. All containers run inside pods, and a pod can contain one or more containers.

We already understand this concept, so it is not hard to see that a container's life cycle can be very short. When a pod runs into problems, the pod controller will frequently create and destroy pods, and each pod is independent, so the data stored inside a container is cleared as well. This outcome is undoubtedly a fatal blow. At this point some of you might say: Docker has data mounts, so k8s surely has them too, and we can use data mounts to solve the problem. Well, congratulations, you are right. k8s not only supports data mounts, its supported features are quite powerful. Without further ado, let's enter the world of data~

data storage

k8s has the concept of a Volume. A Volume is a shared directory in a Pod that can be accessed by multiple containers. In k8s, a Volume is defined on the pod, and then the containers in that pod mount it to specific file directories. k8s uses Volumes to share data between different containers in the same pod and to persist data. The life cycle of a Volume is not tied to any single container in the pod: when a container terminates or restarts, the data in the Volume is not lost.

Volume supports the following common types:

  • Basic storage: EmptyDir, HostPath, NFS
  • Advanced storage: PV, PVC
  • Configuration storage: ConfigMap, Secret

In addition to those listed above, there are also gcePersistentDisk, awsElasticBlockStore, azureFileVolume, and azureDisk, but since they are less commonly used I won't go into them here. Let's take a detailed look at how to use each kind of storage!

1. Basic storage

1)EmptyDir

This is the most basic Volume type: an EmptyDir is an empty directory on the host.

Concept:

It is created when the Pod is allocated to a Node. Its initial content is empty, and there is no need to specify a corresponding directory or file on the host machine, because k8s automatically allocates a directory on the host.

Worth noting:

When the Pod is destroyed, the data in the EmptyDir is permanently deleted as well!

Uses:

  1. Used as a temporary space, such as a temporary directory required by the Web server to write logs or tmp files.
  2. Used as a shared directory between multiple containers (a directory where one container needs to get data from another container)
Hands-on:

Let's take nginx as an example and prepare a resource manifest:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: cbuc-test
  labels:
    app: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    ports:
    - containerPort: 80
    volumeMounts:    # mount nginx-log into the nginx container at /var/log/nginx
    - name: nginx-log
      mountPath: /var/log/nginx
  volumes:    # declare the volume here
  - name: nginx-log
    emptyDir: {}

Then, after we create the pod, we can look at the emptyDir storage volume on the host. By default, a declared volume is located on the host at /var/lib/kubelet/pods/<Pod ID>/volumes/kubernetes.io~<Volume type>/<Volume name>.
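To illustrate use case 2 above (a shared directory between containers), here is a minimal sketch of a pod where a hypothetical busybox sidecar reads the logs that nginx writes into the same emptyDir; the sidecar name, its command, and the busybox image tag are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sidecar
  namespace: cbuc-test
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: nginx-log
      mountPath: /var/log/nginx      # nginx writes its logs here
  - name: log-reader                  # hypothetical sidecar container
    image: busybox:1.30
    command: ["/bin/sh", "-c", "touch /logs/access.log; tail -f /logs/access.log"]
    volumeMounts:
    - name: nginx-log
      mountPath: /logs                # same volume, different mount point
  volumes:
  - name: nginx-log
    emptyDir: {}
```

Both containers see the same directory, so whatever nginx writes under /var/log/nginx immediately appears under /logs in the sidecar.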

2)HostPath

Concept:

HostPath mounts an actual directory on the Node into the pod for the containers to use. The advantage is that even after the pod is destroyed, the data in that directory still exists!

Hands-on:

Let's take nginx as an example and prepare a resource manifest:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: cbuc-test
  labels:
    app: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-log
      mountPath: /var/log/nginx
  volumes:
  - name: nginx-log
    hostPath:           # use host directory /data/nginx/log
      path: /data/nginx/log
      type: DirectoryOrCreate   # creation type

spec.volumes.hostPath.type (creation type):

  • DirectoryOrCreate: if the directory exists it is used; if it does not exist, it is created and then used
  • Directory: the directory must exist
  • FileOrCreate: if the file exists it is used; if it does not exist, it is created and then used
  • File: the file must exist
  • Socket: the unix socket must exist
  • CharDevice: the character device must exist
  • BlockDevice: the block device must exist
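As a small illustration of the File type, a common real-world trick is mounting the host's /etc/localtime into a container so that the container shares the host's timezone; the pod name here is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tz-demo            # hypothetical name
  namespace: cbuc-test
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: host-time
      mountPath: /etc/localtime
  volumes:
  - name: host-time
    hostPath:
      path: /etc/localtime
      type: File           # the file must already exist on the node
```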

We can create a pod from this manifest, access nginx through the podIp, and then check /data/nginx/log on the host: we will find that logs have been generated.

3)NFS

HostPath is a storage method we often use day to day and already covers the basic scenarios. A small summary so far: EmptyDir lives with the pod, so if the pod is destroyed the data is lost. To solve that, HostPath moves the storage to the Node, but if the Node goes down the data is still lost. At that point you need a separate network storage system; the more commonly used ones are NFS and CIFS.

Concept:

NFS is a network storage system. You can set up an NFS server and connect the storage in the Pod directly to the NFS system. In that case, no matter how the pod is moved between nodes, as long as the node can reach the NFS server, the data will be fine.

Hands-on:

Since we need an NFS server, we have to build one ourselves. We choose the master node to install the NFS server:

# install the NFS server
yum install -y nfs-utils
# prepare the shared directory
mkdir -p /data/nfs/nginx
# expose the shared directory with read-write access to all hosts on the 192.168.108.0/24 subnet
vim /etc/exports
# add the following line
/data/nfs/nginx    192.168.108.0/24(rw,no_root_squash)
# start the NFS server
systemctl start nfs

Then we also need to install nfs-utils on every node so that the nodes can mount NFS volumes:

yum install -y nfs-utils

After making the above preparations, we can prepare the resource list file:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: cbuc-test
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-pod
    image: nginx:1.14-alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-log
      mountPath: /var/log/nginx
  volumes:
  - name: nginx-log
    nfs:
      server: 192.168.108.100     # NFS server address, i.e. the master address
      path: /data/nfs/nginx       # shared directory path

After creating the pod, we can go into the /data/nfs/nginx directory on the NFS server and see the two log files.

2. Advanced storage

Managing storage is a distinct problem from managing compute. To abstract the details of how storage is provided from how it is consumed, k8s introduces two API resources: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is a persistent volume: an abstraction over the underlying shared storage. A PV is generally created and configured by the k8s administrator; it is tied to the specific underlying shared storage technology and connects to the shared storage through plug-ins.

A PersistentVolumeClaim (PVC) is a persistent volume claim: a user's declaration of storage needs. In other words, a PVC is a request for storage resources that a user sends to the k8s system.

1)PV

PV is a section of network storage configured by the administrator in the cluster. It is also a resource in the cluster. The resource list template is as follows:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  nfs:                     # storage type, could also be CIFS, GlusterFS, etc.
  capacity:                # storage capacity settings
    storage: 2Gi
  accessModes:             # access modes
  storageClassName:        # storage class
  persistentVolumeReclaimPolicy:    # reclaim policy

These attributes are quite a mouthful and hard to remember, so let's first look at what each one means:

  • storage type

The actual underlying storage type. k8s supports many storage types, and each one is configured differently.

  • storage capacity (capacity)

Currently only storage space can be set; settings for IOPS, throughput, and other indicators may be added in the future.

  • Access Modes (accessModes)

Describes the access permissions of user applications to the storage resource. There are the following access modes:

  1. ReadWriteOnce (RWO): read-write, but the volume can only be mounted by a single node
  2. ReadOnlyMany (ROX): read-only, the volume can be mounted by multiple nodes
  3. ReadWriteMany (RWX): read-write, the volume can be mounted by multiple nodes
  • Storage category

PV can specify a storage class through the storageClassName parameter

  1. PVs with a specific category can only be bound with PVCs that have requested that category
  2. PVs with no category set can only be bound with PVCs that do not request any category
  • Reclaim Policy (persistentVolumeReclaimPolicy)

When a PV is no longer in use, it needs to be handled in some way (different storage types support different strategies). The reclaim policies are:

  1. Retain: keep the data; an administrator must clean it up manually
  2. Recycle: wipe the data in the PV, equivalent in effect to rm -rf
  3. Delete: the back-end storage behind the PV deletes the volume; common with cloud providers' storage services
Life cycle:

The life cycle of a PV may be in 4 different stages:

  • Available: the PV is available and has not been bound by any PVC
  • Bound: the PV has been bound by a PVC
  • Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster
  • Failed: automatic reclamation of the PV failed
Hands-on:

We already set up the NFS storage server earlier, so we will keep using it as the underlying storage here. First we need to create a PV, which corresponds to a path exposed by NFS.

# create the directory
mkdir -p /data/pv1

# expose the path via NFS
vim /etc/exports
/data/pv1   192.168.108.0/24(rw,no_root_squash)

After completing the steps above, we can create a PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
  labels:
    app: pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/pv1
    server: 192.168.108.100

After creation, we can view the PV:

2)PVC

A PVC is a request for resources, used to declare requirements for storage space, access mode, and storage class. The resource manifest template is as follows:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
  namespace: cbuc-test
  labels:
    app: pvc
spec:
  accessModes:          # access modes
  selector:             # select PVs by label
  storageClassName:     # storage class
  resources:            # requested space
    requests:
      storage: 1Gi

We have already seen most of these attributes on the PV side, so here we will only go over them briefly~

  • Access Modes (accessModes)

Used to describe the access permissions of user applications to storage resources

  • Selection criteria (selector)

By setting a Labels Selector, you can filter among the PVs that already exist in the system.

  • Resource category (storageClassName)

When defining a pvc, you can specify the required back-end storage class; only PVs with that class set can be selected by the system.

  • resource request (resources)

Describe requests for storage resources
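Putting the selector and the storage class together, a sketch of a PVC that only matches PVs labeled app: pv and classified as "slow" might look like this (the PVC name and the class name "slow" are assumptions for illustration):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-selective        # hypothetical name
  namespace: cbuc-test
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: slow     # hypothetical class; only PVs with the same class qualify
  selector:
    matchLabels:
      app: pv                # only PVs carrying this label qualify
  resources:
    requests:
      storage: 1Gi
```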

Hands-on:

Prepare a PVC resource manifest:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc01
  namespace: cbuc-test
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

After creating it, we first check whether the PVC was created successfully.

Then we check whether the PV has been bound by the PVC.

3) Actual use

We have successfully created a PV and a PVC above, but we haven't shown how to use them yet. Next, we prepare a pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc
  namespace: cbuc-test
spec:
  containers:
  - name: nginx01
    image: nginx:1.14-alpine
    volumeMounts:
    - name: test-pv
      mountPath: /var/log/nginx
  volumes:
  - name: test-pv
    persistentVolumeClaim:
      claimName: pvc01
      readOnly: false    # nginx needs write access to this volume for its logs

4) Life cycle

Everything has a life cycle, and PV and PVC are no exception. Their life cycle is as follows:

  • Resource Supply

The administrator manually creates the underlying storage and PV

  • Resource binding

The user creates a PVC, and k8s finds and binds a PV according to the PVC's declaration:

  1. If found, it will be successfully bound, and the user's application can use the PVC
  2. If it is not found, the PVC remains in the Pending state indefinitely until the system administrator creates a PV that meets its requirements

Once a PV is bound to a PVC, it is exclusively owned by that PVC and can no longer be bound to other PVCs.

  • Resource usage

In a Pod, users can use the pvc just like a volume.

  • Resource release

The user releases the PV by deleting the PVC. When the storage resource is no longer needed, the user can delete the PVC; the PV bound to it is then marked as released, but it cannot be bound by another PVC immediately. Data written through the previous PVC may still remain on the storage device, and the PV can only be used again after it has been cleaned up.

  • Resource Recovery

k8s reclaims the resource according to the reclaim policy set on the pv.

That covers the life cycle of PV and PVC. Rather than a life cycle, it is really the process of using PV and PVC!

3. Configuration storage

Configuration storage, as the name suggests, is used to store configuration. It includes two kinds of configuration storage: ConfigMap and Secret.

1)ConfigMap

ConfigMap is a special storage volume whose main function is to store configuration information. The resource list template is as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cmp
  namespace: cbuc-test
data:
  info: |
    username:cbuc
    sex:male

The usage is very simple: a ConfigMap has no spec section. You just put the configuration you want under data as key-value pairs; here the key is info and its value is the multi-line content below it.

The ConfigMap can be created with kubectl create -f configMap.yaml

The specific use is as follows, we need to create a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: cbuc-test
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:        # mount the configMap into the container directory
    - name: config
      mountPath: /var/configMap/config
  volumes:
  - name: config
    configMap:
      name: cmp  # the name of the configMap we created above

Then we create the test Pod with the command kubectl create -f pod-cmp.yaml, after which we can view the configuration file inside the pod:
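Besides mounting it as a volume, a ConfigMap key can also be injected into a container as an environment variable; here is a minimal sketch using the cmp ConfigMap above (the pod name and the variable name INFO are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-env         # hypothetical name
  namespace: cbuc-test
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    env:
    - name: INFO             # hypothetical variable name
      valueFrom:
        configMapKeyRef:
          name: cmp          # the ConfigMap created above
          key: info
```

Note that unlike a volume mount, an environment variable injected this way is not updated if the ConfigMap changes after the pod starts.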

2)Secret

In k8s there is another object very similar to ConfigMap, called Secret. It is mainly used to store sensitive information, such as passwords, keys, and certificates.

We first encode the data we want to configure with base64:

# encode the username
[root@master test]# echo -n 'cbuc' | base64
Y2J1Yw==
# encode the password
[root@master test]# echo -n '123456' | base64
MTIzNDU2
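To sanity-check the values, and to underline that base64 is an encoding rather than encryption (anyone can reverse it), you can decode them right back:

```shell
# decode the username back from base64
echo 'Y2J1Yw==' | base64 -d    # prints: cbuc
# decode the password back from base64
echo 'MTIzNDU2' | base64 -d    # prints: 123456
```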

Then prepare the Secret resource manifest:

apiVersion: v1
kind: Secret
metadata:
  name: secret
  namespace: cbuc-test
type: Opaque    # indicates a Secret in base64-encoded format
data:
  username: Y2J1Yw==
  password: MTIzNDU2

Create the Secret with the command kubectl create -f secret.yaml, and then prepare a Pod resource manifest:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret
  namespace: cbuc-test
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: config
      mountPath: /var/secret/config
  volumes:
  - name: config
    secret:
      secretName: secret

After creation, we can enter the pod and view the configuration files; we will find that the information in them has been decoded.

END

In this article we introduced data storage in k8s. It was a relatively short one; if you haven't had enough, see you in the next article (Service and Ingress)! The road is long; Xiao Cai will keep exploring it with you~

If you read this and don't give it a like, you're a baddie!

If you work harder today, you will have to say fewer begging words tomorrow!

I am Xiaocai, a man who studies with you. 💋


