Author: Yin Min
Rook introduction
Rook is an open-source, cloud-native storage orchestrator that provides the platform, framework, and support for a variety of storage solutions to integrate natively with cloud-native environments.
Rook turns distributed storage systems into self-managing, self-scaling, and self-healing storage services. It automates the tasks a storage administrator would otherwise handle: deployment, bootstrapping, configuration, provisioning, scaling, upgrades, migration, disaster recovery, monitoring, and resource management.
In short, Rook is a set of Kubernetes Operators that take full control of the deployment, management, and automatic recovery of multiple storage solutions such as Ceph, EdgeFS, MinIO, and Cassandra.
So far, the most stable storage backend supported by Rook is still Ceph. This article describes how to use Rook to create and maintain a Ceph cluster as persistent storage for Kubernetes.
Environment preparation
The Kubernetes environment was set up by installing KubeSphere, using a high-availability deployment.
For installing KubeSphere on a public cloud, see the reference document: Multi-node Installation.
⚠️ Note: kube-node5, kube-node6, and kube-node7 each have two additional data disks.
$ kubectl get node
NAME STATUS ROLES AGE VERSION
kube-master1 Ready master 118d v1.17.9
kube-master2 Ready master 118d v1.17.9
kube-master3 Ready master 118d v1.17.9
kube-node1 Ready worker 118d v1.17.9
kube-node2 Ready worker 118d v1.17.9
kube-node3 Ready worker 111d v1.17.9
kube-node4 Ready worker 111d v1.17.9
kube-node5 Ready worker 11d v1.17.9
kube-node6 Ready worker 11d v1.17.9
kube-node7 Ready worker 11d v1.17.9
Before installing, make sure lvm2 is installed on every node, otherwise the deployment will report an error.
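For example (a minimal sketch; adjust the package manager for your distribution):
$ sudo yum install -y lvm2       # CentOS / RHEL
$ sudo apt-get install -y lvm2   # Ubuntu / Debian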
Deploy Rook and the Ceph cluster
1. Clone the Rook repository locally
$ git clone -b release-1.4 https://github.com/rook/rook.git
2. Change to the Ceph examples directory
$ cd /root/ceph/rook/cluster/examples/kubernetes/ceph
3. Deploy Rook and create the CRD resources
$ kubectl create -f common.yaml -f operator.yaml
# Notes:
# 1. common.yaml mainly contains the RBAC permissions and the CRD resource definitions
# 2. operator.yaml is the Deployment of rook-ceph-operator
4. Create a Ceph cluster
$ kubectl create -f cluster.yaml
# Important note:
# No customization is done in this demo. By default, the Ceph cluster dynamically detects brand-new,
# unformatted idle disks on the worker nodes and automatically initializes OSDs on them
# (at least 3 nodes are required, each with at least one free disk). See the excerpt below.
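The automatic disk discovery mentioned in the note is driven by the storage section of cluster.yaml. A minimal excerpt of the relevant fields (a sketch; your manifest may set these differently):
storage:
  useAllNodes: true     # consider every worker node for OSDs
  useAllDevices: true   # use any empty, unformatted device found on those nodes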
5. Check pod status
$ kubectl get pod -n rook-ceph -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-cephfsplugin-5fw92 3/3 Running 6 12d 192.168.0.31 kube-node7 <none> <none>
csi-cephfsplugin-78plf 3/3 Running 0 12d 192.168.0.134 kube-node1 <none> <none>
csi-cephfsplugin-bkdl8 3/3 Running 3 12d 192.168.0.195 kube-node5 <none> <none>
csi-cephfsplugin-provisioner-77f457bcb9-6w4cv 6/6 Running 0 12d 10.233.77.95 kube-node4 <none> <none>
csi-cephfsplugin-provisioner-77f457bcb9-q7vxh 6/6 Running 0 12d 10.233.76.156 kube-node3 <none> <none>
csi-cephfsplugin-rqb4d 3/3 Running 0 12d 192.168.0.183 kube-node4 <none> <none>
csi-cephfsplugin-vmrfj 3/3 Running 0 12d 192.168.0.91 kube-node3 <none> <none>
csi-cephfsplugin-wglsw 3/3 Running 3 12d 192.168.0.116 kube-node6 <none> <none>
csi-rbdplugin-4m8hv 3/3 Running 0 12d 192.168.0.91 kube-node3 <none> <none>
csi-rbdplugin-7wt45 3/3 Running 3 12d 192.168.0.195 kube-node5 <none> <none>
csi-rbdplugin-bn5pn 3/3 Running 3 12d 192.168.0.116 kube-node6 <none> <none>
csi-rbdplugin-hwl4b 3/3 Running 6 12d 192.168.0.31 kube-node7 <none> <none>
csi-rbdplugin-provisioner-7897f5855-7m95p 6/6 Running 0 12d 10.233.77.94 kube-node4 <none> <none>
csi-rbdplugin-provisioner-7897f5855-btwt5 6/6 Running 0 12d 10.233.76.155 kube-node3 <none> <none>
csi-rbdplugin-qvksp 3/3 Running 0 12d 192.168.0.183 kube-node4 <none> <none>
csi-rbdplugin-rr296 3/3 Running 0 12d 192.168.0.134 kube-node1 <none> <none>
rook-ceph-crashcollector-kube-node1-64cf6f49fb-bx8lz 1/1 Running 0 12d 10.233.101.46 kube-node1 <none> <none>
rook-ceph-crashcollector-kube-node3-575b75dc64-gxwtp 1/1 Running 0 12d 10.233.76.149 kube-node3 <none> <none>
rook-ceph-crashcollector-kube-node4-78549d6d7f-9zz5q 1/1 Running 0 8d 10.233.77.226 kube-node4 <none> <none>
rook-ceph-crashcollector-kube-node5-5db8557476-b8zp6 1/1 Running 1 11d 10.233.81.239 kube-node5 <none> <none>
rook-ceph-crashcollector-kube-node6-78b7946769-8qh45 1/1 Running 0 8d 10.233.66.252 kube-node6 <none> <none>
rook-ceph-crashcollector-kube-node7-78c97898fd-k85l4 1/1 Running 1 8d 10.233.111.33 kube-node7 <none> <none>
rook-ceph-mds-myfs-a-86bdb684b6-4pbj7 1/1 Running 0 8d 10.233.77.225 kube-node4 <none> <none>
rook-ceph-mds-myfs-b-6697d66b7d-jgnkw 1/1 Running 0 8d 10.233.66.250 kube-node6 <none> <none>
rook-ceph-mgr-a-658db99d5b-jbrzh 1/1 Running 0 12d 10.233.76.162 kube-node3 <none> <none>
rook-ceph-mon-a-5cbf5947d8-vvfgf 1/1 Running 1 12d 10.233.101.44 kube-node1 <none> <none>
rook-ceph-mon-b-6495c96d9d-b82st 1/1 Running 0 12d 10.233.76.144 kube-node3 <none> <none>
rook-ceph-mon-d-dc4c6f4f9-rdfpg 1/1 Running 1 12d 10.233.66.219 kube-node6 <none> <none>
rook-ceph-operator-56fc54bb77-9rswg 1/1 Running 0 12d 10.233.76.138 kube-node3 <none> <none>
rook-ceph-osd-0-777979f6b4-jxtg9 1/1 Running 1 11d 10.233.81.237 kube-node5 <none> <none>
rook-ceph-osd-10-589487764d-8bmpd 1/1 Running 0 8d 10.233.111.59 kube-node7 <none> <none>
rook-ceph-osd-11-5b7dd4c7bc-m4nqz 1/1 Running 0 8d 10.233.111.60 kube-node7 <none> <none>
rook-ceph-osd-2-54cbf4d9d8-qn4z7 1/1 Running 1 10d 10.233.66.222 kube-node6 <none> <none>
rook-ceph-osd-6-c94cd566-ndgzd 1/1 Running 1 10d 10.233.81.238 kube-node5 <none> <none>
rook-ceph-osd-7-d8cdc94fd-v2lm8 1/1 Running 0 9d 10.233.66.223 kube-node6 <none> <none>
rook-ceph-osd-prepare-kube-node1-4bdch 0/1 Completed 0 66m 10.233.101.91 kube-node1 <none> <none>
rook-ceph-osd-prepare-kube-node3-bg4wk 0/1 Completed 0 66m 10.233.76.252 kube-node3 <none> <none>
rook-ceph-osd-prepare-kube-node4-r9dk4 0/1 Completed 0 66m 10.233.77.107 kube-node4 <none> <none>
rook-ceph-osd-prepare-kube-node5-rbvcn 0/1 Completed 0 66m 10.233.81.73 kube-node5 <none> <none>
rook-ceph-osd-prepare-kube-node5-rcngg 0/1 Completed 5 10d 10.233.81.98 kube-node5 <none> <none>
rook-ceph-osd-prepare-kube-node6-jc8cm 0/1 Completed 0 66m 10.233.66.109 kube-node6 <none> <none>
rook-ceph-osd-prepare-kube-node6-qsxrp 0/1 Completed 0 11d 10.233.66.109 kube-node6 <none> <none>
rook-ceph-osd-prepare-kube-node7-5c52p 0/1 Completed 5 8d 10.233.111.58 kube-node7 <none> <none>
rook-ceph-osd-prepare-kube-node7-h5d6c 0/1 Completed 0 66m 10.233.111.110 kube-node7 <none> <none>
rook-ceph-osd-prepare-kube-node7-tzvp5 0/1 Completed 0 11d 10.233.111.102 kube-node7 <none> <none>
rook-ceph-osd-prepare-kube-node7-wd6dt 0/1 Completed 7 8d 10.233.111.56 kube-node7 <none> <none>
rook-ceph-tools-64fc489556-5clvj 1/1 Running 0 12d 10.233.77.118 kube-node4 <none> <none>
rook-discover-6kbvg 1/1 Running 0 12d 10.233.101.42 kube-node1 <none> <none>
rook-discover-7dr44 1/1 Running 2 12d 10.233.66.220 kube-node6 <none> <none>
rook-discover-dqr82 1/1 Running 0 12d 10.233.77.74 kube-node4 <none> <none>
rook-discover-gqppp 1/1 Running 0 12d 10.233.76.139 kube-node3 <none> <none>
rook-discover-hdkxf 1/1 Running 1 12d 10.233.81.236 kube-node5 <none> <none>
rook-discover-pzhsw 1/1 Running 3 12d 10.233.111.36 kube-node7 <none> <none>
The above shows the state after all component pods have started. The pods whose names begin with rook-ceph-osd-prepare automatically detect newly attached disks in the cluster: as soon as a new disk is attached to a node, OSD provisioning is triggered automatically.
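If a newly attached disk is not picked up, the logs of the corresponding osd-prepare pod usually explain why; for example, using one of the pod names from the listing above:
$ kubectl -n rook-ceph logs rook-ceph-osd-prepare-kube-node5-rbvcn --tail=50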
6. Configure the Ceph cluster dashboard
The Ceph Dashboard is a built-in, web-based management and monitoring application that ships with the open-source Ceph distribution. It exposes the basic status information of the Ceph cluster.
The dashboard is enabled by default, but it is only exposed through a ClusterIP Service and cannot be reached from outside the cluster, so a NodePort Service needs to be created:
$ kubectl apply -f dashboard-external-http.yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph # namespace:cluster
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph # namespace:cluster
spec:
  ports:
    - name: dashboard
      port: 7000
      protocol: TCP
      targetPort: 7000
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort
Note: 8443 is the HTTPS port and requires a certificate to be configured; this tutorial only exposes the HTTP port 7000.
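If you prefer HTTPS access on port 8443 instead, the Rook examples directory also ships a dashboard-external-https.yaml (assuming the release-1.4 layout), which is applied the same way:
$ kubectl apply -f dashboard-external-https.yaml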
7. View svc status
$ kubectl get svc -n rook-ceph
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
csi-cephfsplugin-metrics ClusterIP 10.233.3.172 <none> 8080/TCP,8081/TCP 12d
csi-rbdplugin-metrics ClusterIP 10.233.43.23 <none> 8080/TCP,8081/TCP 12d
rook-ceph-mgr ClusterIP 10.233.63.85 <none> 9283/TCP 12d
rook-ceph-mgr-dashboard ClusterIP 10.233.20.159 <none> 7000/TCP 12d
rook-ceph-mgr-dashboard-external-https NodePort 10.233.56.73 <none> 7000:31357/TCP 12d
rook-ceph-mon-a ClusterIP 10.233.30.222 <none> 6789/TCP,3300/TCP 12d
rook-ceph-mon-b ClusterIP 10.233.55.25 <none> 6789/TCP,3300/TCP 12d
rook-ceph-mon-d ClusterIP 10.233.0.206 <none> 6789/TCP,3300/TCP 12d
8. Verify access to dashboard
Expose the service externally through the KubeSphere platform (or use the NodePort directly).
Access method:
http://{master1-ip}:31357
The default username is admin; the password can be obtained with:
$ kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}"|base64 --decode && echo
Note: if the dashboard shows a HEALTH_WARN warning, you can find the specific cause in the logs; typical causes are OSDs being down, too few PGs, and so on.
9. Deploy the Rook toolbox
The Rook toolbox is a container that ships the common tools used for debugging and testing Rook.
$ kubectl apply -f toolbox.yaml
Enter the toolbox to view the Ceph cluster status
$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- bash
$ ceph -s
  cluster:
    id:     1457045a-4926-411f-8be8-c7a958351a38
    health: HEALTH_WARN
            mon a is low on available space
            2 osds down
            Degraded data redundancy: 25/159 objects degraded (15.723%), 16 pgs degraded, 51 pgs undersized
            3 daemons have recently crashed

  services:
    mon: 3 daemons, quorum a,b,d (age 9d)
    mgr: a(active, since 4h)
    mds: myfs:1 {0=myfs-b=up:active} 1 up:standby-replay
    osd: 12 osds: 6 up (since 8d), 8 in (since 8d); 9 remapped pgs

  data:
    pools:   5 pools, 129 pgs
    objects: 53 objects, 37 MiB
    usage:   6.8 GiB used, 293 GiB / 300 GiB avail
    pgs:     25/159 objects degraded (15.723%)
             5/159 objects misplaced (3.145%)
             69 active+clean
             35 active+undersized
             16 active+undersized+degraded
             9  active+clean+remapped
Commonly used query commands in the toolbox:
ceph status
ceph osd status
ceph df
rados df
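For pinning down the exact cause of the HEALTH_WARN shown earlier, two more commands are useful inside the toolbox: ceph health detail lists each warning with its reason, and ceph osd tree shows which OSDs are down and on which host.
ceph health detail
ceph osd tree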
Deploy StorageClass
1. Introduction to RBD
Ceph can simultaneously provide object storage (RADOSGW), block storage (RBD), and file system storage (CephFS). RBD stands for RADOS Block Device; it is the most stable and most commonly used storage type. RBD block devices can be mounted like disks and support snapshots, multiple replicas, clones, and consistency, with data striped across multiple OSDs in the Ceph cluster.
2. Create StorageClass
[root@kube-master1 rbd]# kubectl apply -f storageclass.yaml
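For reference, storageclass.yaml lives under csi/rbd in the same examples directory and defines a replicated block pool plus the StorageClass that the PVC below refers to. A trimmed sketch (only the key fields, taken from the release-1.4 example; values may differ in your copy, so apply the file from the repo rather than this excerpt):
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host        # spread replicas across hosts
  replicated:
    size: 3                  # keep three copies of every object
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com   # the Ceph CSI RBD driver deployed by the operator
parameters:
  clusterID: rook-ceph       # namespace of the Rook cluster
  pool: replicapool          # the CephBlockPool defined above
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  # ...plus the csi.storage.k8s.io/*-secret-name parameters shipped in the example file
reclaimPolicy: Delete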
3. View StorageClass deployment status
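A quick way to confirm that the pool and StorageClass were created (a minimal check):
$ kubectl -n rook-ceph get cephblockpool
$ kubectl get storageclass rook-ceph-block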
4. Create pvc
$ kubectl apply -f pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: rook-ceph-block
5. Create a pod with pvc
$ kubectl apply -f pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: csirbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false
6. View pod, pvc, pv status
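For example (the resource names come from the manifests above):
$ kubectl get pod csirbd-demo-pod
$ kubectl get pvc rbd-pvc
$ kubectl get pv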
Summary
For anyone deploying Rook + Ceph for the first time, there is a lot to learn and quite a few pitfalls. I hope the deployment notes above are helpful. Some questions I ran into:
1. The Ceph cluster keeps reporting that there is no OSD disk
Answer: I have run into several causes. Check whether the attached data disk has been used before; even if it has been reformatted, old RAID or partition metadata may still remain on it. You can use the following script to wipe the disk before mounting it.
#!/usr/bin/env bash
DISK="/dev/vdc" # change to your own disk device as needed
# Zap the disk to a fresh, usable state (zap-all is important, b/c MBR has to be clean)
# You will have to run this step for all disks.
sgdisk --zap-all $DISK
# Clean hdds with dd
dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync
# Clean disks such as ssd with blkdiscard instead of dd
blkdiscard $DISK
# These steps only have to be run once on each node
# If rook sets up osds using ceph-volume, teardown leaves some devices mapped that lock the disks.
ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
# ceph-volume setup can leave ceph-<UUID> directories in /dev and /dev/mapper (unnecessary clutter)
rm -rf /dev/ceph-*
rm -rf /dev/mapper/ceph--*
# Inform the OS of partition table changes
partprobe $DISK
2. What storage types does Ceph support?
Answer: RBD block storage, CephFS file storage, S3-compatible object storage (RGW), etc.
3. How do I troubleshoot problems during deployment?
Answer: It is strongly recommended to consult the official Rook and Ceph documentation when troubleshooting.
4. Unable to access the dashboard
Answer: If the cluster was built with KubeSphere or Kubernetes on a public cloud, make sure the NodePort is opened in the security group.