1. Introduction
To make it easier to learn and practice container orchestration, this article documents how to build a near-production-grade Kubernetes cluster on local virtual machines. On top of this k8s cluster we can then try containerizing the middleware clusters and microservice applications that come up in work or study.
2. Preparing Resources for the Cluster
For learning purposes we will build a cluster with one control-plane node and two worker nodes. To save resources, Pods must also be schedulable onto the control-plane node.
2.1 Machine Preparation
VM | Node role | System spec | Installed components |
---|---|---|---|
k8s-master01 | control-plane node | CentOS 7, 2 CPU / 6 GB RAM | kube-apiserver, kube-scheduler, kube-controller-manager, etcd, kube-proxy, kubeadm, kubelet, kubectl, docker |
k8s-worker01 | worker node | CentOS 7, 2 CPU / 6 GB RAM | kubeadm, kubelet, kubectl, kube-proxy, docker |
k8s-worker02 | worker node | CentOS 7, 2 CPU / 6 GB RAM | kubeadm, kubelet, kubectl, kube-proxy, docker |
2.2 Component Overview
Component | Type | Deployment method | Description |
---|---|---|---|
kube-apiserver | control-plane component | containerized by kubeadm | The API server exposes the Kubernetes API and is the front end of the Kubernetes control plane |
kube-scheduler | control-plane component | containerized by kubeadm | Watches for newly created Pods with no assigned node and selects a node for them to run on |
kube-controller-manager | control-plane component | containerized by kubeadm | Logically each controller is a separate process, but to reduce complexity they are compiled into a single binary and run in a single process |
etcd | control-plane component | containerized by kubeadm | A consistent and highly available key-value store used as the backing store for all Kubernetes cluster data (resource objects, etc.) |
kubelet | node component | installed via yum, cannot run as a container | The kubelet takes a set of PodSpecs provided through various mechanisms and ensures the containers described in them are running and healthy; it does not manage containers that were not created by Kubernetes |
kube-proxy | node component | containerized by kubeadm | A network proxy running on every node that implements part of the Kubernetes Service concept; it maintains network rules on the node that allow network sessions inside or outside the cluster to reach Pods |
docker | node component | installed via yum | The Docker engine, one of the container runtimes supported by k8s |
kubeadm | cluster bootstrap tool | installed via yum | The official k8s cluster deployment tool |
kubectl | client tool | installed via yum | The command-line tool used to talk to the cluster |
2.3 Prerequisite Checklist
Whether you use local physical machines or virtual machines in a public cloud, make sure the following requirements are met:
- The machines satisfy the requirements for installing Docker, e.g. a 64-bit Linux OS with kernel 3.10 or later
- Either x86 or ARM architecture works
- The machines can reach each other over the network; this is a prerequisite for the container network
- Outbound internet access is available for pulling images (or the images have been downloaded manually in advance)
- No two nodes share the same hostname, MAC address, or product_uuid
- The required ports are open; see the port and protocol list for k8s components on the official Kubernetes website
- Swap is disabled, so that the kubelet can work properly
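A quick way to spot-check several of these requirements on each node, as a minimal sketch using standard Linux tools:
# kernel version (should be 3.10 or later) and CPU architecture
uname -r
uname -m
# check whether any swap is currently active (ideally no output / 0B used)
swapon --show
free -h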
2.4 Deployment Goals
- Install Docker and kubeadm on all nodes
- Deploy the Kubernetes control plane (Master)
- Deploy a container network plugin
- Deploy the Kubernetes worker nodes
- Deploy the Dashboard visualization add-on
- Deploy a container storage plugin
3. Building the Cluster
3.1 Install the Docker Engine
Kubernetes talks to container runtimes through the CRI interface; here we use the Docker engine as the runtime. Since the cluster machines run CentOS 7, we install the Docker engine from a yum repository.
Note: configure the Docker daemon to use systemd to manage the containers' cgroups, so that Docker and the kubelet use the same cgroup driver.
[root@vm-k8s-master ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://hub-mirror.c.163.com","https://reg-mirror.qiniu.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
Notes on the settings (JSON does not allow comments, so they are listed here instead):
- registry-mirrors: registry mirrors that speed up image pulls
- exec-opts: sets the cgroup driver to systemd
- storage-driver: on Linux kernel 4.0 or later, or RHEL/CentOS with kernel 3.10.0-514 and above, overlay2 is the preferred storage driver
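For reference, a minimal sketch of installing the Docker engine from the official yum repository and applying the daemon.json above (assuming the node has internet access; a mirror repository can be substituted if it does not):
# add the Docker CE yum repository and install the engine
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
# start Docker on boot, then restart it so the daemon.json settings take effect
sudo systemctl enable --now docker
sudo systemctl restart docker
# confirm the cgroup driver is systemd
docker info | grep -i cgroup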
3.2 Install kubeadm
3.2.1 Configure Hostnames
No two k8s cluster nodes may share the same hostname, MAC address, or product_uuid.
# set the hostname
hostnamectl set-hostname master
# check the hostname and make sure no two cluster nodes use the same one
cat /etc/hostname
# check the product_uuid
sudo cat /sys/class/dmi/id/product_uuid
# use ip link or ifconfig -a to check the MAC addresses of the network interfaces
ifconfig -a
3.2.2 Add Host Entries for the Nodes
cat >> /etc/hosts << EOF
192.168.31.254 k8s.master.com
192.168.31.254 k8s.cluster-endpoint
192.168.31.58 k8s.worker01.com
192.168.31.20 k8s.worker02.com
EOF
Note: these are the IP-to-name mappings for the three k8s nodes.
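To confirm on every node that the entries resolve as expected, a quick check (sketch):
# each name should resolve to the IP configured in /etc/hosts
getent hosts k8s.cluster-endpoint k8s.worker01.com k8s.worker02.com
ping -c 2 k8s.cluster-endpoint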
3.2.3 Let iptables See Bridged Traffic
- Make sure the br_netfilter module is loaded
# check whether the br_netfilter module is loaded
lsmod | grep br_netfilter
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
# if br_netfilter is not loaded, load it explicitly
sudo modprobe br_netfilter
- Make sure iptables can see bridged traffic
For iptables on your Linux nodes to correctly see bridged traffic, net.bridge.bridge-nf-call-iptables must be set to 1 in your sysctl configuration.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
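To verify that the module and the sysctl settings are actually in effect (sketch):
# both values should be reported as 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
lsmod | grep br_netfilter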
3.2.4 Disable the Swap Partition
# turn off all swap devices
swapoff -a
# comment out the swap line in /etc/fstab
vim /etc/fstab
# reboot the machine
reboot
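After the reboot, confirm that swap is really off (sketch):
# swapon should print nothing, and free should show 0B of swap
swapon --show
free -h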
3.2.5 Configure the Kubernetes yum Repository
Configure the official Kubernetes yum repository (if your network allows it):
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
Or configure the Kubernetes yum repository from the Aliyun mirror:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
3.2.6 Install kubeadm, kubelet and kubectl from the yum Repository
# set SELinux to permissive mode (effectively disabling it)
# running setenforce 0 and the sed command below puts SELinux in permissive mode; this is required to let containers access the host filesystem, which is needed, for example, for Pod networking to work properly
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# disable the firewall; this is not strictly required, but otherwise you must maintain rules for all the ports the k8s components use, so for a lab environment we simply turn it off
systemctl stop firewalld
systemctl disable firewalld
# install kubeadm, kubelet and kubectl from the yum repository
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# list the available kubelet versions, so you can pick one for a version-pinned install in the next step
yum list kubelet --showduplicates | sort -r
# install a specific version
sudo yum install -y kubelet-<version> kubeadm-<version> kubectl-<version> --disableexcludes=kubernetes
# enable kubelet to start on boot
sudo systemctl enable --now kubelet
# install bash-completion
yum install -y bash-completion
# enable kubectl command completion on login
echo "source <(kubectl completion bash)" >> ~/.bash_profile
source ~/.bash_profile
- During the kubeadm installation above, the kubeadm, kubelet, kubectl and kubernetes-cni binaries are all installed automatically, as the transaction log below shows.
==================================================================================================================================================================================================================
Package 架构 版本 源 大小
==================================================================================================================================================================================================================
正在安装:
kubeadm x86_64 1.23.5-0 kubernetes 9.0 M
kubectl x86_64 1.23.5-0 kubernetes 9.5 M
kubelet x86_64 1.23.5-0 kubernetes 21 M
为依赖而安装:
conntrack-tools x86_64 1.4.4-7.el7 base 187 k
cri-tools x86_64 1.23.0-0 kubernetes 7.1 M
kubernetes-cni x86_64 0.8.7-0 kubernetes 19 M
libnetfilter_cthelper x86_64 1.0.0-11.el7 base 18 k
libnetfilter_cttimeout x86_64 1.0.0-7.el7 base 18 k
libnetfilter_queue x86_64 1.0.2-2.el7_2 base 23 k
socat x86_64 1.7.3.2-2.el7 base 290 k
事务概要
==================================================================================================================================================================================================================
安装 3 软件包 (+7 依赖软件包)
总下载量:65 M
安装大小:297 M
Downloading packages:
(1/10): conntrack-tools-1.4.4-7.el7.x86_64.rpm | 187 kB 00:00:00
(2/10): 4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64.rpm | 7.1 MB 00:00:00
(3/10): ab0e12925be5251baf5dd3b31493663d46e4a7b458c7a5b6b717f4ae87a81bd4-kubeadm-1.23.5-0.x86_64.rpm | 9.0 MB 00:00:00
(4/10): 96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64.rpm | 9.5 MB 00:00:00
(5/10): libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm | 18 kB 00:00:00
(6/10): socat-1.7.3.2-2.el7.x86_64.rpm | 290 kB 00:00:00
(7/10): libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm | 18 kB 00:00:00
(8/10): d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64.rpm | 21 MB 00:00:01
(9/10): db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm | 19 MB 00:00:01
(10/10): libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm | 23 kB 00:00:01
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
总计 26 MB/s | 65 MB 00:00:02
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
正在安装 : libnetfilter_cthelper-1.0.0-11.el7.x86_64 1/10
正在安装 : socat-1.7.3.2-2.el7.x86_64 2/10
正在安装 : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 3/10
正在安装 : cri-tools-1.23.0-0.x86_64 4/10
正在安装 : libnetfilter_queue-1.0.2-2.el7_2.x86_64 5/10
正在安装 : conntrack-tools-1.4.4-7.el7.x86_64 6/10
正在安装 : kubernetes-cni-0.8.7-0.x86_64 7/10
正在安装 : kubelet-1.23.5-0.x86_64 8/10
正在安装 : kubectl-1.23.5-0.x86_64 9/10
正在安装 : kubeadm-1.23.5-0.x86_64 10/10
验证中 : conntrack-tools-1.4.4-7.el7.x86_64 1/10
验证中 : kubernetes-cni-0.8.7-0.x86_64 2/10
验证中 : kubectl-1.23.5-0.x86_64 3/10
验证中 : kubeadm-1.23.5-0.x86_64 4/10
验证中 : libnetfilter_queue-1.0.2-2.el7_2.x86_64 5/10
验证中 : cri-tools-1.23.0-0.x86_64 6/10
验证中 : kubelet-1.23.5-0.x86_64 7/10
验证中 : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 8/10
验证中 : socat-1.7.3.2-2.el7.x86_64 9/10
验证中 : libnetfilter_cthelper-1.0.0-11.el7.x86_64 10/10
已安装:
kubeadm.x86_64 0:1.23.5-0 kubectl.x86_64 0:1.23.5-0 kubelet.x86_64 0:1.23.5-0
作为依赖被安装:
conntrack-tools.x86_64 0:1.4.4-7.el7 cri-tools.x86_64 0:1.23.0-0 kubernetes-cni.x86_64 0:0.8.7-0 libnetfilter_cthelper.x86_64 0:1.0.0-11.el7 libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7
libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 socat.x86_64 0:1.7.3.2-2.el7
完毕!
Note: kubelet will now restart every few seconds, because it is stuck in a crash loop waiting for instructions from kubeadm.
Note: if signature verification fails with the error below, disable the checks by setting gpgcheck=0 and repo_gpgcheck=0.
https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for kubernetes
3.3 Deploy the Kubernetes Control-Plane (Master) Node
3.3.1 List the Images kubeadm Needs to Pull
[root@vm-k8s-master ~]# kubeadm config images list --kubernetes-version=1.23.5
k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
Note: this reflects the Kubernetes "everything is a container" philosophy: the core control-plane components all run as Pods. By default kubeadm pulls images from the k8s.gcr.io registry (if the requested Kubernetes version is a CI label such as ci/latest, gcr.io/k8s-staging-ci-images is used instead). If your network cannot reach it, you can download all of the images above manually, or point kubeadm at a custom image repository.
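If you want to pull the images ahead of time rather than during kubeadm init (for example to verify registry access first), kubeadm can perform the pull explicitly; a sketch using the same Aliyun mirror that the next step uses:
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.5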
3.3.2 Initialize the Control Plane (Master Node) with a Custom Image Repository
kubeadm init \
--apiserver-advertise-address=192.168.31.254 \
--control-plane-endpoint=k8s.cluster-endpoint \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.5 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
- apiserver-advertise-address: the IP address the API server advertises it is listening on; if unset, the default network interface is used
- control-plane-endpoint: a stable IP address or DNS name for the control plane
- image-repository: the container registry to pull control-plane images from; default: "k8s.gcr.io"
- kubernetes-version: the specific Kubernetes version for the control plane; default: "stable-1"
- service-cidr: an alternative IP address range for Service virtual IPs; default: "10.96.0.0/12"
- pod-network-cidr: the IP address range the Pod network may use; if set, the control plane automatically allocates CIDRs to every node
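If the initialization fails partway (for example because an image cannot be pulled), the node can be wiped and initialized again; a rough sketch:
# undo the changes made by kubeadm init, then re-run kubeadm init
sudo kubeadm reset -f
# optionally clean up leftover CNI configuration and any stale kubeconfig
sudo rm -rf /etc/cni/net.d $HOME/.kube/config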
3.3.2.1 Control-Plane Initialization Log
[root@vm-k8s-master k8s-init]# kubeadm init \
> --apiserver-advertise-address=192.168.31.254 \
> --control-plane-endpoint=k8s.cluster-endpoint \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.23.5 \
> --service-cidr=10.1.0.0/16 \
> --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s.cluster-endpoint kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vm-k8s-master] and IPs [10.1.0.1 192.168.31.254]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost vm-k8s-master] and IPs [192.168.31.254 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost vm-k8s-master] and IPs [192.168.31.254 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.002185 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node vm-k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node vm-k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: vz7g8m.fhgc8sby6tm6mi25
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join k8s.cluster-endpoint:6443 --token vz7g8m.fhgc8sby6tm6mi25 \
--discovery-token-ca-cert-hash sha256:e368eb6bdd202ff442cef277b97f12fe374d63071ff9dc277add24301609204d \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s.cluster-endpoint:6443 --token vz7g8m.fhgc8sby6tm6mi25 \
--discovery-token-ca-cert-hash sha256:e368eb6bdd202ff442cef277b97f12fe374d63071ff9dc277add24301609204d
Note: two pieces of information in the log above are particularly important:
- The kubeadm join command for adding nodes to the cluster (the variant that includes --control-plane joins an additional control-plane node; without it, the command joins a worker node)
kubeadm join k8s.cluster-endpoint:6443 --token vz7g8m.fhgc8sby6tm6mi25 \
	--discovery-token-ca-cert-hash sha256:e368eb6bdd202ff442cef277b97f12fe374d63071ff9dc277add24301609204d
- The cluster configuration that the kubectl client depends on
# a regular user copies the cluster config into their home directory, where kubectl reads it by default
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# as root, for simplicity, just set an environment variable
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
Note: access to a Kubernetes cluster is encrypted and authenticated by default. These commands save the security configuration generated during deployment into the current user's .kube directory; kubectl uses the credentials in this directory to access the cluster.
3.3.3 Install a Network Plugin for the Control Plane
After initializing the control plane, the first thing to check is the node status; you will find that the Master node is NotReady.
[root@vm-k8s-master k8s-init]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vm-k8s-master NotReady control-plane,master 24m v1.23.5
Looking further at the output of kubectl describe, we can see that the node is NotReady because no network plugin has been deployed yet.
[root@vm-k8s-master k8s-init]# kubectl describe node vm-k8s-master
Name: vm-k8s-master
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=vm-k8s-master
kubernetes.io/os=linux
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 26 Mar 2022 16:48:43 +0800
Taints: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: vm-k8s-master
AcquireTime: <unset>
RenewTime: Sat, 26 Mar 2022 17:18:00 +0800
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 26 Mar 2022 17:14:24 +0800 Sat, 26 Mar 2022 16:48:40 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 26 Mar 2022 17:14:24 +0800 Sat, 26 Mar 2022 16:48:40 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 26 Mar 2022 17:14:24 +0800 Sat, 26 Mar 2022 16:48:40 +0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Sat, 26 Mar 2022 17:14:24 +0800 Sat, 26 Mar 2022 16:48:40 +0800 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 192.168.31.254
Hostname: vm-k8s-master
Capacity:
cpu: 2
ephemeral-storage: 39617640Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 5925672Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 36511616964
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 5823272Ki
pods: 110
System Info:
Machine ID: 5bf19e1a8ca94a5c987c497fd9d169f9
System UUID: 4A364D56-F429-F892-581C-C7E2B013277E
Boot ID: 8a40b366-2645-4ee1-bf40-90e60b23a2df
Kernel Version: 3.10.0-1160.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.13
Kubelet Version: v1.23.5
Kube-Proxy Version: v1.23.5
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system etcd-vm-k8s-master 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 29m
kube-system kube-apiserver-vm-k8s-master 250m (12%) 0 (0%) 0 (0%) 0 (0%) 29m
kube-system kube-controller-manager-vm-k8s-master 200m (10%) 0 (0%) 0 (0%) 0 (0%) 29m
kube-system kube-proxy-dvrkb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29m
kube-system kube-scheduler-vm-k8s-master 100m (5%) 0 (0%) 0 (0%) 0 (0%) 29m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 650m (32%) 0 (0%)
memory 100Mi (1%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 28m kube-proxy
Normal Starting 29m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 29m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 29m kubelet Node vm-k8s-master status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 29m kubelet Node vm-k8s-master status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 29m kubelet Node vm-k8s-master status is now: NodeHasSufficientPID
We can also see that the CoreDNS Pods, which depend on the network, are stuck in Pending, i.e. scheduling fails. This is expected, because the Master node's network is not ready yet.
[root@vm-k8s-master k8s-init]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d8c4cb4d-6ldld 0/1 Pending 0 31m
coredns-6d8c4cb4d-8z59l 0/1 Pending 0 31m
etcd-vm-k8s-master 1/1 Running 0 32m
kube-apiserver-vm-k8s-master 1/1 Running 0 32m
kube-controller-manager-vm-k8s-master 1/1 Running 0 32m
kube-proxy-dvrkb 1/1 Running 0 31m
kube-scheduler-vm-k8s-master 1/1 Running 0 32m
3.3.3.1 Install the Flannel Network Plugin
In line with the Kubernetes "everything is a container" design, the network plugin itself is also deployed as Pods.
# if your network allows it, simply apply the upstream manifest
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# if the network does not allow it and kube-flannel.yml cannot be downloaded, copy the kube-flannel.yml content below into a local file and apply it from that path
kubectl apply -f ./kube-flannel.yml
- Contents of kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
# SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
hostNetwork: true
priorityClassName: system-node-critical
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni-plugin
#image: flannelcni/flannel-cni-plugin:v1.0.1 for ppc64le and mips64le (dockerhub limitations may apply)
image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
command:
- cp
args:
- -f
- /flannel
- /opt/cni/bin/flannel
volumeMounts:
- name: cni-plugin
mountPath: /opt/cni/bin
- name: install-cni
#image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
image: rancher/mirrored-flannelcni-flannel:v0.17.0
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
#image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
image: rancher/mirrored-flannelcni-flannel:v0.17.0
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
- name: xtables-lock
mountPath: /run/xtables.lock
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni-plugin
hostPath:
path: /opt/cni/bin
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
- Log of installing the flannel network plugin
[root@vm-k8s-master k8s-init]# kubectl apply -f ./kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds created
- Once the network plugin is deployed, we can re-check the Pod status with kubectl get
[root@vm-k8s-master k8s-init]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d8c4cb4d-6ldld 1/1 Running 0 95m
coredns-6d8c4cb4d-8z59l 1/1 Running 0 95m
etcd-vm-k8s-master 1/1 Running 0 95m
kube-apiserver-vm-k8s-master 1/1 Running 0 95m
kube-controller-manager-vm-k8s-master 1/1 Running 0 95m
kube-flannel-ds-gwccf 1/1 Running 0 38s
kube-proxy-dvrkb 1/1 Running 0 95m
kube-scheduler-vm-k8s-master 1/1 Running 0 95m
As you can see, all of the system Pods are now running, and the flannel plugin we just deployed created a new Pod named kube-flannel-ds-gwccf in kube-system. Generally speaking, such Pods are the per-node control components of the container network plugin.
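A quick way to confirm the effect of the network plugin is to look at the node status again; it should now report Ready instead of NotReady (sketch):
kubectl get nodes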
At this point the Kubernetes Master node is fully deployed. If all you need is a single-node Kubernetes cluster, you can start using it now. By default, however, the Master node does not run user Pods, so one more small step is needed.
3.3.4 Remove the Control-Plane Node Taint
By default the Master node does not allow user Pods to run on it. Kubernetes enforces this through its Taint/Toleration mechanism.
The principle is simple: once a node is given a Taint, i.e. "tainted", Pods will not run on it, because Pods are "fastidious" by default. The exception is a Pod that declares it can tolerate that taint, i.e. declares a Toleration; only then may it run on that node.
- The command for tainting a node is
# this adds a key-value Taint to node1, namely foo=bar:NoSchedule; the NoSchedule effect means the Taint only affects the scheduling of new Pods and does not affect Pods already running on node1, even those without a matching Toleration
kubectl taint nodes node1 foo=bar:NoSchedule
- A Pod declares a Toleration for the taint
apiVersion: v1
kind: Pod
...
spec:
tolerations:
- key: "foo"
operator: "Equal"
value: "bar"
effect: "NoSchedule"
This Toleration means the Pod can "tolerate" any Taint with the key-value pair foo=bar (operator: "Equal").
- Check the Master node's Taints field with kubectl describe
[root@vm-k8s-master k8s-init]# kubectl describe node vm-k8s-master
Name: vm-k8s-master
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=vm-k8s-master
kubernetes.io/os=linux
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"d6:07:13:de:95:c6"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.168.31.254
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 26 Mar 2022 16:48:43 +0800
Taints: node-role.kubernetes.io/master:NoSchedule
As you can see, the Master node carries the taint node-role.kubernetes.io/master:NoSchedule by default; its key is node-role.kubernetes.io/master and no value is provided.
- Remove the default taint from the master node
For convenience, this learning/test environment needs the master node to be schedulable for Pods as well, so we simply remove the default taint from it.
# the trailing hyphen "-" after the key "node-role.kubernetes.io/master" means: remove every Taint whose key is node-role.kubernetes.io/master
kubectl taint nodes --all node-role.kubernetes.io/master-
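To confirm the taint is gone, check the node's Taints field again (sketch); it should now show <none>:
kubectl describe node vm-k8s-master | grep -i taints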
3.4 Add Worker Nodes to the Kubernetes Cluster
A Kubernetes Worker node is almost identical to the Master node: both run a kubelet. The only difference is that during kubeadm init, once the kubelet starts, the Master node additionally runs the kube-apiserver, kube-scheduler and kube-controller-manager system Pods.
At this point the worker nodes already have kubeadm, kubectl, kubelet and docker installed, so all that remains is to run the kubeadm join command generated during kubeadm init.
- Run the kubeadm join command generated when the Master node was deployed
kubeadm join k8s.cluster-endpoint:6443 --token vz7g8m.fhgc8sby6tm6mi25 \
--discovery-token-ca-cert-hash sha256:e368eb6bdd202ff442cef277b97f12fe374d63071ff9dc277add24301609204d
- Log output of a worker node joining the cluster
[root@vm-k8s-worker01 manifests]# kubeadm join k8s.cluster-endpoint:6443 --token vz7g8m.fhgc8sby6tm6mi25 \
> --discovery-token-ca-cert-hash sha256:e368eb6bdd202ff442cef277b97f12fe374d63071ff9dc277add24301609204d
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
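The bootstrap token in the join command expires after 24 hours by default. If you add another worker later and the token is no longer valid, a fresh join command can be generated on the Master node; a sketch:
# prints a new kubeadm join command, including a new token and the CA cert hash
kubeadm token create --print-join-command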
3.5 Deploy and Access the Kubernetes Dashboard
Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to the cluster, troubleshoot them, and manage cluster resources. It gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on). For example, you can scale a Deployment, trigger a rolling update, restart a Pod, or use a wizard to deploy a new application.
Dashboard also shows the state of the resources in the cluster and any errors that have occurred.
3.5.1 Deploy the Dashboard Add-on
# submit the Dashboard resource manifests to the k8s cluster; if the URL is unreachable, copy the manifest content provided below instead
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
- Contents of kubernetes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.5.0
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.7
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}
- kubernetes-dashboard installation log
[root@vm-k8s-master k8s-init]# kubectl apply -f ./kubernetes-dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
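Before moving on, it is worth checking that the Dashboard Pods actually come up in their own namespace (sketch):
kubectl get pods -n kubernetes-dashboard
kubectl get svc -n kubernetes-dashboard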
3.5.2 Create an Admin Account for Logging in to the Dashboard
Dashboard is a web server, and people frequently expose its port on their public cloud machines by accident, creating a security risk. For this reason, since version 1.7 the Dashboard can, after deployment, only be accessed locally through a proxy by default.
- Create a user and bind it to the cluster-admin role
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
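The manifest above still has to be submitted to the cluster. Assuming it is saved locally as admin-user.yaml (the filename is arbitrary), apply it before generating the token:
kubectl apply -f ./admin-user.yaml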
- Generate a Bearer Token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
# the printed output looks like this
eyJhbGciOiJSUzI1NiIsImtpZCI6Il9uU25iZHpROE5WZkZOcDFFeGhGT0JRQjZOal93Zk1qOHNVVlRsQVU2QmMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWRzbDR2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0MzJkNTA5Yy0zNTQ4LTRlYTktOTNmZi03NTM2ZDkxY2YwMTIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.RLl_jz41Snyc4qEHFk9-o_JYKfv22Lv2JquTgXX92_9k9VQOswd7RhF1OsWywVfd5TKq9_tagMDfVRqHo6fVe-jB4blUL6j6iTxtkFno7P9snqboFsEoegSFCM-S9OaF1C5Dk_rcIp2yJzbALkmTC6VjdWxPt3TvjEclXRQl0cITlE1NFmJKpJqbNlG65l4SjnRO5KLinrgYfpcLxlroqlKd4ZFWenKrz16VOicCUHuP_mWlq4yxp_WRrfpCglX92u-sDY3ouJW5cL0eqk-w5XKhREFvm-C2Opoba-mDZVcbaQxEIw_VkeudLliSUs5_v9CKQGKPYK6r6rdESYi6sw
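This secret-based lookup works on the Kubernetes 1.23.x cluster used here. On clusters running v1.24 or later, ServiceAccount token Secrets are no longer created automatically, and a token would instead be requested explicitly; a sketch:
kubectl -n kubernetes-dashboard create token admin-user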
3.5.3 Modify the kubernetes-dashboard Service
To access the Dashboard more conveniently, modify the Service section of kubernetes-dashboard.yaml and change the Service type from ClusterIP to NodePort.
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
nodePort: 30001
selector:
k8s-app: kubernetes-dashboard
- After updating the configuration above, run kubectl apply to update the resource objects
kubectl apply -f ./kubernetes-dashboard.yaml
- Inspect the updated Service resource objects
[root@vm-k8s-master k8s-init]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.1.245.206 <none> 8000/TCP 62m
kubernetes-dashboard NodePort 10.1.50.101 <none> 443:30001/TCP 62m
3.5.4 Access kubernetes-dashboard
# note the https scheme here; choose the token authentication method and paste the token generated earlier
https://k8s.master.com:30001/
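As an alternative to the NodePort exposure configured above, the Dashboard can also be reached locally through the API server proxy, which is the default access path mentioned in section 3.5.2; a sketch:
# starts a local proxy to the API server on 127.0.0.1:8001
kubectl proxy
# then open the following URL in a browser on that machine
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/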
3.6 Deploy the Rook Storage Plugin
Rook is a Ceph-based Kubernetes storage plugin (support for more storage backends is being added over time). Rather than being a thin wrapper around Ceph, Rook adds a large set of enterprise-grade features of its own, such as horizontal scaling, migration, disaster recovery, and monitoring, making it a complete, production-grade container storage plugin.
It can manage file, block and object storage in production environments. Rook is hosted by the Cloud Native Computing Foundation (CNCF) as a graduated project. Rook is implemented in Go, while Ceph itself is implemented in C++.
- For more information, see the Rook official website
- See the Rook GitHub repository
3.6.1 Prerequisites for Deploying the Rook Storage Plugin
The installation steps below use the latest Rook release at the time of writing, v1.8.7. To configure a Ceph storage cluster, at least one of the following local storage options is required:
- Raw devices (no partitions or formatted filesystems)
- Raw partitions (no formatted filesystem)
- PVs available in block mode from a storage class
In addition:
- The cluster needs at least three nodes, each providing at least one OSD (object storage device), so that the storage class can be created
- Kubernetes v1.16 or higher is required
3.6.2 Preparation for Deploying the Rook Storage Plugin
Because the machines in this environment were created with VMware, it is easy to add a brand-new disk to each virtual machine; it will later serve as an OSD (object storage device) in the Rook cluster.
- Add a virtual disk in VMware (see the official VMware documentation)
1. Select the virtual machine, then choose VM > Settings.
2. On the Hardware tab, click Add.
3. In the Add Hardware wizard, select Hard Disk.
4. Select Create a new virtual disk.
5. Select the disk type.
- Inspect the block devices
[root@vm-k8s-worker01 ~]# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 xfs 8fd14e7c-6b98-4845-bd19-2f1eb6b175b0 /boot
└─sda2 LVM2_member ZT7jhW-IRXs-0KBo-FGSk-C85B-hnOA-upyrdP
├─centos_vm--k8s--jayway-root xfs 477d2f79-9e28-49a0-8f5f-01c69014014b /
└─centos_vm--k8s--jayway-swap swap 949d1c1f-abe7-4fd2-9232-40526e1b8637
sdb
sr0 iso9660 CentOS 7 x86_64 2020-11-04-11-36-43-00
Note: look at the FSTYPE column. An empty value means no filesystem has been written to the device yet; for example, the sdb device above is the newly added disk.
3.6.3 Fetch the Rook Deployment Manifests
- Clone the Rook git repository
# the latest version at the time of writing is v1.8.7
git clone --single-branch --branch v1.8.7 https://github.com/rook/rook.git
- Review the Kubernetes resource manifests for deploying the Rook cluster
Enter the rook/deploy/examples directory. The Rook project ships many Kubernetes resource manifests, but for now we only need to pay attention to a few of them.
Resource file | Description |
---|---|
crds.yaml | CRD objects that must be created before the Rook cluster, so that k8s recognizes Rook's custom resources |
common.yaml | Common resources required to start the operator and the Ceph cluster; must be created before operator.yaml and cluster.yaml |
operator.yaml | The Rook operator controller |
cluster.yaml | The rook-ceph cluster configuration |
storageclass.yaml | Creates a storage class that gives the k8s cluster the ability to provision PVs dynamically |
mysql.yaml | A MySQL test service provided by Rook, persisted via PVs created by the storage class |
wordpress.yaml | A WordPress test service provided by Rook, persisted via PVs created by the storage class |
-rw-r--r--. 1 root root 696 3月 27 10:27 bucket-notification-endpoint.yaml
-rw-r--r--. 1 root root 1258 3月 27 10:27 bucket-notification.yaml
-rw-r--r--. 1 root root 842 3月 27 10:27 bucket-topic.yaml
-rw-r--r--. 1 root root 456 3月 27 10:27 ceph-client.yaml
-rw-r--r--. 1 root root 1088 3月 27 10:27 cluster-external-management.yaml
-rw-r--r--. 1 root root 1275 3月 27 10:27 cluster-external.yaml
-rw-r--r--. 1 root root 7350 3月 27 10:27 cluster-on-local-pvc.yaml
-rw-r--r--. 1 root root 8539 3月 27 10:27 cluster-on-pvc.yaml
-rw-r--r--. 1 root root 5376 3月 27 10:27 cluster-stretched-aws.yaml
-rw-r--r--. 1 root root 3312 3月 27 10:27 cluster-stretched.yaml
-rw-r--r--. 1 root root 1798 3月 27 10:27 cluster-test.yaml
-rw-r--r--. 1 root root 15343 3月 27 10:27 cluster.yaml
-rw-r--r--. 1 root root 2418 3月 27 10:27 common-external.yaml
-rw-r--r--. 1 root root 3908 3月 27 10:27 common-second-cluster.yaml
-rw-r--r--. 1 root root 38180 3月 27 10:27 common.yaml
-rw-r--r--. 1 root root 739053 3月 27 10:27 crds.yaml
-rw-r--r--. 1 root root 75912 3月 27 10:27 create-external-cluster-resources.py
-rw-r--r--. 1 root root 3696 3月 27 10:27 create-external-cluster-resources.sh
drwxr-xr-x. 4 root root 31 3月 27 10:27 csi
-rw-r--r--. 1 root root 430 3月 27 10:27 dashboard-external-https.yaml
-rw-r--r--. 1 root root 429 3月 27 10:27 dashboard-external-http.yaml
-rw-r--r--. 1 root root 988 3月 27 10:27 dashboard-ingress-https.yaml
-rw-r--r--. 1 root root 432 3月 27 10:27 dashboard-loadbalancer.yaml
-rw-r--r--. 1 root root 1904 3月 27 10:27 direct-mount.yaml
-rw-r--r--. 1 root root 3729 3月 27 10:27 filesystem-ec.yaml
-rw-r--r--. 1 root root 1253 3月 27 10:27 filesystem-mirror.yaml
-rw-r--r--. 1 root root 825 3月 27 10:27 filesystem-test.yaml
-rw-r--r--. 1 root root 5378 3月 27 10:27 filesystem.yaml
-rw-r--r--. 1 root root 416 3月 27 10:27 images.txt
-rw-r--r--. 1 root root 5142 3月 27 10:27 import-external-cluster.sh
drwxr-xr-x. 2 root root 4096 3月 27 10:27 monitoring
-rw-r--r--. 1 root root 1269 3月 27 10:27 mysql.yaml
-rw-r--r--. 1 root root 747 3月 27 10:27 nfs-test.yaml
-rw-r--r--. 1 root root 2635 3月 27 10:27 nfs.yaml
-rw-r--r--. 1 root root 604 3月 27 10:27 object-bucket-claim-delete.yaml
-rw-r--r--. 1 root root 502 3月 27 10:27 object-bucket-claim-notification.yaml
-rw-r--r--. 1 root root 604 3月 27 10:27 object-bucket-claim-retain.yaml
-rw-r--r--. 1 root root 3793 3月 27 10:27 object-ec.yaml
-rw-r--r--. 1 root root 836 3月 27 10:27 object-external.yaml
-rw-r--r--. 1 root root 1991 3月 27 10:27 object-multisite-pull-realm-test.yaml
-rw-r--r--. 1 root root 2064 3月 27 10:27 object-multisite-pull-realm.yaml
-rw-r--r--. 1 root root 1339 3月 27 10:27 object-multisite-test.yaml
-rw-r--r--. 1 root root 1365 3月 27 10:27 object-multisite.yaml
-rw-r--r--. 1 root root 6090 3月 27 10:27 object-openshift.yaml
-rw-r--r--. 1 root root 707 3月 27 10:27 object-test.yaml
-rw-r--r--. 1 root root 795 3月 27 10:27 object-user.yaml
-rw-r--r--. 1 root root 5835 3月 27 10:27 object.yaml
-rw-r--r--. 1 root root 23608 3月 27 10:27 operator-openshift.yaml
-rw-r--r--. 1 root root 21444 3月 27 10:27 operator.yaml
-rw-r--r--. 1 root root 944 3月 27 10:27 osd-env-override.yaml
-rw-r--r--. 1 root root 3150 3月 27 10:27 osd-purge.yaml
-rw-r--r--. 1 root root 797 3月 27 10:27 pool-device-health-metrics.yaml
-rw-r--r--. 1 root root 1127 3月 27 10:27 pool-ec.yaml
-rw-r--r--. 1 root root 507 3月 27 10:27 pool-mirrored.yaml
-rw-r--r--. 1 root root 539 3月 27 10:27 pool-test.yaml
-rw-r--r--. 1 root root 3461 3月 27 10:27 pool.yaml
-rw-r--r--. 1 root root 1515 3月 27 10:27 rbdmirror.yaml
-rw-r--r--. 1 root root 130 3月 27 10:27 README.md
-rw-r--r--. 1 root root 546 3月 27 10:27 rgw-external.yaml
-rw-r--r--. 1 root root 754 3月 27 10:27 storageclass-bucket-delete.yaml
-rw-r--r--. 1 root root 752 3月 27 10:27 storageclass-bucket-retain.yaml
-rw-r--r--. 1 root root 281 3月 27 10:27 subvolumegroup.yaml
-rw-r--r--. 1 root root 1796 3月 27 10:27 toolbox-job.yaml
-rw-r--r--. 1 root root 1672 3月 27 10:27 toolbox.yaml
-rw-r--r--. 1 root root 460 3月 27 10:27 volume-replication-class.yaml
-rw-r--r--. 1 root root 352 3月 27 10:27 volume-replication.yaml
-rw-r--r--. 1 root root 1369 3月 27 10:27 wordpress.yaml
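Taken together, these manifests are applied in a fixed order, which the following subsections walk through one by one; a condensed sketch of the sequence (run from rook/deploy/examples, mirroring the comments in operator.yaml):
kubectl apply -f crds.yaml -f common.yaml -f operator.yaml
# wait until the operator Pod is Running before creating the Ceph cluster
kubectl -n rook-ceph get pods -w
kubectl apply -f cluster.yaml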
3.6.4 Install the Rook Storage Plugin
The images that the Rook resources depend on are listed below; several of them are hosted on Google's registry (k8s.gcr.io).
[root@vm-k8s-master examples]# pwd
/root/k8s-init/rook-git/deploy/examples
[root@vm-k8s-master examples]# cat images.txt
k8s.gcr.io/sig-storage/csi-attacher:v3.4.0
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0
k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1
quay.io/ceph/ceph:v16.2.7
quay.io/cephcsi/cephcsi:v3.5.1
quay.io/csiaddons/k8s-sidecar:v0.2.1
quay.io/csiaddons/volumereplication-operator:v0.3.0
rook/ceph:v1.8.7
3.6.4.1 Create the CRD Resources
Before creating the Rook cluster, the CRD objects must be created first so that k8s recognizes Rook's custom resources. This manifest does not depend on any images.
# current directory: rook/deploy/examples
kubectl apply -f crds.yaml
- Execution log
[root@vm-k8s-master examples]# kubectl apply -f crds.yaml
customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephbucketnotifications.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephbuckettopics.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephclients.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystemmirrors.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystemsubvolumegroups.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectrealms.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectzonegroups.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectzones.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephrbdmirrors.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/objectbucketclaims.objectbucket.io created
customresourcedefinition.apiextensions.k8s.io/objectbuckets.objectbucket.io created
3.6.4.2 Create the Common Resources
These are the common resources required to start the operator and the Ceph cluster, and they must be created before operator.yaml and cluster.yaml. This manifest does not depend on any images either.
# current directory: rook/deploy/examples
kubectl apply -f common.yaml
- Execution log
[root@vm-k8s-master examples]# kubectl apply -f common.yaml
namespace/rook-ceph created
clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner created
clusterrole.rbac.authorization.k8s.io/psp:rook created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-object-bucket created
clusterrole.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-object-bucket created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-provisioner-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-provisioner-sa-psp created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/00-rook-privileged created
role.rbac.authorization.k8s.io/cephfs-external-provisioner-cfg created
role.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created
role.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
role.rbac.authorization.k8s.io/rook-ceph-mgr created
role.rbac.authorization.k8s.io/rook-ceph-osd created
role.rbac.authorization.k8s.io/rook-ceph-purge-osd created
role.rbac.authorization.k8s.io/rook-ceph-system created
rolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin-role-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-default-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-purge-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-system created
serviceaccount/rook-ceph-cmd-reporter created
serviceaccount/rook-ceph-mgr created
serviceaccount/rook-ceph-osd created
serviceaccount/rook-ceph-purge-osd created
serviceaccount/rook-ceph-system created
serviceaccount/rook-csi-cephfs-plugin-sa created
serviceaccount/rook-csi-cephfs-provisioner-sa created
serviceaccount/rook-csi-rbd-plugin-sa created
serviceaccount/rook-csi-rbd-provisioner-sa created
3.6.4.3 Create the Rook Operator Controller
This depends on the rook/ceph:v1.8.7 image, which can be pulled normally from inside China, but by default it also depends on the CSI images, which require access to Google's registry.
# current directory: rook/deploy/examples
kubectl apply -f operator.yaml
- If you cannot pull the Google-hosted images, change the image repository addresses of the dependencies accordingly
ROOK_CSI_ATTACHER_IMAGE: "willdockerhub/csi-attacher:v3.4.0"
ROOK_CSI_REGISTRAR_IMAGE: "willdockerhub/csi-node-driver-registrar:v2.5.0"
ROOK_CSI_PROVISIONER_IMAGE: "willdockerhub/csi-provisioner:v3.1.0"
ROOK_CSI_RESIZER_IMAGE: "willdockerhub/csi-resizer:v1.4.0"
ROOK_CSI_SNAPSHOTTER_IMAGE: "willdockerhub/csi-snapshotter:v5.0.1"
ROOK_CSI_CEPH_IMAGE: "willdockerhub/cephcsi:v3.5.1"
ROOK_CSIADDONS_IMAGE: "willdockerhub/k8s-sidecar:v0.2.1"
CSI_VOLUME_REPLICATION_IMAGE: "willdockerhub/volumereplication-operator:v0.3.0"
- Contents of operator.yaml after the image changes
#################################################################################################################
# The deployment for the rook operator
# Contains the common settings for most Kubernetes deployments.
# For example, to create the rook-ceph cluster:
# kubectl create -f crds.yaml -f common.yaml -f operator.yaml
# kubectl create -f cluster.yaml
#
# Also see other operator sample files for variations of operator.yaml:
# - operator-openshift.yaml: Common settings for running in OpenShift
###############################################################################################################
# Rook Ceph Operator Config ConfigMap
# Use this ConfigMap to override Rook-Ceph Operator configurations.
# NOTE! Precedence will be given to this config if the same Env Var config also exists in the
# Operator Deployment.
# To move a configuration(s) from the Operator Deployment to this ConfigMap, add the config
# here. It is recommended to then remove it from the Deployment to eliminate any future confusion.
kind: ConfigMap
apiVersion: v1
metadata:
name: rook-ceph-operator-config
# should be in the namespace of the operator
namespace: rook-ceph # namespace:operator
data:
# The logging level for the operator: ERROR | WARNING | INFO | DEBUG
ROOK_LOG_LEVEL: "INFO"
# Enable the CSI driver.
# To run the non-default version of the CSI driver, see the override-able image properties in operator.yaml
ROOK_CSI_ENABLE_CEPHFS: "true"
# Enable the default version of the CSI RBD driver. To start another version of the CSI driver, see image properties below.
ROOK_CSI_ENABLE_RBD: "true"
ROOK_CSI_ENABLE_GRPC_METRICS: "false"
# Set to true to enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary
# in some network configurations where the SDN does not provide access to an external cluster or
# there is significant drop in read/write performance.
# CSI_ENABLE_HOST_NETWORK: "true"
# Set logging level for csi containers.
# Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity.
# CSI_LOG_LEVEL: "0"
# Set replicas for csi provisioner deployment.
CSI_PROVISIONER_REPLICAS: "2"
# OMAP generator will generate the omap mapping between the PV name and the RBD image.
# CSI_ENABLE_OMAP_GENERATOR need to be enabled when we are using rbd mirroring feature.
# By default OMAP generator sidecar is deployed with CSI provisioner pod, to disable
# it set it to false.
# CSI_ENABLE_OMAP_GENERATOR: "false"
# set to false to disable deployment of snapshotter container in CephFS provisioner pod.
CSI_ENABLE_CEPHFS_SNAPSHOTTER: "true"
# set to false to disable deployment of snapshotter container in RBD provisioner pod.
CSI_ENABLE_RBD_SNAPSHOTTER: "true"
# Enable cephfs kernel driver instead of ceph-fuse.
# If you disable the kernel client, your application may be disrupted during upgrade.
# See the upgrade guide: https://rook.io/docs/rook/latest/ceph-upgrade.html
# NOTE! cephfs quota is not supported in kernel version < 4.17
CSI_FORCE_CEPHFS_KERNEL_CLIENT: "true"
# (Optional) policy for modifying a volume's ownership or permissions when the RBD PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
CSI_RBD_FSGROUPPOLICY: "ReadWriteOnceWithFSType"
# (Optional) policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
CSI_CEPHFS_FSGROUPPOLICY: "ReadWriteOnceWithFSType"
# (Optional) Allow starting unsupported ceph-csi image
ROOK_CSI_ALLOW_UNSUPPORTED_VERSION: "false"
# (Optional) control the host mount of /etc/selinux for csi plugin pods.
CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT: "false"
# The default version of CSI supported by Rook will be started. To change the version
# of the CSI driver to something other than what is officially supported, change
# these images to the desired release of the CSI driver.
# ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.5.1"
# ROOK_CSI_REGISTRAR_IMAGE: "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0"
# ROOK_CSI_RESIZER_IMAGE: "k8s.gcr.io/sig-storage/csi-resizer:v1.4.0"
# ROOK_CSI_PROVISIONER_IMAGE: "k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0"
# ROOK_CSI_SNAPSHOTTER_IMAGE: "k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1"
# ROOK_CSI_ATTACHER_IMAGE: "k8s.gcr.io/sig-storage/csi-attacher:v3.4.0"
ROOK_CSI_ATTACHER_IMAGE: "willdockerhub/csi-attacher:v3.4.0"
ROOK_CSI_REGISTRAR_IMAGE: "willdockerhub/csi-node-driver-registrar:v2.5.0"
ROOK_CSI_PROVISIONER_IMAGE: "willdockerhub/csi-provisioner:v3.1.0"
ROOK_CSI_RESIZER_IMAGE: "willdockerhub/csi-resizer:v1.4.0"
ROOK_CSI_SNAPSHOTTER_IMAGE: "willdockerhub/csi-snapshotter:v5.0.1"
ROOK_CSI_CEPH_IMAGE: "willdockerhub/cephcsi:v3.5.1"
ROOK_CSIADDONS_IMAGE: "willdockerhub/k8s-sidecar:v0.2.1"
CSI_VOLUME_REPLICATION_IMAGE: "willdockerhub/volumereplication-operator:v0.3.0"
# (Optional) set user created priorityclassName for csi plugin pods.
# CSI_PLUGIN_PRIORITY_CLASSNAME: "system-node-critical"
# (Optional) set user created priorityclassName for csi provisioner pods.
# CSI_PROVISIONER_PRIORITY_CLASSNAME: "system-cluster-critical"
# CSI CephFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate.
# Default value is RollingUpdate.
# CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY: "OnDelete"
# CSI RBD plugin daemonset update strategy, supported values are OnDelete and RollingUpdate.
# Default value is RollingUpdate.
# CSI_RBD_PLUGIN_UPDATE_STRATEGY: "OnDelete"
# kubelet directory path, if kubelet configured to use other than /var/lib/kubelet path.
# ROOK_CSI_KUBELET_DIR_PATH: "/var/lib/kubelet"
# Labels to add to the CSI CephFS Deployments and DaemonSets Pods.
# ROOK_CSI_CEPHFS_POD_LABELS: "key1=value1,key2=value2"
# Labels to add to the CSI RBD Deployments and DaemonSets Pods.
# ROOK_CSI_RBD_POD_LABELS: "key1=value1,key2=value2"
# (Optional) CephCSI provisioner NodeAffinity(applied to both CephFS and RBD provisioner).
# CSI_PROVISIONER_NODE_AFFINITY: "role=storage-node; storage=rook, ceph"
# (Optional) CephCSI provisioner tolerations list(applied to both CephFS and RBD provisioner).
# Put here list of taints you want to tolerate in YAML format.
# CSI provisioner would be best to start on the same nodes as other ceph daemons.
# CSI_PROVISIONER_TOLERATIONS: |
# - effect: NoSchedule
# key: node-role.kubernetes.io/controlplane
# operator: Exists
# - effect: NoExecute
# key: node-role.kubernetes.io/etcd
# operator: Exists
# (Optional) CephCSI plugin NodeAffinity(applied to both CephFS and RBD plugin).
# CSI_PLUGIN_NODE_AFFINITY: "role=storage-node; storage=rook, ceph"
# (Optional) CephCSI plugin tolerations list(applied to both CephFS and RBD plugin).
# Put here list of taints you want to tolerate in YAML format.
# CSI plugins need to be started on all the nodes where the clients need to mount the storage.
# CSI_PLUGIN_TOLERATIONS: |
# - effect: NoSchedule
# key: node-role.kubernetes.io/controlplane
# operator: Exists
# - effect: NoExecute
# key: node-role.kubernetes.io/etcd
# operator: Exists
# (Optional) CephCSI RBD provisioner NodeAffinity(if specified, overrides CSI_PROVISIONER_NODE_AFFINITY).
# CSI_RBD_PROVISIONER_NODE_AFFINITY: "role=rbd-node"
# (Optional) CephCSI RBD provisioner tolerations list(if specified, overrides CSI_PROVISIONER_TOLERATIONS).
# Put here list of taints you want to tolerate in YAML format.
# CSI provisioner would be best to start on the same nodes as other ceph daemons.
# CSI_RBD_PROVISIONER_TOLERATIONS: |
# - key: node.rook.io/rbd
# operator: Exists
# (Optional) CephCSI RBD plugin NodeAffinity(if specified, overrides CSI_PLUGIN_NODE_AFFINITY).
# CSI_RBD_PLUGIN_NODE_AFFINITY: "role=rbd-node"
# (Optional) CephCSI RBD plugin tolerations list(if specified, overrides CSI_PLUGIN_TOLERATIONS).
# Put here list of taints you want to tolerate in YAML format.
# CSI plugins need to be started on all the nodes where the clients need to mount the storage.
# CSI_RBD_PLUGIN_TOLERATIONS: |
# - key: node.rook.io/rbd
# operator: Exists
# (Optional) CephCSI CephFS provisioner NodeAffinity(if specified, overrides CSI_PROVISIONER_NODE_AFFINITY).
# CSI_CEPHFS_PROVISIONER_NODE_AFFINITY: "role=cephfs-node"
# (Optional) CephCSI CephFS provisioner tolerations list(if specified, overrides CSI_PROVISIONER_TOLERATIONS).
# Put here list of taints you want to tolerate in YAML format.
# CSI provisioner would be best to start on the same nodes as other ceph daemons.
# CSI_CEPHFS_PROVISIONER_TOLERATIONS: |
# - key: node.rook.io/cephfs
# operator: Exists
# (Optional) CephCSI CephFS plugin NodeAffinity(if specified, overrides CSI_PLUGIN_NODE_AFFINITY).
# CSI_CEPHFS_PLUGIN_NODE_AFFINITY: "role=cephfs-node"
# (Optional) CephCSI CephFS plugin tolerations list(if specified, overrides CSI_PLUGIN_TOLERATIONS).
# Put here list of taints you want to tolerate in YAML format.
# CSI plugins need to be started on all the nodes where the clients need to mount the storage.
# CSI_CEPHFS_PLUGIN_TOLERATIONS: |
# - key: node.rook.io/cephfs
# operator: Exists
# (Optional) CEPH CSI RBD provisioner resource requirement list, Put here list of resource
# requests and limits you want to apply for provisioner pod
# CSI_RBD_PROVISIONER_RESOURCE: |
# - name : csi-provisioner
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# cpu: 200m
# - name : csi-resizer
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# cpu: 200m
# - name : csi-attacher
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# cpu: 200m
# - name : csi-snapshotter
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# cpu: 200m
# - name : csi-rbdplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# cpu: 500m
# - name : liveness-prometheus
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# cpu: 100m
# (Optional) CEPH CSI RBD plugin resource requirement list, Put here list of resource
# requests and limits you want to apply for plugin pod
# CSI_RBD_PLUGIN_RESOURCE: |
# - name : driver-registrar
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# cpu: 100m
# - name : csi-rbdplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# cpu: 500m
# - name : liveness-prometheus
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# cpu: 100m
# (Optional) CEPH CSI CephFS provisioner resource requirement list, Put here list of resource
# requests and limits you want to apply for provisioner pod
# CSI_CEPHFS_PROVISIONER_RESOURCE: |
# - name : csi-provisioner
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# cpu: 200m
# - name : csi-resizer
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# cpu: 200m
# - name : csi-attacher
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# cpu: 200m
# - name : csi-cephfsplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# cpu: 500m
# - name : liveness-prometheus
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# cpu: 100m
# (Optional) CEPH CSI CephFS plugin resource requirement list, Put here list of resource
# requests and limits you want to apply for plugin pod
# CSI_CEPHFS_PLUGIN_RESOURCE: |
# - name : driver-registrar
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# cpu: 100m
# - name : csi-cephfsplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# cpu: 500m
# - name : liveness-prometheus
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# cpu: 100m
# Configure CSI CSI Ceph FS grpc and liveness metrics port
# CSI_CEPHFS_GRPC_METRICS_PORT: "9091"
# CSI_CEPHFS_LIVENESS_METRICS_PORT: "9081"
# Configure CSI RBD grpc and liveness metrics port
# CSI_RBD_GRPC_METRICS_PORT: "9090"
# CSI_RBD_LIVENESS_METRICS_PORT: "9080"
# CSIADDONS_PORT: "9070"
# Whether the OBC provisioner should watch on the operator namespace or not, if not the namespace of the cluster will be used
ROOK_OBC_WATCH_OPERATOR_NAMESPACE: "true"
# Whether to start the discovery daemon to watch for raw storage devices on nodes in the cluster.
# This daemon does not need to run if you are only going to create your OSDs based on StorageClassDeviceSets with PVCs.
ROOK_ENABLE_DISCOVERY_DAEMON: "false"
# The timeout value (in seconds) of Ceph commands. It should be >= 1. If this variable is not set or is an invalid value, it's default to 15.
ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS: "15"
# Enable the volume replication controller.
# Before enabling, ensure the Volume Replication CRDs are created.
# See https://rook.io/docs/rook/latest/ceph-csi-drivers.html#rbd-mirroring
CSI_ENABLE_VOLUME_REPLICATION: "false"
# CSI_VOLUME_REPLICATION_IMAGE: "quay.io/csiaddons/volumereplication-operator:v0.3.0"
# Enable the csi addons sidecar.
CSI_ENABLE_CSIADDONS: "false"
# ROOK_CSIADDONS_IMAGE: "quay.io/csiaddons/k8s-sidecar:v0.2.1"
---
# OLM: BEGIN OPERATOR DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
name: rook-ceph-operator
namespace: rook-ceph # namespace:operator
labels:
operator: rook
storage-backend: ceph
app.kubernetes.io/name: rook-ceph
app.kubernetes.io/instance: rook-ceph
app.kubernetes.io/component: rook-ceph-operator
app.kubernetes.io/part-of: rook-ceph-operator
spec:
selector:
matchLabels:
app: rook-ceph-operator
replicas: 1
template:
metadata:
labels:
app: rook-ceph-operator
spec:
serviceAccountName: rook-ceph-system
containers:
- name: rook-ceph-operator
image: rook/ceph:v1.8.7
args: ["ceph", "operator"]
securityContext:
runAsNonRoot: true
runAsUser: 2016
runAsGroup: 2016
volumeMounts:
- mountPath: /var/lib/rook
name: rook-config
- mountPath: /etc/ceph
name: default-config-dir
- mountPath: /etc/webhook
name: webhook-cert
ports:
- containerPort: 9443
name: https-webhook
protocol: TCP
env:
# If the operator should only watch for cluster CRDs in the same namespace, set this to "true".
# If this is not set to true, the operator will watch for cluster CRDs in all namespaces.
- name: ROOK_CURRENT_NAMESPACE_ONLY
value: "false"
# Rook Discover toleration. Will tolerate all taints with all keys.
# Choose between NoSchedule, PreferNoSchedule and NoExecute:
# - name: DISCOVER_TOLERATION
# value: "NoSchedule"
# (Optional) Rook Discover toleration key. Set this to the key of the taint you want to tolerate
# - name: DISCOVER_TOLERATION_KEY
# value: "<KeyOfTheTaintToTolerate>"
# (Optional) Rook Discover tolerations list. Put here list of taints you want to tolerate in YAML format.
# - name: DISCOVER_TOLERATIONS
# value: |
# - effect: NoSchedule
# key: node-role.kubernetes.io/controlplane
# operator: Exists
# - effect: NoExecute
# key: node-role.kubernetes.io/etcd
# operator: Exists
# (Optional) Rook Discover priority class name to set on the pod(s)
# - name: DISCOVER_PRIORITY_CLASS_NAME
# value: "<PriorityClassName>"
# (Optional) Discover Agent NodeAffinity.
# - name: DISCOVER_AGENT_NODE_AFFINITY
# value: "role=storage-node; storage=rook, ceph"
# (Optional) Discover Agent Pod Labels.
# - name: DISCOVER_AGENT_POD_LABELS
# value: "key1=value1,key2=value2"
# The duration between discovering devices in the rook-discover daemonset.
- name: ROOK_DISCOVER_DEVICES_INTERVAL
value: "60m"
# Whether to start pods as privileged that mount a host path, which includes the Ceph mon and osd pods.
# Set this to true if SELinux is enabled (e.g. OpenShift) to workaround the anyuid issues.
# For more details see https://github.com/rook/rook/issues/1314#issuecomment-355799641
- name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
value: "false"
# In some situations SELinux relabelling breaks (times out) on large filesystems, and doesn't work with cephfs ReadWriteMany volumes (last relabel wins).
# Disable it here if you have similar issues.
# For more details see https://github.com/rook/rook/issues/2417
- name: ROOK_ENABLE_SELINUX_RELABELING
value: "true"
# In large volumes it will take some time to chown all the files. Disable it here if you have performance issues.
# For more details see https://github.com/rook/rook/issues/2254
- name: ROOK_ENABLE_FSGROUP
value: "true"
# Disable automatic orchestration when new devices are discovered
- name: ROOK_DISABLE_DEVICE_HOTPLUG
value: "false"
# Provide customised regex as the values using comma. For eg. regex for rbd based volume, value will be like "(?i)rbd[0-9]+".
# In case of more than one regex, use comma to separate between them.
# Default regex will be "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"
# Add regex expression after putting a comma to blacklist a disk
# If value is empty, the default regex will be used.
- name: DISCOVER_DAEMON_UDEV_BLACKLIST
value: "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"
# Time to wait until the node controller will move Rook pods to other
# nodes after detecting an unreachable node.
# Pods affected by this setting are:
# mgr, rbd, mds, rgw, nfs, PVC based mons and osds, and ceph toolbox
# The value used in this variable replaces the default value of 300 secs
# added automatically by k8s as Toleration for
# <node.kubernetes.io/unreachable>
# The total amount of time to reschedule Rook pods in healthy nodes
# before detecting a <not ready node> condition will be the sum of:
# --> node-monitor-grace-period: 40 seconds (k8s kube-controller-manager flag)
# --> ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS: 5 seconds
- name: ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS
value: "5"
# The name of the node to pass with the downward API
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# The pod name to pass with the downward API
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# The pod namespace to pass with the downward API
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# Recommended resource requests and limits, if desired
#resources:
# limits:
# cpu: 500m
# memory: 256Mi
# requests:
# cpu: 100m
# memory: 128Mi
# Uncomment it to run lib bucket provisioner in multithreaded mode
#- name: LIB_BUCKET_PROVISIONER_THREADS
# value: "5"
# Uncomment it to run rook operator on the host network
#hostNetwork: true
volumes:
- name: rook-config
emptyDir: {}
- name: default-config-dir
emptyDir: {}
- name: webhook-cert
emptyDir: {}
# OLM: END OPERATOR DEPLOYMENT
- Execution log
[root@vm-k8s-master examples]# kubectl apply -f operator.yaml
configmap/rook-ceph-operator-config created
deployment.apps/rook-ceph-operator created
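Before moving on, it helps to confirm that the operator Pod itself is up; the label selector below matches the app: rook-ceph-operator label defined in the Deployment above.
#The operator pod should reach Running before the cluster is created
kubectl -n rook-ceph get pods -l app=rook-ceph-operator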
3.6.4.4、Create the rook-ceph cluster
#Current directory: rook/deploy/examples
kubectl apply -f cluster.yaml
- Check pod status
#Check pod status
[root@vm-k8s-master examples]# kubectl get pods -n rook-ceph
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-2qr5s 2/3 ImagePullBackOff 0 5m6s
csi-cephfsplugin-mwzc6 2/3 ErrImagePull 0 5m6s
csi-cephfsplugin-provisioner-5dc9cbcc87-gpbbg 2/6 ErrImagePull 0 5m6s
csi-cephfsplugin-provisioner-5dc9cbcc87-zjflm 0/6 ContainerCreating 0 5m6s
csi-cephfsplugin-qlqb5 0/3 ErrImagePull 0 5m6s
csi-rbdplugin-4hwfg 2/3 ErrImagePull 0 5m6s
csi-rbdplugin-8hzz4 2/3 ImagePullBackOff 0 5m6s
csi-rbdplugin-f6d6z 2/3 ImagePullBackOff 0 5m6s
csi-rbdplugin-provisioner-58f584754c-b57xb 2/6 ErrImagePull 0 5m6s
csi-rbdplugin-provisioner-58f584754c-t87ts 0/6 ContainerCreating 0 5m6s
rook-ceph-mon-a-7dd7f95f87-mzrbt 1/1 Running 0 5m52s
rook-ceph-mon-b-b7645dd5b-ls7nm 1/1 Running 0 4m56s
rook-ceph-mon-c-5dc96ff9c-zgq6x 0/1 Init:0/2 0 2m35s
rook-ceph-operator-846695c777-jzg4z 1/1 Running 0 15m
#Inspect the events of one of the pods stuck on image pulls
[root@vm-k8s-master examples]# kubectl describe pod -n rook-ceph csi-cephfsplugin-2qr5s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m45s default-scheduler Successfully assigned rook-ceph/csi-cephfsplugin-2qr5s to vm-k8s-worker02
Normal Pulled 9m14s kubelet Container image "quay.io/cephcsi/cephcsi:v3.5.1" already present on machine
Normal Created 9m14s kubelet Created container liveness-prometheus
Normal Started 9m14s kubelet Started container liveness-prometheus
Normal Pulled 9m14s kubelet Container image "quay.io/cephcsi/cephcsi:v3.5.1" already present on machine
Normal Created 9m14s kubelet Created container csi-cephfsplugin
Normal Started 9m14s kubelet Started container csi-cephfsplugin
Normal Pulling 7m33s (x3 over 9m44s) kubelet Pulling image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0"
Warning Failed 6m44s (x3 over 9m14s) kubelet Failed to pull image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0": rpc error: code = Unknown desc = Error response from daemon: Get "https://k8s.gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 6m44s (x3 over 9m14s) kubelet Error: ErrImagePull
Warning Failed 6m17s (x5 over 9m14s) kubelet Error: ImagePullBackOff
Normal BackOff 4m33s (x8 over 9m14s) kubelet Back-off pulling image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0"
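The events show the node is still trying to pull from k8s.gcr.io. One workaround, sketched below under the assumption that the willdockerhub mirror carries the same tag, is to pull the mirror image on each affected node, retag it to the name kubelet expects so it is found "already present on machine", and then delete the stuck Pod so its DaemonSet recreates it.
#Run on every node that reports ErrImagePull / ImagePullBackOff
docker pull willdockerhub/csi-node-driver-registrar:v2.5.0
docker tag willdockerhub/csi-node-driver-registrar:v2.5.0 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0
#Then delete the stuck pod so it is recreated with the locally cached image
kubectl -n rook-ceph delete pod csi-cephfsplugin-2qr5s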
3.6.4.5、Create the StorageClass
Create a StorageClass to give the k8s cluster the ability to create PVs dynamically.
#Current directory: rook/deploy/examples/csi/rbd
kubectl apply -f storageclass.yaml
[root@vm-k8s-master rbd]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
rook-ceph-block rook-ceph.rbd.csi.ceph.com Delete Immediate true 65s
- Contents of storageclass.yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
name: replicapool
namespace: rook-ceph # namespace:cluster
spec:
failureDomain: host
replicated:
size: 3
# Disallow setting pool with replica 1, this could lead to data loss without recovery.
# Make sure you're *ABSOLUTELY CERTAIN* that is what you want
requireSafeReplicaSize: true
# gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
# for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
#targetSizeRatio: .5
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: rook-ceph-block
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
# clusterID is the namespace where the rook cluster is running
# If you change this namespace, also change the namespace below where the secret namespaces are defined
clusterID: rook-ceph # namespace:cluster
# If you want to use erasure coded pool with RBD, you need to create
# two pools. one erasure coded and one replicated.
# You need to specify the replicated pool here in the `pool` parameter, it is
# used for the metadata of the images.
# The erasure coded pool must be set as the `dataPool` parameter below.
#dataPool: ec-data-pool
pool: replicapool
# (optional) mapOptions is a comma-separated list of map options.
# For krbd options refer
# https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
# For nbd options refer
# https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
# mapOptions: lock_on_read,queue_depth=1024
# (optional) unmapOptions is a comma-separated list of unmap options.
# For krbd options refer
# https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
# For nbd options refer
# https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
# unmapOptions: force
# RBD image format. Defaults to "2".
imageFormat: "2"
# RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
imageFeatures: layering
# The secrets contain Ceph admin credentials. These are generated automatically by the operator
# in the same namespace as the cluster.
csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
# Specify the filesystem type of the volume. If not specified, csi-provisioner
# will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
# in hyperconverged settings where the volume is mounted on the same node as the osds.
csi.storage.k8s.io/fstype: ext4
# uncomment the following to use rbd-nbd as mounter on supported nodes
# **IMPORTANT**: CephCSI v3.4.0 onwards a volume healer functionality is added to reattach
# the PVC to application pod if nodeplugin pod restart.
# Its still in Alpha support. Therefore, this option is not recommended for production use.
#mounter: rbd-nbd
allowVolumeExpansion: true
reclaimPolicy: Delete
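As an optional smoke test of dynamic provisioning before deploying the sample applications, a throwaway PVC can be created against the new StorageClass; the name test-rbd-pvc below is only for illustration.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
#The PVC should turn Bound shortly; remove it after checking
kubectl get pvc test-rbd-pvc
kubectl delete pvc test-rbd-pvc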
3.6.4.6、Deploy the MySQL test service provided by Rook
The MySQL test service provided by Rook relies on a PV created through the StorageClass for persistence.
#Current directory: rook/deploy/examples
kubectl apply -f mysql.yaml
[root@vm-k8s-master examples]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pv-claim Bound pvc-effe80d6-a1ae-42a7-b112-d11e50c06fb8 20Gi RWO rook-ceph-block 4m27s
- Contents of mysql.yaml
apiVersion: v1
kind: Service
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
ports:
- port: 3306
selector:
app: wordpress
tier: mysql
clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: wordpress
spec:
storageClassName: rook-ceph-block
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress-mysql
labels:
app: wordpress
tier: mysql
spec:
selector:
matchLabels:
app: wordpress
tier: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: changeme
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
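To verify that the database is reachable inside the cluster, a one-off client Pod can connect through the headless Service; the wordpress-mysql service name and the changeme password come from the manifest above, while the mysql-client Pod name is arbitrary.
#Start a temporary mysql client pod, run a query, and let kubectl clean it up
kubectl run mysql-client --image=mysql:5.6 -it --rm --restart=Never -- \
  mysql -h wordpress-mysql -uroot -pchangeme -e "select version();"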
3.6.4.7、Create the toolbox
By default the Ceph cluster starts with Ceph authentication enabled, so logging in to the Pod of a Ceph component does not let you query cluster status or run CLI commands; for that, deploy the Ceph toolbox.
#Current directory: rook/deploy/examples
kubectl apply -f toolbox.yaml
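Once the toolbox Pod is running, Ceph CLI commands can be executed from inside it to check cluster health; in the Rook examples the toolbox Deployment is named rook-ceph-tools.
#Wait for the toolbox deployment, then run ceph commands inside it
kubectl -n rook-ceph rollout status deploy/rook-ceph-tools
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd status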
3.6.4.8、Deploy the WordPress test service provided by Rook
The WordPress test service provided by Rook relies on a PV created through the StorageClass for persistence.
#Current directory: rook/deploy/examples
kubectl apply -f wordpress.yaml
[root@vm-k8s-master examples]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pv-claim Bound pvc-effe80d6-a1ae-42a7-b112-d11e50c06fb8 20Gi RWO rook-ceph-block 5m49s
wp-pv-claim Bound pvc-240ea773-ace2-4a9a-99ff-89f52bbd8084 20Gi RWO rook-ceph-block 7s
Note: the PVs here are created automatically. Once a PVC that carries a StorageClass field is submitted, Kubernetes creates the corresponding PV from that StorageClass; this is the Dynamic Provisioning mechanism for creating PVs on demand.
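The dynamically created PVs can be listed directly; the CLAIM column should reference mysql-pv-claim and wp-pv-claim.
#PVs created automatically through the rook-ceph-block StorageClass
kubectl get pv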
- Access WordPress
[root@vm-k8s-master examples]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 26h
wordpress LoadBalancer 10.1.43.219 <pending> 80:31225/TCP 14m
wordpress-mysql ClusterIP None <none> 3306/TCP 20m
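Because this local cluster has no cloud load balancer, the wordpress Service of type LoadBalancer keeps its EXTERNAL-IP at <pending>; it can still be reached through the NodePort shown above (31225) on any node's IP, or via a temporary port-forward as sketched below.
#Option 1: open http://<any-node-ip>:31225 in a browser (replace <any-node-ip> with a real node IP)
#Option 2: forward the service locally and open http://localhost:8080
kubectl port-forward svc/wordpress 8080:80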