Three-Node Kubernetes Installation

一只小蜗牛

1. Installation Requirements

Before starting, the machines for the Kubernetes cluster must meet the following conditions:

  • Three virtual machines, OS: Linux version 3.10.0-1127.el7.x86_64 (CentOS 7)
  • Hardware: 2 CPUs; 3GB or more RAM; roughly 20GB of disk
  • The three machines form one cluster, with full network connectivity between them
  • The cluster can reach the Internet to pull images
  • Swap disabled
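The requirements above can be checked quickly with a small script (a sketch; the thresholds mirror the list above):

```shell
# Preflight sketch: check CPU count, RAM, and swap state on each machine.
cpus=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
swap_kb=$(awk '/SwapTotal/ {print $2}' /proc/meminfo)
echo "CPUs: $cpus (need >= 2)"
echo "RAM: ${mem_gb}GB (list above asks for 3GB+)"
if [ "$swap_kb" -eq 0 ]; then echo "swap: off"; else echo "swap: ON - disable it"; fi
```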

2. Installation Goals

  1. Install Docker and kubeadm on every machine in the cluster
  2. Deploy the Kubernetes master node
  3. Deploy the container network plugin
  4. Deploy the Kubernetes worker nodes and join them to the cluster
  5. Deploy the Dashboard web UI to inspect Kubernetes resources visually

3. Environment Preparation

  • master node: 192.168.11.99
  • node1 node: 192.168.11.100
  • node2 node: 192.168.11.101
## 1. Disable the firewall
$ systemctl stop firewalld
### Prevent the firewall from starting at boot
$ systemctl disable firewalld

## 2. Check the SELinux status
$ sestatus
SELinux status:                 disabled 

### 2.1 Permanently disable SELinux (takes effect after reboot):
$ vim /etc/sysconfig/selinux
Set: SELINUX=disabled
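The same edit can be made non-interactively; a sketch, operating on the same file as above:

```shell
# Replace whatever SELINUX= is currently set to with "disabled".
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
grep '^SELINUX=' /etc/sysconfig/selinux   # verify
```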

### 2.2 Temporarily disable SELinux
$ setenforce 0

## 3. Disable swap on each node
$  free -g                                                                            
              total        used        free      shared  buff/cache   available                              
Mem:              2           0           2           0           0           2                              
Swap:             1           0           1          


### 3.1 Temporarily disable
$ swapoff -a 

### 3.2 Permanently disable (takes effect after reboot):
$ vi /etc/fstab
Comment out the swap line
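Commenting the swap entry can also be scripted; a sketch:

```shell
# Prefix any uncommented fstab line that mounts swap with '#'.
sed -i '/^[^#].*\bswap\b/s/^/#/' /etc/fstab
grep swap /etc/fstab   # verify the line is now commented
```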

## 4. Set the hostname on each node:
$ hostnamectl set-hostname <hostname>
# master node:
hostnamectl set-hostname k8s-master
# node1 node:
hostnamectl set-hostname k8s-node1
# node2 node:
hostnamectl set-hostname k8s-node2

### 4.1 Check the hostname
[root@localhost vagrant]# hostname                                                                           
k8s-master  

## 5. Add hosts entries on the master (set the IPs and names to match your own VMs):
$ cat >> /etc/hosts << EOF
192.168.11.99 k8s-master
192.168.11.100 k8s-node1
192.168.11.101 k8s-node2
EOF

## 6. Configure time synchronization on the master:
### 6.1 Do an initial one-shot sync (chrony itself ships with CentOS 7; install it with `yum install -y chrony` if missing):
$ yum install ntpdate -y
$ ntpdate time.windows.com

### 6.2 Comment out the default NTP servers
sed -i 's/^server/#&/' /etc/chrony.conf

### 6.3 Point at public upstream NTP servers, and allow other nodes to sync from this machine
cat >> /etc/chrony.conf << EOF
server 0.asia.pool.ntp.org iburst
server 1.asia.pool.ntp.org iburst
server 2.asia.pool.ntp.org iburst
server 3.asia.pool.ntp.org iburst
allow all
EOF

### 6.4 Restart chronyd and enable it at boot:
systemctl enable chronyd && systemctl restart chronyd

### 6.5 Turn on network time synchronization
timedatectl set-ntp true

## 7. Configure time synchronization on the worker nodes
### 7.1 Install chrony:
yum install -y chrony

### 7.2 Comment out the default servers
sed -i 's/^server/#&/' /etc/chrony.conf

### 7.3 Use the internal master node as the upstream NTP server
echo server 192.168.11.99 iburst >> /etc/chrony.conf

### 7.4 Restart the service and enable it at boot:
systemctl enable chronyd && systemctl restart chronyd

### 7.5 Verify: the node now syncs time from the master
[root@k8s-node2 vagrant]# chronyc sources                                                                    
210 Number of sources = 1                                                                                    
MS Name/IP address         Stratum Poll Reach LastRx Last sample                                             
===============================================================================                              
^* k8s-master                    3   6    36   170    +15us[  +85us] +/- 4405us 

## 8. Pass bridged IPv4 traffic to iptables (all nodes)

$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system  # apply the settings
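On many kernels the `net.bridge.*` keys only appear after the br_netfilter module is loaded; if `sysctl --system` complains about unknown keys, load it first (a sketch):

```shell
# Load the bridge netfilter module and make it persist across reboots.
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl net.bridge.bridge-nf-call-iptables   # should print "... = 1"
```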

## 9. Load the IPVS kernel modules
Note: IPVS is the virtual-IP module behind Services: a ClusterIP is a VIP (virtual IP) for the cluster.
IPVS is already part of the mainline kernel, but to run kube-proxy in IPVS mode the following kernel modules must be loaded first.
Run this script on every Kubernetes node:

### 9.1 Configuration
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

### 9.2 Run the script
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules so that the required modules are reloaded automatically after a reboot. Use `lsmod | grep -e ip_vs -e nf_conntrack_ipv4` to confirm the kernel modules loaded correctly.
Each node also needs the ipset package installed. To inspect the IPVS proxy rules, it is worth installing the management tool ipvsadm as well.

### 9.3 Install the management tools
$ yum install ipset ipvsadm -y

4. Install Docker / kubeadm (the cluster bootstrapping tool) / kubelet (the node agent) on all nodes

Kubernetes' default CRI (container runtime) is Docker, so install Docker first.

4.1 Install Docker

Kubernetes still uses Docker as its default container runtime, through the dockershim CRI implementation built into the kubelet. Note that Kubernetes 1.13 supports Docker versions from 1.11.1 up to 18.06, while the latest Docker release is already 18.09, so we pin the install to 18.06.1-ce.
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ docker --version                                                                  
Docker version 18.06.1-ce, build e68fc7a

Configure a registry mirror:

$ cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
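Docker only reads daemon.json at startup, so restart it for the mirror to take effect (the grep just checks that the setting landed):

```shell
systemctl daemon-reload
systemctl restart docker
docker info 2>/dev/null | grep -A1 'Registry Mirrors'   # should list the mirror
```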

4.2 Add the Alibaba Cloud YUM repository for Kubernetes

$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4.3 Install kubeadm, kubelet, and kubectl on all nodes

Because releases move quickly, pin the versions to deploy:

$ yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
$ systemctl enable kubelet && systemctl start kubelet

5. Deploy the Kubernetes Master (run only on the master)

5.1 Run on the master node.

The default image registry k8s.gcr.io is unreachable from mainland China, so point kubeadm at the Alibaba Cloud mirror instead.
$ kubeadm init \
  --apiserver-advertise-address=192.168.11.99 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
kubeadm init parameters
      --apiserver-advertise-address string   IP address the API server advertises.
      --apiserver-bind-port int32            Port the API server listens on (default 6443).
      --apiserver-cert-extra-sans strings    Extra Subject Alternative Names (SANs) for the API server certificate; may be IPs or DNS names. The certificate is bound to its SANs.
      --cert-dir string                      Directory where certificates are stored (default "/etc/kubernetes/pki").
      --certificate-key string               Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
      --config string                        Path to a kubeadm configuration file.
      --cri-socket string                    Path to the CRI socket; if empty, kubeadm auto-detects it. Only specify this when multiple or non-standard CRI sockets are present.
      --dry-run                              Do not apply anything; only print what would be done.
      --feature-gates string                 Extra feature gates to enable, as key=value pairs.
      --help, -h                             Help text.
      --ignore-preflight-errors strings      Pre-flight checks whose errors are shown as warnings, e.g. 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
      --image-repository string              Registry to pull control-plane images from (default "k8s.gcr.io").
      --kubernetes-version string            Kubernetes version to install (default "stable-1").
      --node-name string                     Node name; defaults to the node's hostname.
      --pod-network-cidr string              Pod network CIDR; the control plane propagates per-node subnets so containers on each node use this network.
      --service-cidr string                  Service IP range (default "10.96.0.0/12").
      --service-dns-domain string            Service DNS suffix, e.g. "myorg.internal" (default "cluster.local").
      --skip-certificate-key-print           Do not print the key used to encrypt the control-plane certificates.
      --skip-phases strings                  Phases to skip.
      --skip-token-print                     Do not print the default bootstrap token generated by kubeadm init.
      --token string                         Token for establishing mutual trust between nodes and the control plane; format [a-z0-9]{6}\.[a-z0-9]{16}, e.g. abcdef.0123456789abcdef.
      --token-ttl duration                   Time before the token is automatically deleted (e.g. 1s, 2m, 3h). '0' means the token never expires (default 24h0m0s).
      --upload-certs                         Upload control-plane certificates to the kubeadm-certs Secret.
Output
[root@localhost sysctl.d]# kubeadm init --apiserver-advertise-address=192.168.11.99 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.17.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
W0923 10:03:00.077870    3217 validation.go:28] Cannot validate kube-proxy config - no validator is available                                                                                              
W0923 10:03:00.077922    3217 validation.go:28] Cannot validate kubelet config - no validator is available                                                                                                 
[init] Using Kubernetes version: v1.17.0                                                                                                                                                                   
[preflight] Running pre-flight checks                                                                                                                                                                      
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/             
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'                                                                                           
[preflight] Pulling images required for setting up a Kubernetes cluster                                                                                                                                    
[preflight] This might take a minute or two, depending on the speed of your internet connection                                                                                                            
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'                                                                                                              
                                                                                                                                                                                                           
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"                                                                                                   
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"                                                                                                                       
[kubelet-start] Starting the kubelet                                                                                                                                                                       
[certs] Using certificateDir folder "/etc/kubernetes/pki"                                                                                                                                                  
[certs] Generating "ca" certificate and key                                                                                                                                                                
[certs] Generating "apiserver" certificate and key                                                                                                                                                         
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.99]            
[certs] Generating "apiserver-kubelet-client" certificate and key                                                                                                                                          
[certs] Generating "front-proxy-ca" certificate and key                                                                                                                                                    
[certs] Generating "front-proxy-client" certificate and key                                                                                                                                                
[certs] Generating "etcd/ca" certificate and key                                                                                                                                                           
[certs] Generating "etcd/server" certificate and key                                                                                                                                                       
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.11.99 127.0.0.1 ::1]                                                                                      
[certs] Generating "etcd/peer" certificate and key                                                                                                                                                         
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.11.99 127.0.0.1 ::1]                                                                                        
[certs] Generating "etcd/healthcheck-client" certificate and key                                                                                                                                           
[certs] Generating "apiserver-etcd-client" certificate and key                                                                                                                                             
[certs] Generating "sa" key and public key                                                                                                                                                                 
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"                                                                                                                                                     
[kubeconfig] Writing "admin.conf" kubeconfig file                                                                                                                                                          
[kubeconfig] Writing "kubelet.conf" kubeconfig file                                                                                                                                                        
[kubeconfig] Writing "controller-manager.conf" kubeconfig file                                                                                                                                             
[kubeconfig] Writing "scheduler.conf" kubeconfig file                                                                                                                                                      
[control-plane] Using manifest folder "/etc/kubernetes/manifests"                                                                                                                                          
[control-plane] Creating static Pod manifest for "kube-apiserver"                                                                                                                                          
[control-plane] Creating static Pod manifest for "kube-controller-manager"                                                                                                                                 
W0923 10:04:06.247879    3217 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"                                                                            
[control-plane] Creating static Pod manifest for "kube-scheduler"                                                                                                                                          
W0923 10:04:06.249731    3217 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"                                                                            
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"                                                                                                                          
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s                                              
[apiclient] All control plane components are healthy after 26.503229 seconds                                                                                                                               
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace                                                                                                
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster                                                                       
[upload-certs] Skipping phase. Please see --upload-certs                                                                                                                                                   
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"                                                                                  
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]                                                                         
[bootstrap-token] Using token: tccfjm.u0k13eb29g9qaxzc                                                                                                                                                     
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles                                                                                                                         
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials                                                            
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token                                                                         
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster                                                                                      
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace                                                                                                                     
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key                                                                                      
[addons] Applied essential addon: CoreDNS                                                                                                                                                                  
[addons] Applied essential addon: kube-proxy                                                                                                                                                               
                                                                                                                                                                                                           
Your Kubernetes control-plane has initialized successfully!                                                                                                                                                
                                                                                                                                                                                                           
To start using your cluster, you need to run the following as a regular user:                                                                                                                              
                                                                                                                                                                                                           
  mkdir -p $HOME/.kube                                                                                                                                                                                     
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config                                                                                                                                                 
  sudo chown $(id -u):$(id -g) $HOME/.kube/config                                                                                                                                                          
                                                                                                                                                                                                           
You should now deploy a pod network to the cluster.                                                                                                                                                        
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:                                                                                                                                
  https://kubernetes.io/docs/concepts/cluster-administration/addons/                                                                                                                                       
                                                                                                                                                                                                           
Then you can join any number of worker nodes by running the following on each as root:                                                                                                                     
                                                                                                                                                                                                          
kubeadm join 192.168.11.99:6443 --token sc92uy.m2utl1i03ejf8kxb \                                                                                                                                          
    --discovery-token-ca-cert-hash sha256:c2e814618022854ddc3cb060689c520397a6faa0e2e132be2c11c2b1900d1789

(Record the kubeadm join command from the init output; you will need it when joining the worker nodes.)
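The bootstrap token in that command expires after 24 hours by default (see --token-ttl above). If it has expired or was lost, a fresh join command can be generated on the master:

```shell
# Prints a new "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line.
kubeadm token create --print-join-command
# List existing tokens; they have the form [a-z0-9]{6}.[a-z0-9]{16}:
kubeadm token list
```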

What the initialization phases do

  • [preflight] kubeadm runs pre-initialization checks.
  • [kubelet-start] Writes the kubelet configuration file "/var/lib/kubelet/config.yaml".
  • [certificates] Generates the various tokens and certificates.
  • [kubeconfig] Generates the kubeconfig files the kubelet needs to talk to the master.
  • [control-plane] Installs the master components, pulling their Docker images from the specified registry.
  • [bootstraptoken] Generates the token used later by kubeadm join to add nodes; record it.
  • [addons] Installs the kube-proxy and kube-dns add-ons. Once the master initializes successfully, the output explains how to configure kubectl for a regular user, how to install a Pod network, and how to register other nodes with the cluster.

5.2 Configure kubectl

kubectl is the command-line tool for managing a Kubernetes cluster, and we already installed it on every node. After the master finishes initializing, a little configuration is needed before kubectl is usable. Following the final hints in the kubeadm init output, it is recommended to run kubectl as a regular Linux user.

Create a regular user `centos` (optional; you can also skip this and use kubectl as root)
# Create the user and set its password to 123456
useradd centos && echo "centos:123456" | chpasswd centos

# Grant sudo rights with passwordless access
sed -i '/^root/a\centos  ALL=(ALL)       NOPASSWD:ALL' /etc/sudoers

# Copy the cluster admin kubeconfig into the user's ~/.kube directory
su - centos
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Enable kubectl command auto-completion (takes effect after re-login)
echo "source <(kubectl completion bash)" >> ~/.bashrc

5.3 Using kubectl:

# List nodes (only the master for now)
[centos@k8s-master ~]$ kubectl get nodes                                                                     
NAME         STATUS     ROLES    AGE   VERSION                                                               
k8s-master   NotReady   master   95m   v1.17.0   

# List pods (none deployed yet)
[centos@k8s-master ~]$ kubectl get po                                                                        
No resources found in default namespace.

# List pods in all namespaces
[centos@k8s-master ~]$ kubectl get po --all-namespaces                                                       
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE                          
kube-system   coredns-9d85f5447-k8q66              0/1     Pending   0          99m                          
kube-system   coredns-9d85f5447-prwrh              0/1     Pending   0          99m                          
kube-system   etcd-k8s-master                      1/1     Running   0          99m                          
kube-system   kube-apiserver-k8s-master            1/1     Running   0          99m                          
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          99m                          
kube-system   kube-proxy-k59lt                     1/1     Running   0          99m                          
kube-system   kube-scheduler-k8s-master            1/1     Running   0          99m

# Check cluster status: confirm every component is healthy
[centos@k8s-master ~]$ kubectl get  cs                                                                       
NAME                 STATUS    MESSAGE             ERROR                                                     
controller-manager   Healthy   ok                                                                            
scheduler            Healthy   ok                                                                            
etcd-0               Healthy   {"health":"true"} 

# Check the status of the system pods on this node
[centos@k8s-master ~]$  kubectl get pod -n kube-system -o wide                                                                                                                                             
NAME                                 READY   STATUS    RESTARTS   AGE    IP              NODE         NOMINATED NODE   READINESS GATES                                                                     
coredns-9d85f5447-k8q66              0/1     Pending   0          106m   <none>          <none>       <none>           <none>                                                                              
coredns-9d85f5447-prwrh              0/1     Pending   0          106m   <none>          <none>       <none>           <none>                                                                              
etcd-k8s-master                      1/1     Running   0          106m   192.168.11.99   k8s-master   <none>           <none>                                                                              
kube-apiserver-k8s-master            1/1     Running   0          106m   192.168.11.99   k8s-master   <none>           <none>                                                                              
kube-controller-manager-k8s-master   1/1     Running   0          106m   192.168.11.99   k8s-master   <none>           <none>                                                                              
kube-proxy-k59lt                     1/1     Running   0          106m   192.168.11.99   k8s-master   <none>           <none>                                                                              
kube-scheduler-k8s-master            1/1     Running   0          106m   192.168.11.99   k8s-master   <none>           <none> 
As expected, the CoreDNS pods, which depend on the pod network, are stuck in Pending (i.e. scheduling fails), because the network on this master node is not ready yet. If cluster initialization runs into problems, clean up with kubeadm reset and then run the initialization again.
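A sketch of that cleanup path (destructive; kubeadm reset itself warns that it does not flush iptables/IPVS rules or CNI state, so those are cleared manually):

```shell
kubeadm reset -f                      # tear down what kubeadm init created
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear                       # flush IPVS rules (if IPVS mode was enabled)
rm -rf /etc/cni/net.d $HOME/.kube/config
```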

6. Deploy the Network Plugin

For the cluster to work, a Pod network must be installed; without one, pods cannot communicate with each other. Kubernetes supports several network solutions; here we use flannel. Deploy it with the command below:

6.1 Install the Pod network plugin (CNI) on the master node; worker nodes download it automatically after joining

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If the download fails with:
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
open the link in a browser on the host machine, download the file, and copy it into the VM.

Then apply the file from the current directory:
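The same workaround can be scripted, assuming the host machine can reach GitHub and has SSH access to the master VM:

```shell
# Run on the host, not in the VM:
curl -fLo kube-flannel.yaml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
scp kube-flannel.yaml centos@192.168.11.99:~/
```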

[centos@k8s-master ~]$ ll                                                                                    
total 8                                                                                                      
-rw-rw-r-- 1 centos centos 4819 Sep 24 06:45 kube-flannel.yaml 

$ kubectl apply -f kube-flannel.yaml
Contents of kube-flannel.yaml:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: lizhenliang/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: lizhenliang/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
If the Pod image pull fails, you can switch to this mirror image: lizhenliang/flannel:v0.11.0-amd64
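A quick way to make that swap without hand-editing the manifest is `sed`. A minimal sketch (the sample file name and the quay.io image reference are assumptions; point the command at your actual flannel yaml):

```shell
# Create a sample file so the command can be tried safely; in practice,
# run the sed line against your real kube-flannel.yaml instead.
cat > /tmp/kube-flannel-sample.yaml <<'EOF'
        image: quay.io/coreos/flannel:v0.11.0-amd64
EOF

# Rewrite the image reference to the mirror (the '#' delimiter avoids
# escaping the slashes inside the image paths)
sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#lizhenliang/flannel:v0.11.0-amd64#g' \
    /tmp/kube-flannel-sample.yaml
cat /tmp/kube-flannel-sample.yaml
```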
# View Pod information
[centos@k8s-master ~]$ kubectl get pod -n kube-system -o wide                                                                                                                                              
NAME                                 READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES                                                                      
coredns-9d85f5447-k8q66              1/1     Running   0          21h   10.244.0.2      k8s-master   <none>           <none>                                                                               
coredns-9d85f5447-prwrh              1/1     Running   0          21h   10.244.0.3      k8s-master   <none>           <none>                                                                               
etcd-k8s-master                      1/1     Running   2          21h   192.168.11.99   k8s-master   <none>           <none>                                                                               
kube-apiserver-k8s-master            1/1     Running   1          21h   192.168.11.99   k8s-master   <none>           <none>                                                                               
kube-controller-manager-k8s-master   1/1     Running   1          21h   192.168.11.99   k8s-master   <none>           <none>                                                                               
kube-flannel-ds-rzbkv                1/1     Running   0          14m   192.168.11.99   k8s-master   <none>           <none>                                                                               
kube-proxy-k59lt                     1/1     Running   1          21h   192.168.11.99   k8s-master   <none>           <none>                                                                               
kube-scheduler-k8s-master            1/1     Running   1          21h   192.168.11.99   k8s-master   <none>           <none>  
From the Pod list above we can see a new Pod, kube-flannel-ds-rzbkv, which provides the Pod network; coredns has also come up successfully.

6.2 Check node status

[centos@k8s-master ~]$ kubectl get nodes                                                                     
NAME         STATUS   ROLES    AGE   VERSION                                                                 
k8s-master   Ready    master   21h   v1.17.0 

We can see that the k8s-master node's Status is now Ready.

At this point the Kubernetes master node is fully deployed. If all you need is a single-node Kubernetes cluster, it is ready to use now. Note, however, that by default the master node cannot run user Pods.
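If you do want user Pods to run on a single-node cluster, the usual approach is to remove the master taint; a sketch (this is the taint key kubeadm applies in v1.17 — verify it with `kubectl describe node` on your own cluster first):

```shell
# Allow user Pods to be scheduled on the master (single-node clusters only);
# the trailing '-' removes the taint
kubectl taint nodes --all node-role.kubernetes.io/master-
```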

7 Deploy the worker nodes

A Kubernetes worker node is almost identical to the master node: both run a kubelet component. The only difference is that during kubeadm init, after the kubelet starts, the master additionally runs the kube-apiserver, kube-scheduler, and kube-controller-manager system Pods. Run the following command on k8s-node1 and k8s-node2 to register them with the cluster:

7.1 Join the Kubernetes worker nodes to the cluster under the master's management

Run the following on each k8s-node to attach it to the cluster and accept management from the master.

Join the cluster

# On each worker node, run the join command with the token generated by kubeadm init:
$ kubeadm join 192.168.11.99:6443 --token tccfjm.u0k13eb29g9qaxzc --discovery-token-ca-cert-hash sha256:2db1d97e65a4c4d20118e53292876906388c95f08f40b9ddb6c485a155ca7007     

# Join succeeded
[preflight] Running pre-flight checks                                                                        
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driv
er is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/                            
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.servi
ce'                                                                                                          
[preflight] Reading configuration from the cluster...                                                        
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' 
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kub
e-system namespace                                                                                           
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"                         
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"     
[kubelet-start] Starting the kubelet                                                                         
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...                                      
                                                                                                             
This node has joined the cluster:                                                                            
* Certificate signing request was sent to apiserver and a response was received.                             
* The Kubelet was informed of the new secure connection details.                                             
                                                                                                             
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.   


# If you forgot the token, regenerate the join command with:
$ kubeadm token create --print-join-command

7.2 Check cluster status from the Kubernetes master node

[centos@k8s-master ~]$ kubectl get nodes                                                                     
NAME         STATUS   ROLES    AGE    VERSION                                                                
k8s-master   Ready    master   21h    v1.17.0                                                                
k8s-node1    Ready    <none>   2m8s   v1.17.0                                                                
k8s-node2    Ready    <none>   114s   v1.17.0 

# Check Pod status
[centos@k8s-master ~]$ kubectl get pod -n kube-system -o wide                                                                                                                                              
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES                                                                     
coredns-9d85f5447-k8q66              1/1     Running   0          22h   10.244.0.2       k8s-master   <none>           <none>                                                                              
coredns-9d85f5447-prwrh              1/1     Running   0          22h   10.244.0.3       k8s-master   <none>           <none>                                                                              
etcd-k8s-master                      1/1     Running   2          22h   192.168.11.99    k8s-master   <none>           <none>                                                                              
kube-apiserver-k8s-master            1/1     Running   1          22h   192.168.11.99    k8s-master   <none>           <none>                                                                              
kube-controller-manager-k8s-master   1/1     Running   1          22h   192.168.11.99    k8s-master   <none>           <none>                                                                              
kube-flannel-ds-crxlk                1/1     Running   0          45m   192.168.11.130   k8s-node1    <none>           <none>                                                                              
kube-flannel-ds-njjdv                1/1     Running   0          45m   192.168.11.131   k8s-node2    <none>           <none>                                                                              
kube-flannel-ds-rzbkv                1/1     Running   0          86m   192.168.11.99    k8s-master   <none>           <none>                                                                              
kube-proxy-66bw5                     1/1     Running   0          45m   192.168.11.130   k8s-node1    <none>           <none>                                                                              
kube-proxy-k59lt                     1/1     Running   1          22h   192.168.11.99    k8s-master   <none>           <none>                                                                              
kube-proxy-p7xch                     1/1     Running   0          45m   192.168.11.131   k8s-node2    <none>           <none>                                                                              
kube-scheduler-k8s-master            1/1     Running   1          22h   192.168.11.99    k8s-master   <none>           <none> 
Once all nodes are Ready, the Kubernetes cluster has been created successfully and everything is in place. If a Pod's status is Pending, ContainerCreating, or ImagePullBackOff, it is not ready yet; only Running means ready. If a Pod shows Init:ImagePullBackOff, its image failed to pull on the corresponding node; run kubectl describe pod to inspect the Pod and identify the failing image.

7.3 Images on the Kubernetes master and worker nodes

# Images on the Kubernetes master node
[centos@k8s-master ~]$ sudo docker images                                                                                                                                                                  
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE                                                                         
registry.aliyuncs.com/google_containers/kube-proxy                v1.17.0             7d54289267dc        21 months ago       116MB                                                                        
registry.aliyuncs.com/google_containers/kube-scheduler            v1.17.0             78c190f736b1        21 months ago       94.4MB                                                                       
registry.aliyuncs.com/google_containers/kube-apiserver            v1.17.0             0cae8d5cc64c        21 months ago       171MB                                                                        
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.17.0             5eb3b7486872        21 months ago       161MB                                                                        
registry.aliyuncs.com/google_containers/coredns                   1.6.5               70f311871ae1        22 months ago       41.6MB                                                                       
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        23 months ago       288MB                                                                        
lizhenliang/flannel                                               v0.11.0-amd64       ff281650a721        2 years ago         52.6MB                                                                       
registry.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        3 years ago         742kB

# Images on the Kubernetes worker node
[root@k8s-node1 vagrant]# docker images                                                                                                                                                                    
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE                                                                                      
registry.aliyuncs.com/google_containers/kube-proxy   v1.17.0             7d54289267dc        21 months ago       116MB                                                                                     
lizhenliang/flannel                                  v0.11.0-amd64       ff281650a721        2 years ago         52.6MB                                                                                    
registry.aliyuncs.com/google_containers/pause        3.1                 da86e6ba6ca1        3 years ago         742kB
In the worker node's image list we can see the flannel image, which confirms what we said earlier: when a node joins the master, it loads the network component used to connect the Pod network.

8 Test the Kubernetes cluster

Create a Pod in the Kubernetes cluster and verify that it runs correctly:

[centos@k8s-master ~]$ kubectl create deployment nginx --image=nginx:alpine                                   
deployment.apps/nginx created

# scale grows or shrinks the replica count as the workload's load rises or falls
[centos@k8s-master ~]$ kubectl scale deployment nginx --replicas=2                                            
deployment.apps/nginx scaled 

# Check the Pods' running status
[centos@k8s-master ~]$ kubectl get pods -l app=nginx -o wide                                                                                                                                               
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES                                                                                    
nginx-5b6fb6dd96-4t4dt   1/1     Running   0          10m     10.244.2.2   k8s-node2   <none>           <none>                                                                                             
nginx-5b6fb6dd96-594pq   1/1     Running   0          9m21s   10.244.1.2   k8s-node1   <none>           <none>

# Expose the nginx deployment as a Service, opening port 80 as a **NodePort**
[centos@k8s-master ~]$ kubectl expose deployment nginx --port=80 --type=NodePort                             
service/nginx exposed

# Check the Service's status (node port 30919 maps to container port 80)
[centos@k8s-master ~]$ kubectl get services nginx
NAME    TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE                                             
nginx   NodePort   10.96.72.70   <none>        80:30919/TCP   8m39s 

# Access the nginx service with curl
[root@k8s-node1 vagrant]# curl 192.168.11.130:30919                                                          
<!DOCTYPE html>                                                                                              
<html>                                                                                                       
<head>                                                                                                       
<title>Welcome to nginx!</title>                                                                             
<style>                                                                                                      
html { color-scheme: light dark; }                                                                           
body { width: 35em; margin: 0 auto;
.......

Finally, verify that DNS and the Pod network work: run Busybox in interactive mode
# busybox is a single binary that bundles over a hundred common Linux commands and tools
[centos@k8s-master ~]$ kubectl run -it curl --image=radial/busyboxplus:curl                                   
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl
 run --generator=run-pod/v1 or kubectl create instead.                                                       
If you don't see a command prompt, try pressing enter.                                                       

# Run nslookup nginx to check that the cluster-internal IP resolves correctly, verifying DNS
[ root@curl-69c656fd45-t8jth:/ ]$ nslookup nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 10.96.112.44 nginx.default.svc.cluster.local

# Access via the service name to verify that kube-proxy works
[ root@curl-69c656fd45-t8jth:/ ]$ curl http://nginx/
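The short name nginx resolves because a Pod's resolver search list expands it to the standard Service DNS form `<service>.<namespace>.svc.<cluster-domain>`. A small sketch of that naming pattern (cluster.local is the default kubeadm cluster domain and is an assumption here):

```shell
# Compose the FQDN that the resolver search list expands "nginx" to
svc=nginx
ns=default                 # namespace the Service was created in
domain=cluster.local       # default cluster domain (assumption)
fqdn="$svc.$ns.svc.$domain"
echo "$fqdn"               # → nginx.default.svc.cluster.local
```

This matches the `nginx.default.svc.cluster.local` name shown in the nslookup output above.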

9 Deploy the Dashboard

9.1 Prepare the Kubernetes Dashboard yaml file

# Download the config file
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

# Rename it
mv recommended.yaml kubernetes-dashboard.yaml

9.2 Edit the yaml config file

By default the Dashboard is only reachable from inside the cluster. Change the Service to NodePort type to expose a port externally:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

9.3 Create a service account and bind it to the default cluster-admin cluster role:

[centos@k8s-master ~]$ ll
total 16
-rw-rw-r-- 1 centos centos 4815 Sep 26 08:17 kube-flannel.yaml
-rw-rw-r-- 1 centos centos 7607 Sep 26 09:34 kubernetes-dashboard.yaml

[centos@k8s-master ~]$ kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
serviceaccount/dashboard-admin created

[centos@k8s-master ~]$ kubectl create clusterrolebinding dashboard-admin-rb --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin                                                          
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created


[centos@k8s-master ~]$ kubectl get secrets -n kubernetes-dashboard | grep dashboard-admin
dashboard-admin-token-j6zhm        kubernetes.io/service-account-token   3      9m36s                         
[centos@k8s-master ~]$ kubectl describe secrets dashboard-admin-token-j6zhm  -n kubernetes-dashboard
Name:         dashboard-admin-token-j6zhm
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 353ad721-17ff-48e2-a6b6-0654798de60b
                                                                                                              
Type:  kubernetes.io/service-account-token                                                                    
                                                                                                              
Data                                                                                                          
====                                                                                                          
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IndrOGZVSGFaWGVQVHZLTUdMOTdkd3NhaGI2UGhHbDdZSFVkVFk3ejYyNncifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tajZ6aG0iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzUzYWQ3MjEtMTdmZi00OGUyLWE2YjYtMDY1NDc5OGRlNjBiIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.XxrEE6AbDQEH-bu1SYd7FkRuOo98nJUy05t_fCkpa-xyx-VgIWfOsH9VVhOlINtzYOBbpAFnNb52w943t7XmeK9P0Jiwd0Z2YxbmhXbZLtIAFO9gWeTWVUxe0yIzlneDkQui_CLgFTzSHZZYed19La2ejghXeYXcWXPpMKBI3dohc2TTUj0cEfMmMeWPJef_6fj5Qyx7ujokwsw5mqO9givaI9zdBkiV3ky35SjBerYOlvbQ8TjzHygF9vhVZTVh38_Tuff8SomMDUv5xpZ2DQXXkTmXbGao7SFivX4U56tOxrGGktio7A0xVVqnlJN3Br5M-YUrVNJrm73MegKVWg
You will need this token to log in to the Dashboard later, so note it down somewhere; if you do lose it, it can be printed again.
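The token is a JWT: three base64url segments joined by dots, with the middle (payload) segment recording the service account and namespace. A self-contained sketch of decoding such a payload with standard tools (the token built here is a made-up sample, not the real secret above):

```shell
# Helper: base64url-encode stdin (strip padding/newlines, swap +/ for -_)
b64url() { base64 | tr -d '=\n' | tr '+/' '-_'; }

# Build a sample token; with a real one, paste it into $token instead
payload_json='{"sub":"system:serviceaccount:kubernetes-dashboard:dashboard-admin"}'
token="$(printf '{"alg":"RS256"}' | b64url).$(printf '%s' "$payload_json" | b64url).sig"

# Extract the payload segment, restore standard base64, re-pad, decode
seg=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="$seg="; done
decoded=$(printf '%s' "$seg" | base64 -d)
echo "$decoded"   # prints the payload JSON
```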

9.4 Apply the config file to start the service

[centos@k8s-master ~]$ kubectl apply -f kubernetes-dashboard.yaml                                                                                                                                          
namespace/kubernetes-dashboard created                                                                                                                                                                     
serviceaccount/kubernetes-dashboard created                                                                                                                                                                
service/kubernetes-dashboard created                                                                                                                                                                       
secret/kubernetes-dashboard-certs created                                                                                                                                                                  
secret/kubernetes-dashboard-csrf created                                                                                                                                                                   
secret/kubernetes-dashboard-key-holder created                                                                                                                                                             
configmap/kubernetes-dashboard-settings created                                                                                                                                                            
role.rbac.authorization.k8s.io/kubernetes-dashboard created                                                                                                                                                
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created                                                                                                                                         
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created                                                                                                                                         
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created                                                                                                                                  
deployment.apps/kubernetes-dashboard created                                                                                                                                                               
service/dashboard-metrics-scraper created                                                                                                                                                                  
deployment.apps/dashboard-metrics-scraper created 

9.5 Check status

kubectl get pods -n kubernetes-dashboard
kubectl get svc -n kubernetes-dashboard

10 Fixing the Dashboard being unreachable from Chrome

10.1 Problem description

After installing the K8S Dashboard, the page opens fine in the Firefox browser but fails to load in Google Chrome.

[centos@k8s-master ~]$ curl 192.168.11.130:30001
Client sent an HTTP request to an HTTPS server.

Clearly the request needs to use https.

10.2 Fixing the problem

Many browsers do not trust the certificate that kubeadm generates automatically, so we need to create our own.
# Create a directory to hold the key, certificate, and related files
[centos@k8s-master ~]$ mkdir kubernetes-key
[centos@k8s-master ~]$ cd kubernetes-key/
[centos@k8s-master kubernetes-key]$

# Generate the certificate
# 1) Generate the key for the certificate request
[centos@k8s-master kubernetes-key]$ openssl genrsa -out dashboard.key 2048                          
Generating RSA private key, 2048 bit long modulus
..........+++
...............................................+++
e is 65537 (0x10001)

# 2) Generate the certificate request; 192.168.11.99 below is the master node's IP address
[centos@k8s-master kubernetes-key]$ openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.11.99'

# 3) Generate the self-signed certificate
[centos@k8s-master kubernetes-key]$ openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt                                                                                          
Signature ok
subject=/CN=192.168.11.99
Getting Private key 
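Before replacing the secret, it is worth confirming that the certificate's CN really carries the master IP. A self-contained sketch (it regenerates a throwaway key/cert pair in a temp directory so the files created above are untouched; -days 365 extends validity past the 30-day default):

```shell
# Generate and inspect a throwaway self-signed certificate
dir=$(mktemp -d)
openssl genrsa -out "$dir/dashboard.key" 2048 2>/dev/null
openssl req -new -key "$dir/dashboard.key" -out "$dir/dashboard.csr" -subj '/CN=192.168.11.99'
openssl x509 -req -days 365 -in "$dir/dashboard.csr" -signkey "$dir/dashboard.key" \
    -out "$dir/dashboard.crt" 2>/dev/null

# Print the subject and validity window for a quick sanity check
openssl x509 -in "$dir/dashboard.crt" -noout -subject -dates
```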

# Delete the original certificate secret
[centos@k8s-master kubernetes-key]$ kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
secret "kubernetes-dashboard-certs" deleted

# Create a secret from the new certificate
[centos@k8s-master kubernetes-key]$ ll
total 12
-rw-rw-r-- 1 centos centos  989 Sep 26 11:15 dashboard.crt
-rw-rw-r-- 1 centos centos  895 Sep 26 11:14 dashboard.csr
-rw-rw-r-- 1 centos centos 1679 Sep 26 11:12 dashboard.key

[centos@k8s-master kubernetes-key]$ kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
secret/kubernetes-dashboard-certs created

# Find the running Pods
[centos@k8s-master kubernetes-key]$ kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-76585494d8-pcpk8   1/1     Running   0          99m
kubernetes-dashboard-5996555fd8-5sh8w        1/1     Running   0          99m


# Delete the existing Pods

[centos@k8s-master kubernetes-key]$ kubectl delete po dashboard-metrics-scraper-76585494d8-pcpk8 -n kubernetes-dashboard                                                                                   
pod "dashboard-metrics-scraper-76585494d8-pcpk8" deleted

[centos@k8s-master kubernetes-key]$ kubectl delete po kubernetes-dashboard-5996555fd8-5sh8w -n kubernetes-dashboard                                                                                        
pod "kubernetes-dashboard-5996555fd8-5sh8w" deleted

10.3 Verify access with Chrome

Wait for the new Pods to be ready, then open the Dashboard in a browser
[centos@k8s-master kubernetes-key]$ kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-76585494d8-wqfbp   1/1     Running   0          2m18s
kubernetes-dashboard-5996555fd8-c96js        1/1     Running   0          97s
Visit https://NodeIP:30001, choose token login, and paste the token recorded earlier to log in.


Log in with the token, and Kubernetes is ready to use.