Containerd as the Runtime

Install docker-ce 20.10 on all nodes (the docker-ce package pulls in containerd.io, which provides the Containerd runtime, as a dependency):

yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y
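You can confirm that the containerd.io dependency was installed along with docker-ce:

rpm -q containerd.io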

Configure the kernel modules required by Containerd (all nodes):

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Load the modules on all nodes:

modprobe -- overlay
modprobe -- br_netfilter
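To verify that both modules are actually loaded:

lsmod | grep -E 'overlay|br_netfilter'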

Configure the kernel parameters required by Containerd on all nodes:

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Apply the sysctl settings on all nodes:

sysctl --system
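To verify the three parameters took effect (all should print 1):

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables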

Generate Containerd's default configuration file on all nodes:

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

Switch Containerd's cgroup driver to systemd on all nodes:

vim /etc/containerd/config.toml

Find containerd.runtimes.runc.options and set SystemdCgroup = true:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
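If you prefer to script this change instead of editing with vim, a sed one-liner works, assuming the generated config already contains the default SystemdCgroup = false entry:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml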

On all nodes, change sandbox_image to a pause image address that matches your Kubernetes version:

sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
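Once kubeadm is installed (see below), kubeadm config images list shows which pause version your release expects. The edit itself can also be scripted; a sketch using the same Aliyun mirror address as above:

sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml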

Start Containerd on all nodes and enable it at boot:

systemctl daemon-reload
systemctl enable --now containerd
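Confirm the service came up:

systemctl is-active containerd
ctr version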

Configure the runtime endpoint that the crictl client connects to on all nodes:

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
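With the endpoint configured, crictl should now be able to reach Containerd:

crictl info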

Docker as the Runtime (Kubernetes versions below 1.24 only, since dockershim was removed in 1.24)

Install docker-ce 20.10 on all nodes:

yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y

Change the CgroupDriver to systemd:

mkdir -p /etc/docker

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

Enable Docker to start at boot on all nodes:

systemctl daemon-reload && systemctl enable --now docker
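Confirm Docker is using the systemd cgroup driver:

docker info | grep -i 'cgroup driver'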

Check whether the installation succeeded (ctr queries the underlying containerd that Docker uses):

ctr plugin ls

Install the Kubernetes Components

Install the latest 1.27.x versions of kubeadm, kubelet, and kubectl on all nodes:

yum install kubeadm-1.27* kubelet-1.27* kubectl-1.27* -y
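Confirm the installed versions:

kubeadm version
kubelet --version
kubectl version --client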

Enable kubelet to start at boot on all nodes (kubelet will restart in a loop until the cluster is initialized with kubeadm; this is expected):

systemctl daemon-reload
systemctl enable --now kubelet

High-Availability Components

Install HAProxy and Keepalived on all master nodes via yum:

yum install keepalived haproxy -y

Configure HAProxy on all master nodes (the HAProxy configuration is identical on every master node):

mkdir -p /etc/haproxy
vim /etc/haproxy/haproxy.cfg
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01    192.168.2.201:6443  check
  server k8s-master02    192.168.2.202:6443  check
  server k8s-master03    192.168.2.203:6443  check
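HAProxy can validate the configuration file before you rely on it:

haproxy -c -f /etc/haproxy/haproxy.cfg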

Check whether the ports are being listened on:

netstat -lntp

Configure Keepalived on all master nodes (the configuration differs between nodes; pay attention to the differences):

mkdir -p /etc/keepalived
vim /etc/keepalived/keepalived.conf

Check the NIC information (replace the interface value in keepalived.conf with your actual interface name):

ip a

Configuration for the Master01 node:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.2.201
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.236
    }
    track_script {
       chk_apiserver
    }
}

Configuration for the Master02 node:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.2.202
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.236
    }
    track_script {
       chk_apiserver
    }
}

Configuration for the Master03 node:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.2.203
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.236
    }
    track_script {
       chk_apiserver
    }
}

Configure the Keepalived health-check script on all master nodes:

vim /etc/keepalived/check_apiserver.sh

#!/bin/bash

# Check up to 3 times whether an haproxy process exists.
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

# If haproxy never showed up, stop keepalived so the VIP
# fails over to another master node.
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Make the script executable:

chmod +x /etc/keepalived/check_apiserver.sh

Start HAProxy and Keepalived:

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
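With both services up, the health-check script can also be exercised by hand; it should exit 0 while haproxy is alive (note that it stops keepalived and exits 1 if haproxy is down):

bash /etc/keepalived/check_apiserver.sh
echo $?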

Test whether Keepalived is working properly:

ping 192.168.2.236 -c 4
telnet 192.168.2.236 16443

If the VIP cannot be pinged and telnet does not print the escape-character prompt (the ']' in "Escape character is '^]'."), the VIP is considered unusable and you must not continue. Troubleshoot Keepalived first, for example the firewall and SELinux, the haproxy and keepalived status, and the listening ports (see also the monitor probe after this list):

  • On all nodes, the firewall must be disabled and inactive: systemctl status firewalld
  • On all nodes, SELinux must be disabled: getenforce
  • On the master nodes, check the haproxy and keepalived status: systemctl status keepalived haproxy
  • On the master nodes, check the listening ports: netstat -lntp
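The monitor frontend defined in haproxy.cfg above also provides a quick local probe on each master; it should return an HTTP 200 response while HAProxy is healthy:

curl -i http://127.0.0.1:33305/monitor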
