References

Primary references:
Centos7.6部署k8s v1.16.4高可用集群(主备模式)
使用kubeadm在Centos8上部署kubernetes1.18
使用kubeadm部署k8s集群[v1.18.0]

1. Environment planning

a. Host planning

Hostname    CentOS version   IP               Docker version   flannel version   Keepalived version   Spec    Role
master01    7-8.2003         192.168.1.121    19.03.9          v0.11.0           v1.3.5               2C2G    control plane
master02    7-8.2003         192.168.1.122    19.03.9          v0.11.0           v1.3.5               2C2G    control plane
master03    7-8.2003         192.168.1.123    19.03.9          v0.11.0           v1.3.5               2C2G    control plane
work01      7-8.2003         192.168.1.131    19.03.9          /                 /                    2C2G    worker node
work02      7-8.2003         192.168.1.132    19.03.9          /                 /                    2C2G    worker node
work03      7-8.2003         192.168.1.133    19.03.9          /                 /                    2C2G    worker node
VIP         7-8.2003         192.168.1.200    19.03.9          v0.11.0           v1.3.5               2C2G    floats across the control plane
client      7-8.2003         192.168.1.201    /                /                 /                    2C2G    client

There are 7 servers in total: 3 control-plane nodes (one hosted on VirtualPC), 3 workers, and 1 client that is left untouched.
Test host has 16 GB RAM: with every cluster node joined to master1, host memory usage sat at roughly 99% and kubectl get nodes was frequently refused, so each VM's memory was later lowered to 1200 MB.
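After lowering vb.memory, a quick way to spot-check guest memory across the six k8s VMs (an illustrative loop; run from the Vagrant project directory, using the VM names defined in the Vagrantfile below):

for m in k8s-master1 k8s-master2 k8s-master3 k8s-worker1 k8s-worker2 k8s-worker3; do
    echo "== $m =="
    vagrant ssh $m -c "free -m | head -2"    # total/used memory inside the guest
done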

b. Vagrant preparation

Preparation of the centos7 box is omitted here.

Vagrantfile.default (builds the VMs)
Vagrant.configure("2") do |config|
    config.vm.provision "shell", inline: <<-SHELL
        echo "Machines all up"
    SHELL

    # SSH password login: re-enabled on demand via `vagrant reload --provision`
    # (shell provisioners run privileged by default, so no sudo is needed)
    config.ssh.username="vagrant"
    config.ssh.password="vagrant"
    config.vm.provision "shell", inline: <<-SHELL
        sed -i "s/PasswordAuthentication no/# PasswordAuthentication no/" /etc/ssh/sshd_config && systemctl restart sshd
    SHELL

    # One definition per node: hostname => IP (same box, memory, and CPUs everywhere)
    nodes = {
        "master1" => "192.168.1.121",
        "master2" => "192.168.1.122",
        "master3" => "192.168.1.123",
        "worker1" => "192.168.1.131",
        "worker2" => "192.168.1.132",
        "worker3" => "192.168.1.133",
    }
    nodes.each do |hostname, ip|
        config.vm.define "k8s-#{hostname}" do |node|
            node.vm.hostname = hostname
            node.vm.box = "centos7"
            node.vm.network "public_network", ip: ip
            node.vm.provider "virtualbox" do |vb|
                vb.memory = 1200
                vb.cpus = 2
            end
        end
    end
end

Password login is not enabled on the first boot; a second pass with vagrant reload --provision (restart and re-run provisioners) applies it to all machines in one go, after which you can log in remotely with a password.
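The full bring-up sequence then looks like this (vagrant up and vagrant reload --provision are standard Vagrant commands):

vagrant up                      # first boot: creates the VMs, password SSH still off
vagrant reload --provision      # reboot and re-run provisioners, enabling password SSH
ssh vagrant@192.168.1.121       # now reachable with the password "vagrant"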

2. Installation walkthrough

Apart from the client, which stays an empty machine, the other six are grouped as: 121 (master + VIP 200), masters (122, 123), and workers (131, 132, 133). Run the following installation steps on all of them; a distribution loop is sketched right below.
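One way to push and run the initialization script on all six nodes is a simple loop (a sketch; it assumes core.sh from the next section has been saved locally and that root logins accept the same password used with sshpass later in this guide):

for ip in 121 122 123 131 132 133; do
    scp core.sh root@192.168.1.$ip:/tmp/
    ssh root@192.168.1.$ip 'bash /tmp/core.sh'
done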

a. Environment initialization

Initialization covers: system parameters, firewall, swap, and hosts entries; SSH setup to ease later Xshell connections; and mirror/repo configuration plus installation of docker-ce, kubelet, kubeadm, and kubectl.

System initialization script
[192.168.1.121]# vim core.sh
#!/bin/bash
### 0. Ensure the script runs as root (a plain `sudo su` would not elevate the remaining lines)
if [[ $EUID -ne 0 ]]; then
  echo "please run this script as root"
  exit 1
fi
id

### 1. Environment initialization
# 1.1 Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# 1.2 Disable SELinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
setenforce 0
# 1.3 Disable swap, now and in /etc/fstab for the next boot
swapoff -a
sed -i.bak '/swap/s/^/#/' /etc/fstab
# 1.4 服务器规划
result=$(cat /etc/hosts | grep "节点主机")
if [[ "$result" != "" ]]; then
    echo
else
    cat <<EOF >>  /etc/hosts
#节点主机
192.168.1.121 master1
192.168.1.122 master2
192.168.1.123 master3
192.168.1.131 worker1
192.168.1.132 worker2
192.168.1.133 worker3

# Fallback for raw.githubusercontent.com timeouts
199.232.68.133 raw.githubusercontent.com
EOF
fi
# 1.5 Hostnames are already set by Vagrant; otherwise: hostnamectl set-hostname master1
# 1.6 Time sync with chrony (instead of ntp)
timedatectl set-timezone Asia/Shanghai
yum install chrony -y
cat <<EOF >  /etc/chrony.conf
server ntp1.aliyun.com iburst minpoll 4 maxpoll 10
server ntp2.aliyun.com iburst minpoll 4 maxpoll 10
server ntp3.aliyun.com iburst minpoll 4 maxpoll 10
EOF
systemctl start chronyd.service
systemctl enable chronyd.service
# 1.7 Make iptables see bridged traffic (bridge-nf-call-iptables=1 is required by k8s networking)
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf

### 2. Install Docker and the Kubernetes packages
# 2.1 Set up yum repos
yum install -y yum-utils device-mapper-persistent-data lvm2 wget bash-completion.noarch
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/epel-7.repo
wget -O /etc/yum.repos.d/docker-ce.repo "https://download.docker.com/linux/centos/docker-ce.repo"
sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# 2.2 Install docker-ce and kubelet/kubeadm/kubectl (via the Aliyun Kubernetes repo)
#yum remove docker docker-common docker-selinux docker-engine -y
yum clean all && yum makecache fast
yum install -y docker-ce
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum install -y kubelet-1.18.3 kubeadm-1.18.3 kubectl-1.18.3
systemctl enable kubelet   # kubelet crash-loops until kubeadm init/join runs; that is expected
# 2.3 docker配置cgroup驱动
IPADDR=$(cat /etc/sysconfig/network-scripts/ifcfg-eth1|grep IPADDR)
ip=${IPADDR:7}
point=${ip:10} #视实际地址而定
cat <<EOF > /etc/docker/daemon.json
{
    "graph": "/data/docker",
    "storage-driver": "overlay2",
    "registry-mirrors": ["https://kuogup1r.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "bip": "172.7.${point}.1/24",
    "live-restore": true
}
EOF
#systemctl daemon-reload 
systemctl start docker 
systemctl enable docker

echo ">> Done!"

The script is idempotent and safe to re-run.
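A few quick spot-checks to confirm the script took effect on a node (illustrative):

getenforce                                   # expect Disabled (or Permissive until reboot)
free -h | grep -i swap                       # expect 0B of swap in use
sysctl net.bridge.bridge-nf-call-iptables    # expect = 1
docker info 2>/dev/null | grep -i cgroup     # expect Cgroup Driver: systemd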

Image download script
# 2.4 Pull the k8s images through a mirror registry and re-tag them to the official names
for img in `kubeadm config images list`; do
    docker pull "gcrxio/$(echo $img | tr '/' '_')" && docker tag "gcrxio/$(echo $img | tr '/' '_')" $img
    docker rmi -f "gcrxio/$(echo $img | tr '/' '_')"; ## necessary: drop the mirror-name tag
done
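For reference, kubeadm config images list for v1.18.3 prints roughly the following (versions are indicative and depend on the installed kubeadm build):

k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7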

At this point docker-ce, kubelet, kubeadm, kubectl, and the Docker images needed to set up k8s are all in place; node preparation is complete.
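To double-check the versions before moving on (sketch):

docker --version                  # expect 19.03.x
kubeadm version -o short          # expect v1.18.3
kubelet --version                 # expect Kubernetes v1.18.3
docker images | grep k8s.gcr.io   # the images pulled and re-tagged above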

b. Cluster initialization

Main tasks: initialize the VIP node, distribute the certificates and VIP config files, and deploy the flannel network.

Distribute certificates
[192.168.1.121]#
for host in '121' '122' '123'; do
    ssh root@192.168.1.$host 'ln -s /opt/etcd-v3.4.9-linux-amd64/ /opt/etcd'
    ssh root@192.168.1.$host 'mkdir -p /data/etcd/etcd-server /data/logs/etcd-server /opt/certs'
    scp /opt/certs/etcd-peer* root@192.168.1.$host:/opt/certs/
    scp /opt/certs/ca.pem root@192.168.1.$host:/opt/certs/
done
master1 node script
#!/bin/bash

cat <<EOF >  kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.3
apiServer:
  certSANs:    # list every kube-apiserver hostname and IP, plus the VIP
  - master1
  - master2
  - master3
  - worker1
  - worker2
  - worker3
  - 192.168.1.121
  - 192.168.1.122
  - 192.168.1.123
  - 192.168.1.131
  - 192.168.1.132
  - 192.168.1.133
  - 192.168.1.200
controlPlaneEndpoint: "192.168.1.121:6443"   # ideally the VIP (192.168.1.200) or a DNS name; the VIP timed out in this setup, so master1's address is used
networking:
  podSubnet: "10.244.0.0/16"
EOF
# https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
#kubeadm init --apiserver-advertise-address 192.168.1.121 --kubernetes-version 1.18.3 --pod-network-cidr 10.244.0.0/16
kubeadm init --config=kubeadm-config.yaml

# Passwordless SSH from master1 to the other nodes (for testing only)
ssh-keygen -f /root/.ssh/id_rsa -N ""
CONTROL_PLANE_IPS="192.168.1.122 192.168.1.123 192.168.1.131 192.168.1.132 192.168.1.133"
yum install -y sshpass
for host in ${CONTROL_PLANE_IPS}; do
    sshpass -p 'vagrant' ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host
done
# Distribute keys and config files
# First create /etc/kubernetes/pki/etcd/ on master2 and master3; copy no more than these files
for host in 22 23; do
    scp /etc/kubernetes/admin.conf root@192.168.1.1$host:/etc/kubernetes/
    scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.pub,sa.key,front-proxy-ca.crt,front-proxy-ca.key} root@192.168.1.1$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} root@192.168.1.1$host:/etc/kubernetes/pki/etcd/
done
# Persist KUBECONFIG so kubectl works after reboot
export KUBECONFIG=/etc/kubernetes/admin.conf
result=$(grep "export KUBECONFIG" /etc/rc.d/rc.local)
if [[ "$result" == "" ]]; then
    echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/rc.d/rc.local
fi

echo ">> Done!"
Tail of the kubeadm init output:

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join 192.168.1.121:6443 --token 3l0971.r1y2kl78b2q5zowe \
    --discovery-token-ca-cert-hash sha256:7d4d9dab1a73082a7a4b4e2df98c042028cdba08400a91b925669448eb95d6ee \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.121:6443 --token 3l0971.r1y2kl78b2q5zowe \
    --discovery-token-ca-cert-hash sha256:7d4d9dab1a73082a7a4b4e2df98c042028cdba08400a91b925669448eb95d6ee
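Bootstrap tokens expire after 24 hours by default; if the join commands above stop working, generate a fresh one on master1:

kubeadm token create --print-join-command    # prints a new worker join command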

c. non-vip.sh: joining the cluster (2 masters + 3 workers)

First view the cluster state on the VIP host master1.

kubectl get pod --all-namespaces
kubectl get node
### If this errors out, export the kubeconfig or re-apply flannel
# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl replace --force -f kube-flannel.yml
Masters join the cluster
# Persist KUBECONFIG on the new master
export KUBECONFIG=/etc/kubernetes/admin.conf
result=$(grep "export KUBECONFIG" /etc/rc.d/rc.local)
if [[ "$result" == "" ]]; then
    echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/rc.d/rc.local
fi

kubeadm join 192.168.1.121:6443 --token ovplco.edwbwsrtff7egbio \
    --discovery-token-ca-cert-hash sha256:a566c43f8cfa958d17b07193f9bc4e3a0f0303b44108a8ed153d553647d9d566 \
    --control-plane 
Workers join the cluster
kubeadm join 192.168.1.121:6443 --token ovplco.edwbwsrtff7egbio \
    --discovery-token-ca-cert-hash sha256:a566c43f8cfa958d17b07193f9bc4e3a0f0303b44108a8ed153d553647d9d566 

Check the cluster state on master1 again to confirm.
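Once flannel is up, all six nodes should report Ready; the output looks roughly like this (illustrative):

kubectl get nodes
# NAME      STATUS   ROLES    AGE   VERSION
# master1   Ready    master   40m   v1.18.3
# master2   Ready    master   25m   v1.18.3
# master3   Ready    master   25m   v1.18.3
# worker1   Ready    <none>   15m   v1.18.3
# worker2   Ready    <none>   15m   v1.18.3
# worker3   Ready    <none>   15m   v1.18.3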

d. Additional components

Deploy the flannel network
wget -O kube-flannel.yml "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml"
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/' kube-flannel.yml 
kubectl apply -f kube-flannel.yml 
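To verify that flannel started everywhere (this manifest's DaemonSet runs in kube-system with the app=flannel label):

kubectl -n kube-system get pods -l app=flannel -o wide    # expect one Running pod per node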
Install kubernetes-dashboard
wget  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
# edit recommended.yaml as needed before applying
kubectl create -f recommended.yaml
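Common follow-up steps (a sketch: expose the dashboard Service as NodePort and create an admin ServiceAccount to obtain a login token; the admin-user name is an arbitrary choice):

# expose the dashboard on a node port
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
# create an admin account and fetch its token for the dashboard login page
kubectl -n kubernetes-dashboard create serviceaccount admin-user
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')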
