Previously we installed a simple Kubernetes cluster with one master node and three worker nodes, and etcd was not set up as a cluster.
This time we will install a Kubernetes cluster with three master nodes backed by an etcd cluster.

Node planning

This installation uses three master nodes and three worker nodes.
The etcd cluster runs on the master nodes.
A virtual IP is also reserved for keepalived.

Node IP
M0 10.xx.xx.xx
M1 10.xx.xx.xx
M2 10.xx.xx.xx
N0 10.xx.xx.xx
N1 10.xx.xx.xx
N2 10.xx.xx.xx

virtual_ipaddress: 10.xx.xx.xx


Preparation before starting the cluster (run as root)

Node preparation (run on every machine)

This includes setting the hostname, disabling the firewall, and so on.
Kubernetes identifies nodes by hostname, so make sure each host gets a unique name.
The firewall is disabled to avoid unnecessary network problems.

# Replace the ${hostname} variable with the planned hostname, e.g. M0, N0, N1
sudo hostnamectl set-hostname ${hostname}
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i -re '/^\s*SELINUX=/s/^/#/' -e '$i\\SELINUX=disabled'  /etc/selinux/config
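The sed one-liner above is easy to get wrong, so it can be dry-run on a scratch file first. The stand-in below is a minimal, hypothetical mimic of the relevant lines of /etc/selinux/config:

```shell
# Minimal stand-in for /etc/selinux/config to test the one-liner against
printf '# comment\nSELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-test
sed -i -re '/^\s*SELINUX=/s/^/#/' -e '$i\\SELINUX=disabled' /tmp/selinux-test
# The old setting is commented out and SELINUX=disabled is inserted before the last line
grep -c '^#SELINUX=enforcing' /tmp/selinux-test   # prints 1
grep -c '^SELINUX=disabled' /tmp/selinux-test     # prints 1
```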

Set up SSH trust between the nodes to make copying files around easier later. You can establish it quickly with ssh-copy-id, or set it up by hand; there are plenty of tutorials online.

Install docker (run on every machine)

yum install docker -y
systemctl enable docker && systemctl start docker

Change docker's log driver to json-file. This does not affect the installation; it just makes it easier to set up an EFK log-collection stack later.
`docker info` shows the current log driver; CentOS 7 defaults to journald.
Different docker versions are configured differently. The latest official docs edit the /etc/docker/daemon.json file; the version I installed is 1.12.6, which is changed as follows.

vim /etc/sysconfig/docker

# Change OPTIONS as below, then restart docker
OPTIONS='--selinux-enabled --log-driver=json-file --signature-verification=false'
systemctl restart docker
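On newer Docker releases the same setting lives in /etc/docker/daemon.json, as the official docs describe. A minimal sketch is below; it writes to a temporary path so it can be inspected, then on a real host you would copy it to /etc/docker/daemon.json and restart docker. The max-size log-rotation option is an extra assumption, not part of the original setup:

```shell
# Sketch of an equivalent daemon.json; on a real host this goes to /etc/docker/daemon.json
cat >/tmp/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
grep -c '"log-driver": "json-file"' /tmp/daemon.json   # prints 1
```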

Install kubeadm, kubelet and kubectl (run on every machine)

  • kubeadm: a tool for bootstrapping a k8s cluster quickly
  • kubelet: the basic k8s node agent, responsible for creating and managing pods and containers and for communicating with the cluster master
  • kubectl: the k8s command-line client, used to send commands to the cluster
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl

The official docs note that some users on RHEL/CentOS 7 have hit routing problems because iptables was bypassed, so net.bridge.bridge-nf-call-iptables must be set to 1 in the sysctl config:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Start kubelet:

systemctl enable kubelet && systemctl start kubelet

That completes the preparation. At this point kubelet restarts every few seconds until it receives instructions from kubeadm,
so seeing kubelet as not running in `systemctl status kubelet` is normal; run it a few times and you will see kubelet cycling between stopped and restarting.

Install the etcd cluster (on the three master nodes)

Create the etcd CA certificate

  1. Install cfssl and cfssljson

    curl -o /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    curl -o /usr/local/bin/cfssljson   https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    chmod +x /usr/local/bin/cfssl*
  2. SSH into the etcd0 node (in my plan, the master0 node) and run the commands below.
    When they finish, the /etc/kubernetes/pki/etcd directory contains two files: ca-config.json and ca-csr.json.

    mkdir -p /etc/kubernetes/pki/etcd
    cd /etc/kubernetes/pki/etcd
    
    cat >ca-config.json <<EOF
    {
       "signing": {
           "default": {
               "expiry": "43800h"
           },
           "profiles": {
               "server": {
                   "expiry": "43800h",
                   "usages": [
                       "signing",
                       "key encipherment",
                       "server auth",
                       "client auth"
                   ]
               },
               "client": {
                   "expiry": "43800h",
                   "usages": [
                       "signing",
                       "key encipherment",
                       "client auth"
                   ]
               },
               "peer": {
                   "expiry": "43800h",
                   "usages": [
                       "signing",
                       "key encipherment",
                       "server auth",
                       "client auth"
                   ]
               }
           }
       }
    }
    EOF
    
    cat >ca-csr.json <<EOF
    {
       "CN": "etcd",
       "key": {
           "algo": "rsa",
           "size": 2048
       }
    }
    EOF
  3. Generate the CA certificate

    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Generate the etcd client certificate

Run the following on the etcd0 node; it produces two files, client.pem and client-key.pem.

cat >client.json <<EOF
{
    "CN": "client",
    "key": {
        "algo": "ecdsa",
        "size": 256
    }
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client

Generate the etcd server and peer certificates

  1. Set the PEER_NAME and PRIVATE_IP environment variables (on every etcd machine)

    # Note: ens192 below is the name of your actual NIC; it might be eth1 or similar. Check with `ip addr`.
    export PEER_NAME=$(hostname)
    export PRIVATE_IP=$(ip addr show ens192 | grep -Po 'inet \K[\d.]+')
  2. Copy the CA just generated on etcd0 to the other two etcd machines (run on the two etcd peers).
    This requires the SSH trust you set up earlier.

    mkdir -p /etc/kubernetes/pki/etcd
    cd /etc/kubernetes/pki/etcd
    scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca.pem .
    scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca-key.pem .
    scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/client.pem .
    scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/client-key.pem .
    scp root@<etcd0-ip-address>:/etc/kubernetes/pki/etcd/ca-config.json .
  3. Run the following on every etcd machine to generate peer.pem, peer-key.pem, server.pem and server-key.pem

    cfssl print-defaults csr > config.json
    sed -i '0,/CN/{s/example\.net/'"$PEER_NAME"'/}' config.json
    sed -i 's/www\.example\.net/'"$PRIVATE_IP"'/' config.json
    sed -i 's/example\.net/'"$PEER_NAME"'/' config.json
    
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server config.json | cfssljson -bare server
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer config.json | cfssljson -bare peer
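To see what the three sed commands actually do, they can be run against a trimmed, hypothetical mock of the `cfssl print-defaults csr` output (the real output has more fields; only CN and hosts matter here):

```shell
# Trimmed, hypothetical mock of `cfssl print-defaults csr` output
PEER_NAME=m0
PRIVATE_IP=10.0.0.12
cat >/tmp/config.json <<'EOF'
{
    "CN": "example.net",
    "hosts": [
        "example.net",
        "www.example.net"
    ]
}
EOF
# 1) replace example.net with the peer name on the CN line only
sed -i '0,/CN/{s/example\.net/'"$PEER_NAME"'/}' /tmp/config.json
# 2) replace the www.example.net host entry with the private IP
sed -i 's/www\.example\.net/'"$PRIVATE_IP"'/' /tmp/config.json
# 3) replace the remaining example.net host entry with the peer name
sed -i 's/example\.net/'"$PEER_NAME"'/' /tmp/config.json
grep -c '"CN": "m0"' /tmp/config.json    # prints 1
grep -c '"10.0.0.12"' /tmp/config.json   # prints 1
```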

Start the etcd cluster (run on every etcd machine)

There are two options here: run etcd directly on the hosts, or run it on k8s as static pods. I use the first option and run it directly on the hosts.

  1. Install etcd

    cd /tmp
    export ETCD_VERSION=v3.1.10
    curl -sSL https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz | tar -xzv --strip-components=1 -C /usr/local/bin/
    rm -rf etcd-$ETCD_VERSION-linux-amd64*
  2. Generate the etcd environment file, which will be used below

    touch /etc/etcd.env
    echo "PEER_NAME=$PEER_NAME" >> /etc/etcd.env
    echo "PRIVATE_IP=$PRIVATE_IP" >> /etc/etcd.env
  3. Create the systemd unit file for the etcd service.
    Note: replace <etcd0-ip-address> and the other placeholders below with the machines' real IP addresses; m0, m1 and so on are the etcd member names.

    cat >/etc/systemd/system/etcd.service <<EOF
    [Unit]
    Description=etcd
    Documentation=https://github.com/coreos/etcd
    Conflicts=etcd.service
    Conflicts=etcd2.service
    
    [Service]
    EnvironmentFile=/etc/etcd.env
    Type=notify
    Restart=always
    RestartSec=5s
    LimitNOFILE=40000
    TimeoutStartSec=0
    
    ExecStart=/usr/local/bin/etcd --name ${PEER_NAME} \
        --data-dir /var/lib/etcd \
        --listen-client-urls https://${PRIVATE_IP}:2379 \
        --advertise-client-urls https://${PRIVATE_IP}:2379 \
        --listen-peer-urls https://${PRIVATE_IP}:2380 \
        --initial-advertise-peer-urls https://${PRIVATE_IP}:2380 \
        --cert-file=/etc/kubernetes/pki/etcd/server.pem \
        --key-file=/etc/kubernetes/pki/etcd/server-key.pem \
        --client-cert-auth \
        --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \
        --peer-cert-file=/etc/kubernetes/pki/etcd/peer.pem \
        --peer-key-file=/etc/kubernetes/pki/etcd/peer-key.pem \
        --peer-client-cert-auth \
        --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \
        --initial-cluster   m0=https://<etcd0-ip-address>:2380,m1=https://<etcd1-ip-address>:2380,m2=https://<etcd2-ip-address>:2380 \
        --initial-cluster-token my-etcd-token \
        --initial-cluster-state new
    
    [Install]
    WantedBy=multi-user.target
    EOF
  4. Start the etcd cluster

    systemctl daemon-reload
    systemctl start etcd
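The --initial-cluster value in the unit file must be identical on all three nodes, so it is error-prone to hand-edit the placeholders on each machine. A small POSIX-shell sketch (with hypothetical names and IPs) can generate the value once instead:

```shell
# Hypothetical member name/IP pairs; substitute the real plan
INITIAL_CLUSTER=""
for pair in "m0 10.0.0.10" "m1 10.0.0.11" "m2 10.0.0.12"; do
    set -- $pair                       # split "name ip" into $1 and $2
    INITIAL_CLUSTER="${INITIAL_CLUSTER:+$INITIAL_CLUSTER,}$1=https://$2:2380"
done
echo "$INITIAL_CLUSTER"
# prints m0=https://10.0.0.10:2380,m1=https://10.0.0.11:2380,m2=https://10.0.0.12:2380
```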

Set up the load balancer for the master nodes (keepalived, run on the three master nodes)

  1. Install keepalived

    yum install keepalived -y
  2. Edit the configuration file

    • state: MASTER on the primary master node (m0), BACKUP on the other masters
    • interface: the NIC name (ens192 in my case)
    • priority: the weight; the primary master should be higher than the others (e.g. 101 on m0, 100 elsewhere)
    • auth_pass: any random string
    • virtual_ipaddress: the virtual IP reserved for the master nodes
    ! Configuration File for keepalived
    global_defs {
      router_id LVS_DEVEL
    }
    
    vrrp_script check_apiserver {
      script "/etc/keepalived/check_apiserver.sh"
      interval 3
      weight -2
      fall 10
      rise 2
    }
    
    vrrp_instance VI_1 {
        state <STATE>
        interface <INTERFACE>
        virtual_router_id 51
        priority <PRIORITY>
        authentication {
            auth_type PASS
            auth_pass 4be37dc3b4c90194d1600c483e10ad1d
        }
        virtual_ipaddress {
            <VIRTUAL-IP>
        }
        track_script {
            check_apiserver
        }
    }
  3. Health-check script
    Save the following as /etc/keepalived/check_apiserver.sh (the path referenced in the config above), replacing <VIRTUAL-IP> with the reserved virtual IP

    #!/bin/sh

    errorExit() {
        echo "*** $*" 1>&2
        exit 1
    }

    curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error GET https://localhost:6443/"
    if ip addr | grep -q <VIRTUAL-IP>; then
        curl --silent --max-time 2 --insecure https://<VIRTUAL-IP>:6443/ -o /dev/null || errorExit "Error GET https://<VIRTUAL-IP>:6443/"
    fi
  4. Start keepalived

    systemctl start keepalived
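Rather than editing <VIRTUAL-IP> by hand, the placeholder can be stamped in with sed. The sketch below uses a hypothetical VIP and a one-line stand-in for the script body (the full script is shown above):

```shell
VIRTUAL_IP=10.0.0.100   # hypothetical; use the planned VIP
# One-line stand-in for the health-check script body
cat >/tmp/check_apiserver.sh <<'EOF'
curl --silent --max-time 2 --insecure https://<VIRTUAL-IP>:6443/ -o /dev/null
EOF
sed -i 's/<VIRTUAL-IP>/'"$VIRTUAL_IP"'/g' /tmp/check_apiserver.sh
grep -c 'https://10.0.0.100:6443' /tmp/check_apiserver.sh   # prints 1
```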

Start the k8s cluster

Start the master0 node

  1. Generate the configuration file:

    • <private-ip>: the master node's IP address
    • <etcd0-ip>, <etcd1-ip>, <etcd2-ip>: the etcd cluster's IP addresses
    • <podCIDR>: the pod CIDR, i.e. the k8s pod network. I use flannel here, so it is set to 10.244.0.0/16. See the CNI network section for details.
    • To install flannel, run sysctl net.bridge.bridge-nf-call-iptables=1 on every machine
    cat >config.yaml <<EOF
    apiVersion: kubeadm.k8s.io/v1alpha1
    kind: MasterConfiguration
    api:
      advertiseAddress: <private-ip>
    etcd:
      endpoints:
      - https://<etcd0-ip-address>:2379
      - https://<etcd1-ip-address>:2379
      - https://<etcd2-ip-address>:2379
      caFile: /etc/kubernetes/pki/etcd/ca.pem
      certFile: /etc/kubernetes/pki/etcd/client.pem
      keyFile: /etc/kubernetes/pki/etcd/client-key.pem
    networking:
      podSubnet: <podCIDR>
    apiServerCertSANs:
    - <load-balancer-ip>
    apiServerExtraArgs:
      apiserver-count: "3"
    EOF
  2. Run kubeadm

    kubeadm init --config=config.yaml

Start the master1 and master2 nodes

  1. Copy the files just generated on master0 to the master1 and master2 machines

    scp root@<master0-ip-address>:/etc/kubernetes/pki/ca.crt /etc/kubernetes/pki
    scp root@<master0-ip-address>:/etc/kubernetes/pki/ca.key /etc/kubernetes/pki
    scp root@<master0-ip-address>:/etc/kubernetes/pki/sa.key /etc/kubernetes/pki
    scp root@<master0-ip-address>:/etc/kubernetes/pki/sa.pub /etc/kubernetes/pki
    scp root@<master0-ip-address>:/etc/kubernetes/pki/front-proxy-ca.crt /etc/kubernetes/pki
    scp root@<master0-ip-address>:/etc/kubernetes/pki/front-proxy-ca.key /etc/kubernetes/pki
    scp -r root@<master0-ip-address>:/etc/kubernetes/pki/etcd /etc/kubernetes/pki
  2. Repeat the master0 steps: generate config.yaml and run kubeadm.

Install the CNI network

This must match the <podCIDR> setting above. I use Flannel, so run the command below.
See the official guide, Installing a pod network, for details.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

Join the worker nodes

Run a command of the form below on every worker node. kubeadm init on the master prints the exact command when it finishes, so just copy and run it.
Here all workers are joined under master0's management.

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
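If the printed join command is lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA using the standard openssl recipe from the kubeadm docs. The sketch below runs the pipeline against a throwaway self-signed certificate so it can be tried anywhere; on a real master, point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Throwaway cert just to demonstrate the pipeline; use /etc/kubernetes/pki/ca.crt on a real master
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
    -out /tmp/demo.crt -days 1 -subj "/CN=demo" 2>/dev/null
# SHA-256 of the DER-encoded public key, the value after "sha256:" in the join command
openssl x509 -pubkey -in /tmp/demo.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
# prints a 64-character hex digest
```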

When done, use kubectl get nodes to check that the cluster installation is complete.

Kubernetes official documentation


VincentFF