大爷来玩呀你懂得

Beijing · 通化师范学院 | Computer Science and Technology · 恒天财富 | Senior Operations Engineer · blog.51cto.com/13386520

"This person is lazy and has left nothing to say."

Activity

大爷来玩呀你懂得 published an article · Jan 8

Kibana fails to add an index pattern



 PUT _settings
 {
   "index": {
     "blocks": {
       "read_only_allow_delete": "false"
     }
   }
 } 
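This typically happens when Elasticsearch has switched indices to read-only after hitting the flood-stage disk watermark; clearing the read_only_allow_delete block re-enables writes so the index pattern can be created. The same request can also be issued from the shell (a sketch assuming Elasticsearch listens on localhost:9200):

 curl -XPUT "http://localhost:9200/_settings" -H 'Content-Type: application/json' -d '
 {
   "index": {
     "blocks": {
       "read_only_allow_delete": "false"
     }
   }
 }'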



大爷来玩呀你懂得 published an article · 2020-12-30

nginx on non-80/443 ports: redirects leak the internal port (port_in_redirect off)


nginx itself does not listen on ports 80/443; its non-80/443 listen ports are mapped to ports 80/443 on the public IP.


Specifying port_in_redirect off; tells nginx not to append the port when it issues a redirect; if not configured, the directive defaults to on.

If the URL already ends with a slash the problem never appears; with two layers of proxying, nginx's automatic trailing-slash redirect carries its internal listen port. Setting port_in_redirect to off is enough to fix it, as in the sketch below.
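A minimal sketch of the workaround in context, assuming nginx listens internally on 8080 while the NAT maps public port 80 to it (example.com and the paths are placeholders):

server {
    listen 8080;                # internal port; public port 80 is NATed here
    server_name example.com;    # placeholder

    # Do not append the local listen port (8080) to redirects nginx generates,
    # such as the automatic trailing-slash redirect for directory requests.
    port_in_redirect off;

    location / {
        root /usr/share/nginx/html;
    }
}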


大爷来玩呀你懂得 published an article · 2020-12-22

es7.6.2 Could not index event to Elasticsearch

The error:



[2020-12-22T11:18:42,441][WARN ][logstash.outputs.elasticsearch][main] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-insurance-data-push-2020-12-22", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0xe216446>], :response=>{"index"=>{"_index"=>"logstash-insurance-data-push-2020-12-22", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Validation Failed: 1: this action would add [10] total shards, but this cluster currently has [3000]/[3000] maximum shards open;"}}}}
{
           "level_value" => 20000,
                    "ip" => "10.5.235.137",
              "hostName" => "uatapipeline-data-push-job-profile-a-55b8c45df7-jv4qk",
           "contextName" => "data-push-job-uatapipeline-data-push-job-profile-a-55b8c45df7-jv4qk-6",
                  "tags" => [
        [0] "_jsonparsefailure"

Validation Failed: 1: this action would add [10] total shards, but this cluster currently has [3000]/[3000] maximum shards open
  • Problem:
    The shard count has hit the cluster-wide limit: cluster.max_shards_per_node × data nodes = 1,000 × 3 = 3,000 here.
  • Cause:
    Elasticsearch 7 and later allows only 1,000 shards per node by default, and this cluster has no shard headroom left.
  • Fix:

Apply a transient settings change from Kibana Dev Tools:

PUT /_cluster/settings
{
  "transient": {
    "cluster": {
      "max_shards_per_node":10000
    }
  }
}
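Note that transient settings are lost on a full cluster restart; the same endpoint accepts a persistent block if the higher limit should survive restarts. Raising max_shards_per_node adds master overhead, so deleting or shrinking stale indices is the better long-term fix:

PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 10000
  }
}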



大爷来玩呀你懂得 published an article · 2020-12-22

Cannot connect to the Docker daemon. Is the docker daemon running?


Dec 22 14:50:27 master49 kubelet: F1222 14:50:27.123073   30898 server.go:274] failed to run Kubelet: failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Dec 22 14:50:27 master49 systemd: Unit kubelet.service entered failed state.
Dec 22 14:50:27 master49 systemd: kubelet.service failed.
^C
[root@master49 ~]# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.0-docker)

Server:
ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
errors pretty printing info
[root@master49 ~]# service docker restart
Redirecting to /bin/systemctl restart docker.service
[root@master49 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@master49 ~]# systemctl daemon-reload
[root@master49 ~]# service docker restart
Redirecting to /bin/systemctl restart docker.service
[root@master49 ~]# kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
master49   Ready    master   28h   v1.18.2
master50   Ready    master   28h   v1.18.2
master56   Ready    master   28h   v1.18.2
node52     Ready    <none>   28h   v1.18.2
node53     Ready    <none>   28h   v1.18.2


In my case this appeared after editing /etc/docker/daemon.json; once the file was corrected, systemctl daemon-reload plus a docker restart brought everything back.
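Two quick checks that can save a restart loop (standard tooling, not from the original post): lint daemon.json before restarting, and read dockerd's own log if it still refuses to start.

python -m json.tool /etc/docker/daemon.json   # prints the file if the JSON is valid, an error otherwise
journalctl -u docker --no-pager | tail -n 20  # shows why dockerd failed to start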



大爷来玩呀你懂得 published an article · 2020-12-21

CentOS 7.6: building a highly available Kubernetes 1.18.2 cluster with kubeadm

If you want to really understand each Kubernetes component, building a cluster from the raw binaries is still the better way to learn.

1 Node planning

172.16.197.49 master49
172.16.197.50 master50
172.16.197.56 master56
172.16.197.52 node52
172.16.197.53 node53
172.16.197.200 k8s-lb

2 Base environment preparation

  • Environment
    | Software   | Version |
    | ---------- | ------- |
    | kubernetes | 1.18.2  |
    | docker     | 19.0.3  |

2.1 Environment initialization (run on every machine)

  • 1) Set the hostname, master49 shown as the example (set each node's hostname per the plan above).
k8s-lb needs no hostname of its own; it only needs a hosts entry on every machine in the cluster (think of it as the name for the keepalived VIP: kubeadm creates the cluster against k8s-lb:16443, hence the hosts entry).
[root@localhost ~]# hostnamectl set-hostname master49 
  • 2) Configure the hosts mappings (on every machine)
[root@localhost ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.197.49 master49
172.16.197.50 master50
172.16.197.56 master56
172.16.197.52 node52
172.16.197.53 node53
172.16.197.200 k8s-lb
  • 3) Disable the firewall
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld


  • 4) Disable SELinux
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config 
  • 5) Disable the swap partition
[root@localhost ~]# swapoff -a # temporarily
[root@localhost ~]# sed -i '/ swap / s/^/#/g' /etc/fstab # permanently
  • 6) Time synchronization
[root@localhost ~]# yum install chrony -y
[root@localhost ~]# systemctl enable chronyd
[root@localhost ~]# systemctl start chronyd
[root@localhost ~]# chronyc sources
  • 7) Configure ulimit
[root@localhost ~]# ulimit -SHn 65535
  • 8) Configure kernel parameters
[root@localhost ~]# cat >> /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
[root@localhost ~]# sysctl --system   # plain `sysctl -p` only reads /etc/sysctl.conf; --system also loads /etc/sysctl.d/*.conf
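If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter module is not loaded yet; a sketch of loading it now and on every boot:
[root@localhost ~]# modprobe br_netfilter
[root@localhost ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf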
  • 9) Install base packages
yum -y install ntpdate lrzsz tree cmake gcc gcc-c++ autoconf libjpeg libjpeg-devel libpng libpng-devel freetype freetype-devel libxml2 libxml2-devel zlib zlib-devel glibc glibc-devel glib2 glib2-devel bzip2 bzip2-devel ncurses ncurses-devel curl curl-devel libxslt-devel libtool-ltdl-devel make wget docbook-dtds asciidoc e2fsprogs-devel gd gd-devel openssl openssl-devel lsof git unzip gettext-devel gettext libevent libevent-devel pcre pcre-devel vim readline readline-devel
  • 10) Tune the main kernel configuration
[root@node53 ~]# cat /etc/sysctl.conf 
net.core.netdev_max_backlog = 262144
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.route.gc_timeout = 20
net.ipv4.ip_local_port_range = 1024  65535
net.ipv4.tcp_retries2 = 5
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_keepalive_time = 120
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_wmem = 8192 131072 16777216
net.ipv4.tcp_rmem = 32768 131072 16777216
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.ip_forward = 1
vm.swappiness = 0
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 1024 65535


sysctl -p                         ## apply the settings
sysctl -w net.ipv4.ip_forward=1   ### enable forwarding immediately

2.2 Kernel upgrade (on every machine)

CentOS 7.6 ships kernel 3.10, which has many known bugs, the most common being the cgroup memory leak (run this on every host).

  • 1) Download the target kernel version; I install from an RPM here, so the rpm package is downloaded directly
[root@localhost ~]# wget https://cbs.centos.org/kojifiles/packages/kernel/4.9.220/37.el7/x86_64/kernel-4.9.220-37.el7.x86_64.rpm
  • 2) Upgrade with rpm
[root@localhost ~]# rpm -ivh kernel-4.9.220-37.el7.x86_64.rpm
  • 3) Reboot after the upgrade, then confirm the kernel was upgraded ################ the reboot is mandatory
[root@localhost ~]# reboot
[root@master49 ~]# uname -r

3 Component installation

3.1 Install ipvs (on every machine)

  • 1) Install the packages ipvs needs

Since kube-proxy will use ipvs as its proxy mode, the corresponding packages must be installed.

[root@master49 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y
  • 2) Load the kernel modules
[root@master49 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
modprobe -- ip_tables
modprobe -- ip_set
modprobe -- xt_set
modprobe -- ipt_set
modprobe -- ipt_rpfilter
modprobe -- ipt_REJECT
modprobe -- ipip
EOF
Note: since kernel 4.19, nf_conntrack_ipv4 has been renamed to nf_conntrack
  • 3) Reload the modules automatically on boot
[root@master49 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

3.2 Install docker-ce (on every machine)

All hosts need Docker.
[root@master49 ~]# # install the prerequisite packages
[root@master49 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master49 ~]# # add the yum repo
[root@master49 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  • Install docker-ce
[root@master49 ~]# yum install docker-ce-19.03.8-3.el7 -y
[root@master49 ~]# systemctl start docker
[root@master49 ~]# systemctl enable docker
  • Configure a registry mirror
[root@master49 ~]# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
[root@master49 ~]# systemctl restart docker

3.3 Install the Kubernetes components (on every machine)

These steps run on every node.
  • Add the yum repo
[root@master49 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  • Install the packages
[root@master49 ~]# yum install -y kubelet-1.18.2-0 kubeadm-1.18.2-0 kubectl-1.18.2-0 --disableexcludes=kubernetes
  • Enable kubelet at boot
[root@master49 ~]# systemctl enable kubelet.service

4 Cluster initialization (run on all master nodes)

4.1 Configure cluster high availability

HAProxy + Keepalived provide failover and load balancing for the master traffic; both run as daemons on every master node.

  • Install the packages
[root@master49 ~]# yum install keepalived haproxy -y
  • Configure haproxy

The configuration is identical on every master node:

Note: replace the apiserver addresses with your own planned master addresses.
[root@master49 ~]# vim /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  master49 172.16.197.49:6443 check
    server  master50 172.16.197.50:6443 check
    server  master56 172.16.197.56:6443 check

#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:9999
    stats auth           admin:P@ssW0rd
    stats refresh        5s
    stats realm          HAProxy Statistics
    stats uri            /admin?stats
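
Before starting the service, haproxy can lint the file itself; -c only checks the configuration and exits:
[root@master49 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg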
  • Configure keepalived

master49

[root@master49 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
# health-check script definition
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh" 
    interval 2                                  
    weight -5                                  
    fall 3                                   
    rise 2                               
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100       #### the primary master must have the highest priority in the cluster
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
   172.16.197.200
    }

    # invoke the check script (left commented out until the script is in place)
    #track_script {
    #    check_apiserver
    #}
}

master50 configuration

[root@k8s-master02 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
# health-check script definition
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh" 
    interval 2                                  
    weight -5                                  
    fall 3                                   
    rise 2                               
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
   172.16.197.200
    }

    # invoke the check script (left commented out until the script is in place)
    #track_script {
    #    check_apiserver
    #}
}

master56 configuration

[root@k8s-master03 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
# health-check script definition
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh" 
    interval 2                                  
    weight -5                                  
    fall 3                                   
    rise 2                               
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
   172.16.197.200
    }

    # invoke the check script (left commented out until the script is in place)
    #track_script {
    #    check_apiserver
    #}
}

The higher a master's priority value, the greater its weight; when keepalived stops on the active master, the VIP floats to the machine with the highest remaining priority.
  • 172.16.197.200 is the keepalived VIP

Write the health-check script (identical on every master; note the keepalived config references /etc/keepalived/check_apiserver.sh):

[root@master49 ~]# vim /etc/keepalived/check_apiserver.sh
#!/bin/bash

function check_apiserver(){
 for ((i=0;i<5;i++))
 do
  apiserver_job_id=$(pgrep kube-apiserver)
  if [[ ! -z ${apiserver_job_id} ]];then
   return
  else
   sleep 2
  fi
 done
 apiserver_job_id=0
}

# non-zero (a pid) -> running    0 -> stopped
check_apiserver
if [[ $apiserver_job_id -eq 0 ]];then
 /usr/bin/systemctl stop keepalived
 exit 1
else
 exit 0
fi
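
Make the script executable on every master; keepalived will run it through the vrrp_script block once the track_script section is uncommented:
[root@master49 ~]# chmod +x /etc/keepalived/check_apiserver.sh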

Start haproxy and keepalived

[root@master49 ~]# systemctl enable --now keepalived
[root@master49 ~]# systemctl enable --now haproxy
[root@master49 ~]# systemctl status haproxy

[root@master49 ~]# systemctl status keepalived

4.2 Deploy the masters

1) On master49, write the kubeadm.yaml configuration file:

  • master49
[root@master49 ~]# cat >> kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "k8s-lb:16443"
networking:
  dnsDomain: cluster.local
  podSubnet: 10.17.0.0/16
  serviceSubnet: 10.217.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
EOF

2) Pull the images

  • master49
[root@master49 ~]# kubeadm config images pull --config kubeadm.yaml

The image repository points at the Aliyun mirror, which should be reasonably fast; alternatively, download the image tarballs provided at the top of the article and load them on each node:

docker load -i  1-18-kube-apiserver.tar.gz
docker load -i  1-18-kube-scheduler.tar.gz
docker load -i  1-18-kube-controller-manager.tar.gz
docker load -i  1-18-pause.tar.gz
docker load -i  1-18-cordns.tar.gz
docker load -i  1-18-etcd.tar.gz
docker load -i 1-18-kube-proxy.tar.gz
Notes:
pause is 3.2; its image is k8s.gcr.io/pause:3.2
etcd is 3.4.3; its image is k8s.gcr.io/etcd:3.4.3-0
coredns is 1.6.7; its image is k8s.gcr.io/coredns:1.6.7
apiserver, scheduler, controller-manager and kube-proxy are all 1.18.2; their images are
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-proxy:v1.18.2

3) Run the initialization

  • master49
[root@master49 ~]# kubeadm init --config kubeadm.yaml --upload-certs
W1218 17:44:40.664521   27338 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master49 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-lb] and IPs [10.217.0.1 172.16.197.49]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master49 localhost] and IPs [172.16.197.49 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master49 localhost] and IPs [172.16.197.49 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1218 17:44:46.319361   27338 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1218 17:44:46.320750   27338 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.522265 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
df34e5886551aaf2c4afea226b7b6709d197c43334798e1be79386203893cd3b
[mark-control-plane] Marking the node master49 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master49 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: n838nb.1ienzev40tbbhbrp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s-lb:16443 --token n838nb.1ienzev40tbbhbrp \
    --discovery-token-ca-cert-hash sha256:5d2ac53eb502d852215fc856d23184c05196faba4e210b9257d68ef7cb9fd5c1 \
    --control-plane --certificate-key df34e5886551aaf2c4afea226b7b6709d197c43334798e1be79386203893cd3b

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-lb:16443 --token n838nb.1ienzev40tbbhbrp \
    --discovery-token-ca-cert-hash sha256:5d2ac53eb502d852215fc856d23184c05196faba4e210b9257d68ef7cb9fd5c1 
[root@master49 ~]# echo $?
0
Record the kubeadm join commands printed at the end; the remaining master nodes and the worker nodes will need them.

4) Configure environment variables

  • master49
[root@master49 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@master49 ~]# source /root/.bashrc
[root@master49 ~]# mkdir -p $HOME/.kube
[root@master49 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master49 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
  • master50/56: same configuration, following the prompts
[root@master49 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@master49 ~]# source /root/.bashrc
[root@master49 ~]# mkdir -p $HOME/.kube
[root@master49 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master49 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

5) Check node status

  • master49
[root@master49 ~]# kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
master49   NotReady   master   3m47s   v1.18.2

6) Install the network plugin

  • master49
If a node has multiple NICs, specify the internal NIC in the manifest (single-NIC nodes need no change).
[root@master49 ~]# wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
[root@master49 ~]# vi calico.yaml
......
      containers:
        # Runs calico-node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.8.8-1
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: IP_AUTODETECTION_METHOD # add this env var to the DaemonSet
              value: interface=ens33 # set this to your internal NIC
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
......

[root@master49 ~]# grep -C5 10.17.0.0 calico.yaml 
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.17.0.0/16"   ###使用pod网段
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
#################
# install the calico network plugin
[root@master49 ~]# kubectl apply -f calico.yaml 

Once the network plugin is installed, the node status looks like this:

[root@master49 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
master49   Ready    master   10m   v1.18.2

The status has changed from NotReady to Ready.

7) Join master50 to the cluster

  • Pull the images
[root@master50 ~]# kubeadm config images pull --config kubeadm.yaml
  • Join the cluster
[root@master50 ~]# kubeadm join k8s-lb:16443 --token q4ui64.gp5g5rezyusy9xw9 \
    --discovery-token-ca-cert-hash sha256:1b7cd42c825288a53df23dcd818aa03253b0c7e7e9317fa92bde2fb853d899d1 \
    --control-plane
  • Output:
...
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
...
  • Configure environment variables
[root@k8s-master02 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@k8s-master02 ~]# source /root/.bashrc
  • Do the same on the remaining master to join master56 to the cluster
  • Check cluster status
[root@master49 ~]# kubectl get node
NAME       STATUS   ROLES    AGE     VERSION
master49   Ready    master   3h38m   v1.18.2
master50   Ready    master   3h36m   v1.18.2
master56   Ready    master   3h36m   v1.18.2
node52     Ready    <none>   3h34m   v1.18.2
node53     Ready    <none>   3h35m   v1.18.2

  • Check the component status

If everything is Running, all components are healthy; if something is not, inspect that pod's logs to troubleshoot.

  • View the pods
[root@master49 ~]# kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-75d555c48-9z5ql   1/1     Running   1          3h36m
calico-node-2xt99                         1/1     Running   0          3m57s
calico-node-8hd6s                         1/1     Running   0          3m45s
calico-node-bgg29                         1/1     Running   0          3m26s
calico-node-pq2pc                         1/1     Running   0          4m7s
calico-node-rs77f                         1/1     Running   0          3m37s
coredns-7ff77c879f-ltmr7                  1/1     Running   1          3h37m
coredns-7ff77c879f-rzbsj                  1/1     Running   1          3h37m
etcd-master49                             1/1     Running   1          3h37m
etcd-master50                             1/1     Running   1          3h36m
etcd-master56                             1/1     Running   1          3h36m
kube-apiserver-master49                   1/1     Running   1          3h37m
kube-apiserver-master50                   1/1     Running   1          3h36m
kube-apiserver-master56                   1/1     Running   1          3h36m
kube-controller-manager-master49          1/1     Running   4          3h37m
kube-controller-manager-master50          1/1     Running   2          3h36m
kube-controller-manager-master56          1/1     Running   2          3h36m
kube-proxy-4csh5                          1/1     Running   1          3h37m
kube-proxy-54pqr                          1/1     Running   0          3h34m
kube-proxy-h2ttm                          1/1     Running   1          3h36m
kube-proxy-nr7z4                          1/1     Running   0          3h34m
kube-proxy-xtrqz                          1/1     Running   1          3h35m
kube-scheduler-master49                   1/1     Running   4          3h37m
kube-scheduler-master50                   1/1     Running   2          3h36m
kube-scheduler-master56                   1/1     Running   3          3h36m
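
Since kube-proxy runs in ipvs mode, the service virtual servers can also be inspected with the ipvsadm tool installed earlier (output truncated):
[root@master49 ~]# ipvsadm -Ln | head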

4.3 Deploy the workers

  • Worker nodes only need to join the cluster (run the join on each worker)
[root@node52 ~]# kubeadm join k8s-lb:16443 --token q4ui64.gp5g5rezyusy9xw9 \
    --discovery-token-ca-cert-hash sha256:1b7cd42c825288a53df23dcd818aa03253b0c7e7e9317fa92bde2fb853d899d1
  • Output:
W0509 23:24:12.159733   10635 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
 [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
  • Finally, check the cluster node list
[root@master49 ~]# kubectl get node
NAME       STATUS   ROLES    AGE     VERSION
master49   Ready    master   3h38m   v1.18.2
master50   Ready    master   3h36m   v1.18.2
master56   Ready    master   3h36m   v1.18.2
node52     Ready    <none>   3h34m   v1.18.2
node53     Ready    <none>   3h35m   v1.18.2

5 Testing cluster high availability

Stop keepalived on the active master (master49), then check that the whole cluster still works.

# simulate a failure by stopping keepalived
systemctl stop keepalived
# then check whether the cluster is still usable

## master49

[root@master49 ~]# systemctl stop keepalived
[root@master49 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:90:60:e2 brd ff:ff:ff:ff:ff:ff
    inet 172.16.197.49/24 brd 172.16.197.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.17.58.0/32 brd 10.17.58.0 scope global tunl0
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:c2:74:27:17 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c2:ea:ea:96:4d:71 brd ff:ff:ff:ff:ff:ff
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether 0e:1b:31:48:12:1c brd ff:ff:ff:ff:ff:ff
    inet 10.217.0.10/32 brd 10.217.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.217.0.1/32 brd 10.217.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
19: calie4cbb6e0f40@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
20: cali0d463746e41@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
23: cali1153f4082bb@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
[root@master49 ~]# kubectl get node
NAME       STATUS   ROLES    AGE     VERSION
master49   Ready    master   3h28m   v1.18.2
master50   Ready    master   3h27m   v1.18.2
master56   Ready    master   3h27m   v1.18.2
node52     Ready    <none>   3h25m   v1.18.2
node53     Ready    <none>   3h25m   v1.18.2




## master50 — the VIP 172.16.197.200 has failed over here

[root@master50 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:90:5d:c3 brd ff:ff:ff:ff:ff:ff
    inet 172.16.197.50/24 brd 172.16.197.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet 172.16.197.200/32 scope global ens160
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.17.45.128/32 brd 10.17.45.128 scope global tunl0
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:56:05:6c:76 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 02:92:f7:64:4c:21 brd ff:ff:ff:ff:ff:ff
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether ae:4f:c7:5d:14:5b brd ff:ff:ff:ff:ff:ff
    inet 10.217.0.10/32 brd 10.217.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.217.0.1/32 brd 10.217.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever



## master49 — after restarting keepalived, the VIP returns
[root@master49 ~]# systemctl restart keepalived
[root@master49 ~]# 
[root@master49 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:90:60:e2 brd ff:ff:ff:ff:ff:ff
    inet 172.16.197.49/24 brd 172.16.197.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet 172.16.197.200/32 scope global ens160
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.17.58.0/32 brd 10.17.58.0 scope global tunl0
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:c2:74:27:17 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c2:ea:ea:96:4d:71 brd ff:ff:ff:ff:ff:ff
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether 0e:1b:31:48:12:1c brd ff:ff:ff:ff:ff:ff
    inet 10.217.0.10/32 brd 10.217.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.217.0.1/32 brd 10.217.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
19: calie4cbb6e0f40@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
20: cali0d463746e41@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
23: cali1153f4082bb@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
[root@master49 ~]# kubectl get node
NAME       STATUS   ROLES    AGE     VERSION
master49   Ready    master   3h29m   v1.18.2
master50   Ready    master   3h27m   v1.18.2
master56   Ready    master   3h27m   v1.18.2
node52     Ready    <none>   3h26m   v1.18.2
node53     Ready    <none>   3h26m   v1.18.2


If you also want a master node to schedule workloads like a worker:
[root@master49 ~]# kubectl taint node master49 node-role.kubernetes.io/master-

To restore master-only status, run:

kubectl taint node k8s-master node-role.kubernetes.io/master="":NoSchedule

6 Install shell auto-completion

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

7 Reinstalling (resetting the cluster)


[root@master49 ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1218 17:43:18.148470   27043 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: configmaps "kubeadm-config" not found
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1218 17:43:21.762933   27043 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@master49 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@master49 ~]# ls
anaconda-ks.cfg  kernel-4.9.220-37.el7.x86_64.rpm  kubeadm.yaml
[root@master49 ~]# rm -rf /root/.kube/
If master and worker nodes have already joined the cluster, run the steps above on every node.
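For reference, below are the application Deployment and Service templates used with this cluster; tokens such as #APP_NAME#, #PROFILE#, #REPOSITORY_URL#, #APP_VERSION# and #port# are placeholders that the CI pipeline substitutes at deploy time.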
apiVersion: apps/v1  # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: #APP_NAME#-#PROFILE#
spec:
  replicas: 1
  minReadySeconds: 30 # a new pod must stay ready this long before it counts as available during a rolling update
  strategy:
    rollingUpdate:  ## with maxSurge 1 / maxUnavailable 0, the pod count stays between replicas and replicas+1 during an upgrade
      maxSurge: 1      # at most 1 extra pod is started while upgrading
      maxUnavailable: 0 # no pod may become unavailable during the upgrade
  selector:
    matchLabels:
      app: #APP_NAME#
      profile: #PROFILE#
  template:
    metadata:
      labels: 
        app: #APP_NAME#
        profile: #PROFILE#
    spec:
      terminationGracePeriodSeconds: 20 ## k8s sends the app SIGTERM so it can shut down gracefully; the default is 30 seconds
      volumes:
      - name: "cephfs-fuse"
        hostPath:
          path: "/data/WEBLOG"
      - name: "preproductnfs"
        hostPath:
          path: "/home/nfs"
      containers:
      - name: #APP_NAME#-#PROFILE#
        image: #REPOSITORY_URL#/#APP_NAME#:#APP_VERSION#
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: cephfs-fuse
          mountPath: /data/WEBLOG
        - name: preproductnfs
          mountPath: /home/nfs
        resources:
          limits:
            memory: 2500Mi
          requests:
            memory: 2000Mi
        ports:
        - containerPort: #port#
        env:
        - name: app_name
          value: #APP_NAME#  
        - name: project_name
          value: #APP_NAME#
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh","-c","/jar-stop.sh"]
---
apiVersion: v1
kind: Service
metadata:
  name: uatapipeline-job-task-profile-a
  labels:
    app: uatapipeline-job-task
    profile: profile-a
spec:
  selector:
    app: uatapipeline-job-task
    profile: profile-a
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31706

大爷来玩呀你懂得 published an article · 2020-12-09

Fixing the n module failing to switch node versions

Problem overview

Install the n module globally

npm install -g n

Install the latest stable node version

n stable

That installs the latest stable version.

Switch versions with n

n 
# a menu appears:
  node/8.12.0
ο node/11.0.0

Select the 11.0.0 version and press Enter, then:

node -v
# 8.12.0

Solution

  • Check where node is installed
which node
# /usr/local/bin/node
  • n installs to /usr/local by default; if your node lives under a different prefix, switching versions cannot copy bin/lib/include/share into that prefix, so you must point n at your real node location via the N_PREFIX variable.
  • Edit ~/.bash_profile with vim and append two lines at the end:
export N_PREFIX=/usr/local/bin/node # adjust to your own install path
export PATH=$N_PREFIX/bin:$PATH


  • Save, then reload the file.
source .bash_profile
  • Reinstall
n stable

Switch versions again and it now takes effect; depending on the prompts you may need sudo.

node -v 
# 11.0.0

Node: changing the default registry

npm installs from the default registry (https://registry.npmjs.org) can be slow; switching to a mirror speeds them up.
Check the current registry with npm config get registry.

Try switching to the Taobao mirror:

npm config set registry http://registry.npm.taobao.org/
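To confirm the switch took effect:

npm config get registry
# http://registry.npm.taobao.org/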

大爷来玩呀你懂得 published an article · 2020-12-07

Deploying elastic-job-lite-console to a k8s cluster

1. Prepare the image build files

[root@jenkins elastic-job]# ls
Configurations.xml      Dockerfile     elastic-job-lite-console-2.1.5.tar.gz       run.sh

[root@jenkins elastic-job]# cat Dockerfile
FROM harbor.reg/library/centos7-jdk:8u221

# build args

# environment variables
ENV WORK_PATH /data/work/
ENV LANG=zh_CN.UTF-8 \
    LANGUAGE=zh_CN:zh \
    LC_ALL=zh_CN.UTF-8

# set the timezone and locale
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
    echo "Asia/Shanghai" > /etc/timezone && \
    localedef -c -f UTF-8 -i zh_CN zh_CN.UTF-8
ADD elastic-job-lite-console-2.1.5.tar.gz $WORK_PATH
COPY run.sh /
WORKDIR $WORK_PATH
RUN chmod +x /run.sh
CMD ["sh","/run.sh"]

[root@jenkins elastic-job]# cat run.sh
sh +x elastic-job-lite-console-2.1.5/bin/start.sh -p 8899

2. Build and push the image

docker build -t harbor.reg/library/elastic-job-lite:2.1.5 .

docker push harbor.reg/library/elastic-job-lite:2.1.5

3. Create the k8s YAML files

deploy.yaml

cat deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dev-elastic-job-lite-profile-a
spec:
  replicas: 1
  minReadySeconds: 30 # a new pod must stay ready this long before it counts as available during a rolling update
  strategy:
    rollingUpdate: ## with maxSurge 1 / maxUnavailable 0, the pod count stays between replicas and replicas+1 during an upgrade
      maxSurge: 1 # at most 1 extra pod is started while upgrading
      maxUnavailable: 0 # no pod may become unavailable during the upgrade
  template:
    metadata:
      labels:
        app: dev-elastic-job-lite
        profile: profile-a
    spec:
      terminationGracePeriodSeconds: 40 ## k8s sends the app SIGTERM so it can shut down gracefully; the default is 30 seconds
      volumes:
      - name: "dev-configurations"
        hostPath:
          path: "/home/elastic-job-console/dev/" # host directory to mount
      #- name: "mfs"
      #  hostPath:
      #    path: "/home/nfs"
      containers:
      - name: dev-elastic-job-lite-profile-a
        image: harbor.reg/library/elastic-job-lite:2.1.5
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: dev-configurations
          mountPath: /root/.elastic-job-console
        #- name: mfs
        #  mountPath: /home/nfs
        resources:
          limits:
            memory: 2000Mi
          requests:
            memory: 1500Mi
        ports:
        - containerPort: 8899
        env:
        - name: app_name
          value: dev-elastic-job-lite-profile-a
        - name: project_name
          value: dev-elastic-job-lite
svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: dev-elastic-job-lite-profile-a
  labels:
    app: dev-elastic-job-lite
    profile: profile-a
spec:
  selector:
    app: dev-elastic-job-lite
    profile: profile-a
  type: NodePort
  ports:
  - port: 8899
    targetPort: 8899
    nodePort: 32222
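
To roll both manifests out and confirm the pod and its NodePort service are up (routine kubectl usage, not from the original article):

kubectl apply -f deploy.yaml -f svc.yaml
kubectl get pod,svc -l app=dev-elastic-job-lite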

4. nginx configuration


[root@bogon vhost]# cat ss.conf
upstream elastic-lite.dev.xx.com {
    server 172.16.167.10:32222 weight=1 max_fails=0 fail_timeout=0;
    server 172.16.167.11:32222 weight=1 max_fails=0 fail_timeout=0;
    server 172.16.167.12:32222 weight=1 max_fails=0 fail_timeout=0;
}

server {
    listen 80;
    server_name elastic-lite.dev.xx.com;
    rewrite ^(.*) https://$server_name$1 permanent;
}
server {
    listen 443;
    server_name elastic-lite.dev.xx.com;
    charset UTF-8;
    access_log /var/logs/ss/access/elastic-lite.dev.xx.com_access.log main;
    ssl on;
    ssl_certificate /usr/local/nginx/conf/CA/ss.crt;
    ssl_certificate_key /usr/local/nginx/conf/CA/ss.key;
    send_timeout 10;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:RC4-SHA:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!DSS:!PKS;
    #include drop_sql.conf;
    include favicon.conf;

    location / {
        proxy_pass http://elastic-lite.dev.xx.com;
    }
}

Default credentials: root / root



大爷来玩呀你懂得 published an article · 2020-11-27

Installing matomo in Docker containers

  • docker pull mysql:5.7.25
  • docker pull matomo

Container directory layout

mkdir -pv /home/matomodockerfile

mkdir -pv /home/mysql-matomo5.7.25/data/

mkdir -pv /home/mysql-matomo5.7.25/conf/

The config file for the mysql:5.7.25 container:

cat /home/mysql-matomo5.7.25/conf/my.cnf      # mysql 5.7 config file (this directory is mounted at /etc/mysql)

[client]
port = 3306
socket = /tmp/mysql.sock

[mysqld]
server-id = 1
port = 3306
datadir = /var/lib/mysql
tmpdir = /tmp
socket = /tmp/mysql.sock
skip-external-locking
skip_name_resolve = 1
transaction_isolation = READ-COMMITTED
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
init_connect='SET NAMES utf8mb4'
lower_case_table_names = 1
max_connections = 400
max_connect_errors = 1000
explicit_defaults_for_timestamp = true
max_allowed_packet = 128M
interactive_timeout = 1800
wait_timeout = 1800
tmp_table_size = 134217728
max_heap_table_size = 134217728
query_cache_size = 0
query_cache_type = 0
sort_buffer_size = 2097152
binlog_cache_size = 524288
back_log = 130
log_error = error.log
log_queries_not_using_indexes = 1
log_throttle_queries_not_using_indexes = 5
long_query_time = 8
min_examined_row_limit = 100
expire_logs_days = 5
master_info_repository = TABLE
relay_log_info_repository = TABLE
innodb_buffer_pool_size = 1G
innodb_flush_method = O_DIRECT
innodb_file_format = Barracuda
innodb_write_io_threads = 4
innodb_read_io_threads = 4
innodb_io_capacity = 500
innodb_lock_wait_timeout = 30
innodb_buffer_pool_dump_pct = 40
innodb_print_all_deadlocks = 1

[mysqldump]
quick
max_allowed_packet = 128M

[mysql]
no-auto-rehash

[myisamchk]
key_buffer_size = 20M
sort_buffer_size = 256k
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout

[mysqld_safe]
open-files-limit = 28192
Start the containers

docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=xxxxxx --privileged=true --restart=always --name mysql-matomo5.7.25 -v /home/mysql-matomo5.7.25/conf/:/etc/mysql/ -v /home/mysql-matomo5.7.25/data/:/var/lib/mysql docker.io/mysql:5.7.25

docker run -d --link mysql-matomo5.7.25:db --restart=always --name matomo -v /home/matomodockerfile:/var/www/html -p 80:80 matomo:latest   # start matomo
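
Because the matomo container is started with --link mysql-matomo5.7.25:db, the database host to enter in matomo's setup wizard is simply db. A quick check from inside the container (assuming getent is available in the image):

docker exec matomo getent hosts db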

 docker ps
7ca5d8353500 matomo:latest "/entrypoint.sh apac…" 26 minutes ago Up 26 minutes 0.0.0.0:80->80/tcp matomo
51676564dc10 mysql:5.7.25 "docker-entrypoint.s…" 46 hours ago Up 27 minutes 0.0.0.0:3306->3306/tcp, 33060/tcp mysql-matomo5.7.25

Browse to the server's IP to finish the matomo setup wizard.


大爷来玩呀你懂得 published an article · 2020-11-25

CentOS 7: deploying bind 9.9.3 + mysql 5.6

1. Install MySQL directly via yum
  • yum -y install mysql mysql-server

2. Install the dependency packages

  • yum -y install openssl openssl-devel libss-dev gcc gcc-c++ mysql-devel
3. Download bind from the official site (https://www.isc.org/)
4. Download the mysql-bind patch source
5. Unpack the bind and mysql-bind source tarballs
    tar zxvf bind-9.10.3-P2.tar.gz

    tar zxvf mysql-bind.tar.gz
6. Copy mysqldb.c and mysqldb.h from the mysql-bind source directory into bin/named and bin/named/include/ of the bind source tree
    cd mysql-bind

    cp -f mysqldb.c mysqldb.h ../bind-9.10.3-P2/bin/named/

    cp -f mysqldb.c mysqldb.h ../bind-9.10.3-P2/bin/named/include/
7. Edit bin/named/Makefile.in in the bind source tree
    cd ../bind-9.10.3-P2

    vim bin/named/Makefile.in

    Change the following lines:

    DBDRIVER_OBJS =                          

    DBDRIVER_SRCS =                          

    DBDRIVER_INCLUDES =                      

    DBDRIVER_LIBS =

    to:

    DBDRIVER_OBJS = mysqldb.@O@

    DBDRIVER_SRCS = mysqldb.c

    DBDRIVER_INCLUDES = -I/usr/include/mysql  -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-strict-aliasing -fwrapv -fPIC   -DUNIV_LINUX -DUNIV_LINUX

    DBDRIVER_LIBS = -rdynamic -L/usr/lib64/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -lssl -lcrypto

    The DBDRIVER_INCLUDES value comes from the `mysql_config --cflags` command

    The DBDRIVER_LIBS value comes from the `mysql_config --libs` command

8. Edit main.c under bin/named in the bind source tree

    vim bin/named/main.c

    Add #include "mysqldb.h"

    as follows:

#include <config.h>

#include "mysqldb.h"

#include <ctype.h>

#include <stdlib.h>

#include <string.h> 

    Then add mysqldb_init(); after the commented line /*  xxdb_init(); */

    and add mysqldb_clear(); after the commented line /*  xxdb_clear(); */
9. Edit mysqldb.c
  • Change #include <named/mysqldb.h> in mysqldb.c to #include <include/mysqldb.h>
10. Build and install bind
    ./configure --prefix=/usr/local/bind --enable-threads  # set the install prefix and enable multithreading

    make

    make install
11. Configure bind
    cd /usr/local/bind-9.10/etc

    /usr/local/bind/sbin/rndc-confgen -r /dev/urandom>rndc.conf

    cat rndc.conf|tail |head -9 |sed "s/^#//g">named.conf   # generate the config file
12. Create a database
############ this first variant didn't work well for me
    Create database mydomain;

    Create a table:

    CREATE TABLE dnsrecord ( 

      name varchar(255) default NULL,

      ttl int(11) default NULL, 

      rdtype varchar(255) default NULL,

      rdata varchar(255) default NULL )TYPE=MyISAM;

    Insert some test data:

    INSERT INTO dnsrecord VALUES ('test.net', 259200, 'SOA', 'test.net.  www.test.net  200505101 28800 7200 86400 28800');

    INSERT INTO dnsrecord VALUES ('test.net', 259200, 'NS', 'ns1.test.net.');

    INSERT INTO dnsrecord VALUES ('ns1.test.net', 259200, 'A', '192.168.2.2');

    INSERT INTO dnsrecord VALUES ('www.test.net', 259200, 'A', '192.168.2.1');
########################## this variant works ###############################

DROP TABLE IF EXISTS 10_outside;
CREATE TABLE 10_outside (
 name varchar(255) default NULL,
 ttl int(11) default NULL,
 rdtype varchar(255) default NULL,
 rdata varchar(255) default NULL
);

--
-- Dumping data for table `10_outside`
--

LOCK TABLES 10_outside WRITE;
INSERT INTO 10_outside VALUES ('25.71.210.10.in-addr.arpa',3600,'PTR','cas1.test.mydomain.com.cn.');
INSERT INTO 10_outside VALUES ('10.in-addr.arpa',3600,'SOA','test.mydomain.com.cn. zhengyu.staff.mydomain.com.cn. 20070319 1800 600 604800 600');
INSERT INTO 10_outside VALUES ('10.in-addr.arpa',3600,'NS','cas1.test.mydomain.com.cn.');
INSERT INTO 10_outside VALUES ('10.in-addr.arpa',3600,'NS','cas2.test.mydomain.com.cn.');
INSERT INTO 10_outside VALUES ('10.in-addr.arpa',3600,'NS','cas3.test.mydomain.com.cn.');
INSERT INTO 10_outside VALUES ('27.71.210.10.in-addr.arpa',3600,'PTR','cas2.test.mydomain.com.cn.');
UNLOCK TABLES;

--
-- Table structure for table `test_mydomain_com_cn_outside`
--

DROP TABLE IF EXISTS test_mydomain_com_cn_outside;
CREATE TABLE test_mydomain_com_cn_outside (
 name varchar(255) default NULL,
 ttl int(11) default NULL,
 rdtype varchar(255) default NULL,
 rdata varchar(255) default NULL
);

--
-- Dumping data for table `test_mydomain_com_cn_outside`
--

LOCK TABLES test_mydomain_com_cn_outside WRITE;
INSERT INTO test_mydomain_com_cn_outside VALUES ('test.mydomain.com.cn',3600,'SOA','test.mydomain.com.cn. zhengyu.staff.mydomain.com.cn. 20070319 1800 600 604800 600');
INSERT INTO test_mydomain_com_cn_outside VALUES ('test.mydomain.com.cn',3600,'NS','cas1.test.mydomain.com.cn.');
INSERT INTO test_mydomain_com_cn_outside VALUES ('test.mydomain.com.cn',3600,'NS','cas2.test.mydomain.com.cn.');
INSERT INTO test_mydomain_com_cn_outside VALUES ('test.mydomain.com.cn',3600,'NS','cas3.test.mydomain.com.cn.');
INSERT INTO test_mydomain_com_cn_outside VALUES ('cas1.test.mydomain.com.cn',3600,'A','10.210.71.25');
INSERT INTO test_mydomain_com_cn_outside VALUES ('cas2.test.mydomain.com.cn',3600,'A','10.210.71.27');
INSERT INTO test_mydomain_com_cn_outside VALUES ('cas3.test.mydomain.com.cn',3600,'A','10.210.132.80');
INSERT INTO test_mydomain_com_cn_outside VALUES ('yhzh.test.mydomain.com.cn',3600,'A','10.218.26.191');
INSERT INTO test_mydomain_com_cn_outside VALUES ('yhzh.test.mydomain.com.cn',3600,'A','10.218.26.192');
INSERT INTO test_mydomain_com_cn_outside VALUES ('yhzh.test.mydomain.com.cn',3600,'A','10.218.26.193');
INSERT INTO test_mydomain_com_cn_outside VALUES ('yhzh.test.mydomain.com.cn',3600,'A','10.218.26.194');
INSERT INTO test_mydomain_com_cn_outside VALUES ('*',3600,'A','10.210.71.1');
INSERT INTO test_mydomain_com_cn_outside VALUES ('conf.test.mydomain.com.cn',3600,'CNAME','cas2.test.mydomain.com.cn.');
UNLOCK TABLES;


############################################
13. Continue configuring bind
vim /usr/local/bind/etc/named.conf

Append a zone in the following format:

zone "mydomain.com" {

    type master;

    notify no; 

    database "mysqldb dbname tablename hostname user password"; };

mydomain.com is the domain to serve

dbname is the database name

tablename is the table that holds the resource records

hostname is the database server address

user is a database user allowed to access the table above

password is that user's password

That completes the configuration.

Run named in the foreground first:

/usr/local/bind/sbin/named -c /usr/local/bind/etc/named.conf -g

Once it looks healthy, run it normally:

/usr/local/bind/sbin/named -c /usr/local/bind/etc/named.conf 
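
Once named is up, the MySQL-backed zone can be spot-checked with dig against the listen address from the named.conf below; the A record comes from the test data inserted above:

dig @172.16.188.123 cas1.test.mydomain.com.cn A +short
# 10.210.71.25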
[root@silence etc]# ls
bind.keys  named.conf  named.root  rndc.conf  root.zone
[root@silence etc]# cat root.zone 
$TTL 86400
@       IN      SOA     ns1.mydomain.com.  w1.mydomain.com. (
                        2018070110
                        1H
                        5M
                        7D
                        1D )
        IN      NS      ns1
        IN      NS      ns2
        IN      MX 10   mx1
        IN      MX 20   mx2
ns1     IN      A       192.168.108.160
ns2     IN      A       192.168.108.138
ns3     IN      A       192.168.108.166
mx1     IN      A       192.168.108.138
w1      IN      A       192.168.1.2
w0      IN      A       192.168.1.1
www     IN      A       192.168.108.160
*       IN      A       192.168.108.166
[root@silence etc]# cat named.conf 
 key "rndc-key" {
     algorithm hmac-md5;
     secret "ZYobWCcSDr2HDCMuojc6gg==";
 };
 
 controls {
     inet 127.0.0.1 port 953
         allow { 127.0.0.1; } keys { "rndc-key"; };
 };
options {
     listen-on port 53 { 127.0.0.1;172.16.188.123; };
     directory "/data/work/bind9.9.3";
     allow-query-cache { any; };
     allow-query   { any; };
     dnssec-enable yes;
     dnssec-validation yes;
     dnssec-lookaside auto;
 };
 zone "." { 
     type hint; 
     file "/data/work/bind9.9.3/etc/root.zone"; 
 };
 zone "mydomain" {
    type forward;    
    forwarders { 114.114.114.114;8.8.8.8; };
    forward first;
 };
 logging {
        channel bind_log {
                file "/data/work/bind9.9.3/logs/bind.log" versions 3 size 20m;
                severity info;
                print-time yes;
                print-severity yes;
                print-category yes;
        };
        category default {
                bind_log;
        };
 };
 zone "test.mydomain.com.cn" IN{ 
     type master;
     notify no;
     database "mysqldb mydomain test_mydomain_com_cn_outside 172.16.188.123 root 111111"; 
 };
 zone "16.172.in-addr.arpa" IN{
     type master;
     notify no;
     database "mysqldb mydomain 10_outside 172.16.188.123 root 111111";
 };
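
Before (re)starting named, the configuration syntax can be checked with named-checkconf, which a default BIND build installs next to named (the prefix path here matches the directory option above; no output means the file parses cleanly):

/data/work/bind9.9.3/sbin/named-checkconf /data/work/bind9.9.3/etc/named.conf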

Start bind.

To register it as a system service, create an init script:

vim /etc/init.d/named

####################################################################

#!/bin/bash
#
# description: named daemon
# chkconfig: - 25 88
#

# pid file written by named (these paths can be chosen freely)
pidFile=/usr/local/bind/var/run/named.pid

# lock file used to decide whether the service is running
lockFile=/var/lock/subsys/named

# path to the named configuration file
confFile=/usr/local/bind/etc/named.conf

# source the init-script function library if it exists
[ -r /etc/rc.d/init.d/functions ] && . /etc/rc.d/init.d/functions

# start
start() {
        # if the lock file exists, the service is already running
        if [ -e $lockFile ]; then
            echo "named is already running..."
            exit 0
        fi
        echo -n "Starting named:"
        # daemon() from the functions library starts named, tracks the pid
        # file and user, and prints the usual [ OK ] / [ FAILED ] status
        daemon --pidfile "$pidFile" /usr/local/bind/sbin/named -c "$confFile"
        RETVAL=$?
        echo
        # on success create the lock file, otherwise clean up
        if [ $RETVAL -eq 0 ]; then
                touch $lockFile
                return $RETVAL
        else
                rm -f $lockFile $pidFile
                return 1
        fi
}

# stop
stop() {
        # if the lock file is missing, the service is not running
        if [ ! -e $lockFile ]; then
                echo "named is stopped."
                return 0
        fi
        echo -n "Stopping named:"
        killproc named
        RETVAL=$?
        echo
        # on success remove the lock and pid files
        if [ $RETVAL -eq 0 ];then
                rm -f $lockFile $pidFile
                return 0
        else
                echo "Cannot stop named."
                # failure() from the functions library prints [ FAILED ]
                failure
                return 1
        fi
}

# restart
restart() {
        stop
        sleep 2
        start
}

# reload: send named a HUP signal
reload() {
        echo -n "Reloading named: "
        killproc named -HUP
        RETVAL=$?
        echo
        return $RETVAL
}

# status
status() {
        if pidof named &> /dev/null; then
                echo -n "named is running..."
                success
                echo
        else
                echo -n "named is stopped..."
                success
                echo
        fi
}

# usage message
usage() {
        echo "Usage: named {start|stop|restart|status|reload}"
}

case $1 in
start)
        start ;;
stop)
        stop ;;
restart)
        restart ;;
status)
        status ;;
reload)
        reload ;;
*)
        usage
        exit 4 ;;
esac
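
Then make the script executable and register it (standard SysV tooling on CentOS; the start/stop priorities come from the chkconfig header above):

chmod +x /etc/init.d/named
chkconfig --add named
chkconfig named on
service named start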

Test:

[root@silence ~]# nslookup cas1.test.mydomain.com.cn
Server:        172.16.188.123
Address:    172.16.188.123#53

Name:    cas1.test.mydomain.com.cn
Address: 10.210.71.25

[root@silence ~]# /data/work/bind9.9.3/bin/dig -t A cas1.test.mydomain.com.cn @172.16.188.123

; <<>> DiG 9.9.3-P1 <<>> -t A cas1.test.mydomain.com.cn @172.16.188.123
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10127
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 3

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;cas1.test.mydomain.com.cn.    IN    A

;; ANSWER SECTION:
cas1.test.mydomain.com.cn. 3600    IN    A    10.210.71.25

;; AUTHORITY SECTION:
test.mydomain.com.cn.    3600    IN    NS    cas1.test.mydomain.com.cn.
test.mydomain.com.cn.    3600    IN    NS    cas2.test.mydomain.com.cn.
test.mydomain.com.cn.    3600    IN    NS    cas3.test.mydomain.com.cn.

;; ADDITIONAL SECTION:
cas2.test.mydomain.com.cn. 3600    IN    A    10.210.71.27
cas3.test.mydomain.com.cn. 3600    IN    A    10.210.132.80

;; Query time: 3 msec
;; SERVER: 172.16.188.123#53(172.16.188.123)
;; WHEN: Wed Nov 25 16:37:23 CST 2020
;; MSG SIZE  rcvd: 154

[root@silence ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 172.16.188.123
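
The point of the MySQL backend is that records can be managed with plain SQL; the sdb-style mysql driver looks rows up per query, so (assuming that behavior in your build) a new row should resolve without reloading named. A sketch using the credentials from named.conf above; the demo hostname is hypothetical:

# add an A record straight into the zone table
mysql -h 172.16.188.123 -u root -p111111 mydomain -e "INSERT INTO test_mydomain_com_cn_outside VALUES ('demo.test.mydomain.com.cn',3600,'A','10.210.71.99');"
# it should answer immediately, with no reload
/data/work/bind9.9.3/bin/dig -t A demo.test.mydomain.com.cn @172.16.188.123 +short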


大爷来玩呀你懂得 published an article · 2020-11-23

CentOS 7: deploying a Redis 3.2.9 cluster (three masters, three replicas)

1. Virtual machine environment

Linux distribution and version:

CentOS 7, 64-bit

Host IPs:

192.168.56.180

192.168.56.181

192.168.56.182

Each server runs one master and one replica, so the three servers together form a three-master, three-replica cluster.

The Redis installation directory, logs, and configuration files all live under /data/work/.

2. Download and extract the package

First, on the 192.168.56.180 machine:

cd /data/work/
wget http://download.redis.io/releases/redis-3.2.9.tar.gz
tar -zxvf redis-3.2.9.tar.gz

3. Install

Run the following in the /data/work/redis-3.2.9/ directory:

make && make install PREFIX=/data/work/redis-3.2.9
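
A quick sanity check that the build installed where expected:

ls /data/work/redis-3.2.9/bin
/data/work/redis-3.2.9/bin/redis-server --version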

4. Configuration

Create the folders for the cluster configuration, logs, and data:

cd /data/work/redis-3.2.9/
mkdir cluster-conf

mkdir -pv /data/work/redis-3.2.9/logs

Create one folder per port:

cd cluster-conf
mkdir 7777
mkdir 8888

Copy the stock config file into the /data/work/redis-3.2.9/cluster-conf/7777 directory:

cp /data/work/redis-3.2.9/redis.conf /data/work/redis-3.2.9/cluster-conf/7777
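
The article only shows the copy for 7777; presumably the same copy is needed for the 8888 instance before editing it:

cp /data/work/redis-3.2.9/redis.conf /data/work/redis-3.2.9/cluster-conf/8888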
Edit the redis.conf (master instance, port 7777) in the 7777 directory. Note that once this tree is copied to the other hosts, the bind line must list each machine's own IP:
bind 192.168.56.181 127.0.0.1
protected-mode yes
masterauth "xxxxx"
requirepass "xxxxx"
port 7777
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /data/work/redis-3.2.9/cluster-conf/7777/redis_7777.pid
loglevel notice
logfile "/data/work/redis-3.2.9/logs/redis_8888.log"

databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
maxclients 100000
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
cluster-enabled yes
cluster-config-file /data/work/redis-3.2.9/cluster-conf/7777/nodes-7777.conf
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
The cluster config file /data/work/redis-3.2.9/cluster-conf/8888/redis.conf (replica instance, port 8888):
bind 192.168.56.181 127.0.0.1
protected-mode yes
masterauth "xxxxx"
requirepass "xxxxx"
port 8888
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /data/work/redis-3.2.9/cluster-conf/8888/redis_8888.pid
loglevel notice
logfile "/data/work/redis-3.2.9/logs/redis_8888.log"

databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
maxclients 100000
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
cluster-enabled yes
cluster-config-file /data/work/redis-3.2.9/cluster-conf/8888/nodes-8888.conf
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
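
With both files in place, a diff should show only the port, the pid/log/cluster-file paths, and the appendonly flag differing:

diff /data/work/redis-3.2.9/cluster-conf/7777/redis.conf /data/work/redis-3.2.9/cluster-conf/8888/redis.conf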

5. Copy the installed and configured Redis tree to the other servers with scp

Everything is now configured on the 192.168.56.180 machine; copy it to the 181 and 182 machines.

Make sure /data/work/ exists on the two target machines, then:

scp -r /data/work/redis-3.2.9 root@192.168.56.181:/data/work/
# enter the 181 machine's password to transfer the data

scp -r /data/work/redis-3.2.9 root@192.168.56.182:/data/work/
# enter the 182 machine's password to transfer the data

6. Start Redis on all three machines

Run the following on each of the three machines:

/data/work/redis-3.2.9/bin/redis-server /data/work/redis-3.2.9/cluster-conf/7777/redis.conf &
/data/work/redis-3.2.9/bin/redis-server /data/work/redis-3.2.9/cluster-conf/8888/redis.conf &

Check that both processes are up:

ps -ef|grep redis
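
Each instance should also be listening on its client port plus a cluster bus port at client port + 10000 (so 17777 and 18888 here):

ss -lntp | grep redis-server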

7. Install Ruby 2.4.0

gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB

yum -y update nss

curl -sSL https://get.rvm.io | bash -s stable

source /etc/profile.d/rvm.sh

rvm install 2.4.0

rvm use 2.4.0

gem install redis

yum install rubygems

8. Create the cluster

/data/work/redis-3.2.9/src/redis-trib.rb create --replicas 1 192.168.56.180:7777 192.168.56.180:8888 192.168.56.181:7777 192.168.56.181:8888 192.168.56.182:7777 192.168.56.182:8888

Creation fails with the following error:

>>> Creating cluster
[ERR] Sorry, can't connect to node 192.168.56.180:7777

The cause is the auth settings in redis.conf (the password is whatever you chose):

requirepass "xxxxx"   # auth password
masterauth "xxxxx"
After a password is set, redis-trib.rb commands such as ./redis-trib.rb check 127.0.0.1:7000 fail with: [ERR] Sorry, can't connect to node 127.0.0.1:7000
Fix: edit the password in the redis gem's client.rb (the gem path depends on the Ruby and redis gem versions installed):

find / -name client.rb

vim /usr/local/rvm/gems/ruby-2.4.0/gems/redis-4.2.5/lib/redis/client.rb

# frozen_string_literal: true
...
connect_timeout: nil,
timeout: 5.0,
password: "xxxxx", ####### set the password here (do the same on every machine)

**Note: the client.rb path can be located with the find command: find / -name 'client.rb'**
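
If you are applying this on several machines, a sed one-liner can make the same edit; this assumes the stock line reads password: nil, as it does in redis-rb 4.x (check your copy first):

# replace the default nil password with the cluster password
sed -i 's/password: nil,/password: "xxxxx",/' /usr/local/rvm/gems/ruby-2.4.0/gems/redis-4.2.5/lib/redis/client.rb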
Access the cluster with the password:

/data/work/redis-3.2.9/bin/redis-cli -c -p 8888 -a xxxxx

 >>> Performing Cluster Check (using node 127.0.0.1:7777)
M: d8xxxefxxxxxxxxxxxxxxxxxx2 127.0.0.1:7777
slots: (0 slots) master
0 additional replica(s)
[OK] All nodes agree about slots configuration.
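
After a successful create, cluster health can be verified from any node; cluster_state:ok and all 16384 slots assigned are the values to look for:

/data/work/redis-3.2.9/bin/redis-cli -c -p 7777 -a xxxxx cluster info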

9. Test the cluster

### 192.168.56.180

/data/work/redis-3.2.9/bin/redis-cli -c -p 8888 -a xxxxx

127.0.0.1:8888> set test hellokugou
-> Redirected to slot [6918] located at 192.168.56.180:7777
OK

### 192.168.56.182

/data/work/redis-3.2.9/bin/redis-cli -c -p 7777 -a xxxxx

127.0.0.1:7777> get test
-> Redirected to slot [6918] located at 192.168.56.182:7777
"hellokugou"

172.22.15.245:7777> CLUSTER NODES
6xxxxxxxxxxxx7c 192.168.56.180:8888 slave 7xxxxxxxxxxxxxxxxxxxxx7 0 1606114181062 4 connected
3xxxxxxxxxxxxx1 192.168.56.181:8888 slave dxxxxxxxxxxxxxxxxxxxxxx2 0 1606114179060 6 connected
7xxxxxxxxxxxxxa 192.168.56.182:8888 slave 8xxxxxxxxxxxxxxxxxxxxx9 0 1606114182064 3 connected
dxxxxxxxxxxxxx2 192.168.56.180:7777 master - 0 1606114180062 5 connected 10923-16383
8xxxxxxxxxxxxx9 192.168.56.181:7777 myself,master - 0 0 3 connected 5461-10922
7xxxxxxxxxxxxx7 192.168.56.182:7777 master - 0 1606114182564 1 connected 0-5460
192.168.56.182:7777> exit
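
An optional failover check: stop one master and confirm its replica is promoted (a sketch; adjust hosts and ports to your layout):

# stop the master instance on 192.168.56.180:7777
/data/work/redis-3.2.9/bin/redis-cli -h 192.168.56.180 -p 7777 -a xxxxx shutdown nosave
# a few seconds later the former replica should be listed as a master
/data/work/redis-3.2.9/bin/redis-cli -h 192.168.56.181 -p 7777 -a xxxxx cluster nodes | grep master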
