Objective: understand, through hands-on practice, how a cluster's Deployments, Services, and Ingresses relate to each other

Pinned information:
1. The password is uniformly set to: appuser2022@devA
2. Release resources in this order: the cluster first, then the node pool (nodes), and finally any other provisioned resources such as the VPC and networking


Deploying an Application on Alibaba Cloud ACK

1. RAM role authorization (grant the roles for storage, monitoring, logging, etc.)

2. Create the cluster (ACK managed edition)
Cluster name: aliyun1
Cluster tier: Basic
Region: China North 2 (Beijing)   # the available zones had no matching public NAT gateway, so the region was switched to Hangzhou
Note: the pod CIDR and the Service CIDR (ClusterIP range) must not overlap
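The CIDR constraint can be illustrated with two non-overlapping ranges (the values below are illustrative, not the ones used in this setup):

```
Pod CIDR (container network): 172.16.0.0/16    # addresses assigned to pods
Service CIDR (ClusterIP):     192.168.0.0/16   # virtual IPs assigned to Services
```

If the two ranges overlap, a Service's ClusterIP can collide with a pod IP and traffic is routed incorrectly.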

3. Configure the node pool
Set the password: appuser2022@devA
Note: there is no need to assign a public IP to every node; a single public endpoint on the API server is enough

4. Component configuration
CSI: create PVs and PVCs through a StorageClass
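With CSI enabled, a PersistentVolumeClaim that names a StorageClass has its PV provisioned dynamically. A minimal sketch; the StorageClass name below is an assumption, so check what the cluster actually offers with `kubectl get storageclass`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: alicloud-disk-topology-alltype   # assumed name; verify with `kubectl get sc`
  resources:
    requests:
      storage: 20Gi   # Alibaba Cloud disks enforce a minimum size (20 GiB for most disk types)
```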

Notes (what cluster creation does behind the scenes):

  • Creates ECS instances, configures SSH public-key login from the management node to the other nodes, and installs and configures the Kubernetes cluster via CloudInit
  • Creates a security group that allows all inbound ICMP traffic from within the VPC
  • Creates VPC route rules
  • Creates a NAT gateway and an EIP
  • Creates a RAM role and the corresponding policies. The role can query, create, and delete ECS instances; attach and detach cloud disks; and has full permissions on SLB, CloudMonitor, VPC, Log Service, and NAS. The Kubernetes cluster dynamically creates SLB instances, cloud disks, and VPC route rules according to the workloads you deploy
  • Creates an internal SLB instance exposing port 6443
  • While you use ACK dedicated and managed clusters, the system collects monitoring and log data from the control-plane components on the management nodes to keep the cluster stable
  • If cluster creation fails, resources that were already created still incur charges; clean them up promptly

5. Deploy the services (gray-release demo)
1) Split traffic by client request header: any request carrying the matching header is routed to the new version; requests without it go to the old version.

Note: use a Deployment for stateless applications.
Create old.yaml:

apiVersion: apps/v1
kind: Deployment       # a Deployment manages the stateless app
metadata:
  name: old-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      run: old-nginx
  template:
    metadata:
      labels:
        run: old-nginx
    spec:
      containers:
        - image: registry.cn-hangzhou.aliyuncs.com/acs-sample/old-nginx
          imagePullPolicy: Always
          name: old-nginx
          ports:
          - containerPort: 80
            protocol: TCP
      restartPolicy: Always
---
apiVersion: v1
kind: Service          # the Service exposing old-nginx
metadata:
  name: old-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: old-nginx
  sessionAffinity: None
  type: NodePort

Apply the manifest with kubectl apply -f old.yaml, then list the cluster nodes:

shell@Alicloud:~$ kubectl get no
NAME                    STATUS   ROLES    AGE   VERSION
cn-hangzhou.10.0.0.47   Ready    <none>   40m   v1.28.3-aliyun.1
cn-hangzhou.10.0.0.48   Ready    <none>   40m   v1.28.3-aliyun.1
cn-hangzhou.10.0.0.49   Ready    <none>   40m   v1.28.3-aliyun.1
shell@Alicloud:~$ kubectl get no -owide
NAME                    STATUS   ROLES    AGE   VERSION            INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                 KERNEL-VERSION           CONTAINER-RUNTIME
cn-hangzhou.10.0.0.47   Ready    <none>   41m   v1.28.3-aliyun.1   10.0.0.47     <none>        Alibaba Cloud Linux 3 (Soaring Falcon)   5.10.134-15.al8.x86_64   containerd://1.6.20
cn-hangzhou.10.0.0.48   Ready    <none>   41m   v1.28.3-aliyun.1   10.0.0.48     <none>        Alibaba Cloud Linux 3 (Soaring Falcon)   5.10.134-15.al8.x86_64   containerd://1.6.20
cn-hangzhou.10.0.0.49   Ready    <none>   41m   v1.28.3-aliyun.1   10.0.0.49     <none>        Alibaba Cloud Linux 3 (Soaring Falcon)   5.10.134-15.al8.x86_64   containerd://1.6.20

Check the pod status (everything should be Running):

shell@Alicloud:~$ kubectl get po -A
NAMESPACE     NAME                                                       READY   STATUS      RESTARTS      AGE
arms-prom     arms-prometheus-ack-arms-prometheus-696fbb6bc9-vglcw       1/1     Running     0             44m
arms-prom     kube-state-metrics-f8c784d79-7w8jd                         1/1     Running     0             44m
arms-prom     node-exporter-25pbr                                        2/2     Running     0             42m
arms-prom     node-exporter-mtdrf                                        2/2     Running     0             42m
arms-prom     node-exporter-sjglj                                        2/2     Running     0             42m
default       old-nginx-69dc7648f4-thglq                                 1/1     Running     0             14m
default       old-nginx-69dc7648f4-xh55c                                 1/1     Running     0             14m
kube-system   ack-node-local-dns-admission-controller-8567c85666-2q8rr   1/1     Running     0             44m
kube-system   ack-node-local-dns-admission-controller-8567c85666-8ldpm   1/1     Running     0             44m
kube-system   ack-node-problem-detector-daemonset-glbj8                  1/1     Running     0             42m
kube-system   ack-node-problem-detector-daemonset-gpxdh                  1/1     Running     0             42m
kube-system   ack-node-problem-detector-daemonset-lpbkq                  1/1     Running     0             42m
kube-system   ack-node-problem-detector-eventer-578c778bf-wxpwc          1/1     Running     0             44m
kube-system   alibaba-log-controller-d87f47bcb-q796l                     1/1     Running     1 (41m ago)   44m
kube-system   alicloud-monitor-controller-7ff6866b74-tmtd7               1/1     Running     0             44m
kube-system   coredns-547b98dbcc-8p8hs                                   1/1     Running     0             44m
kube-system   coredns-547b98dbcc-lfthq                                   1/1     Running     0             44m
kube-system   csi-plugin-j4fps                                           4/4     Running     0             42m
kube-system   csi-plugin-ngsr6                                           4/4     Running     0             42m
kube-system   csi-plugin-rb4jl                                           4/4     Running     0             42m
kube-system   csi-provisioner-79fdb75c95-2q9l2                           9/9     Running     0             44m
kube-system   csi-provisioner-79fdb75c95-fh6xq                           9/9     Running     0             44m
kube-system   kube-eventer-init-v1.7-48a2acc-aliyun-1.2.18-js7z7         0/1     Completed   0             44m
kube-system   kube-proxy-worker-9b98f                                    1/1     Running     0             42m
kube-system   kube-proxy-worker-9kp9m                                    1/1     Running     0             42m
kube-system   kube-proxy-worker-rln6x                                    1/1     Running     0             42m
kube-system   logtail-ds-7cxgr                                           1/1     Running     0             42m
kube-system   logtail-ds-dzb56                                           1/1     Running     0             42m
kube-system   logtail-ds-xsbq5                                           1/1     Running     0             42m
kube-system   logtail-statefulset-0                                      1/1     Running     0             44m
kube-system   metrics-server-b79fb97d4-t5bqb                             1/1     Running     0             44m
kube-system   nginx-ingress-controller-78dc4c87bf-hd5wr                  1/1     Running     0             44m
kube-system   nginx-ingress-controller-78dc4c87bf-jbp5b                  1/1     Running     0             44m
kube-system   node-local-dns-7tgtw                                       1/1     Running     0             42m
kube-system   node-local-dns-m2zp8                                       1/1     Running     0             42m
kube-system   node-local-dns-pxgxp                                       1/1     Running     0             42m
kube-system   security-inspector-54864cb979-m2d8g                        1/1     Running     0             44m
kube-system   sls-kube-state-metrics-5c75465594-thkbz                    1/1     Running     0             44m
kube-system   storage-auto-expander-784b68dd9c-fw2n8                     1/1     Running     0             39m
kube-system   storage-cnfs-77bc6dc6ff-mpxmc                              1/1     Running     0             39m
kube-system   storage-monitor-5cf5b5499c-kblkz                           1/1     Running     0             40m
kube-system   storage-operator-6dc9755f7b-9lhzg                          1/1     Running     0             44m
kube-system   terway-eniip-4lj4x                                         2/2     Running     0             42m
kube-system   terway-eniip-n5vl5                                         2/2     Running     0             42m
kube-system   terway-eniip-ssxr5                                         2/2     Running     0             42m

2) Bind the service to the network and access it through an Ingress

In the console, open Routes (Ingress) and create the Ingress from a YAML file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gray-release
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      # old-version service
      - path: /
        backend:
          service:
            name: old-nginx    # requests for this host go to the old-nginx backend Service
            port:
              number: 80
        pathType: ImplementationSpecific

Note: the Ingress routes requests to the backend Service by domain name. An Ingress is a layer-7 (HTTP) entry point, so the service becomes reachable simply through the domain.
That is: the domain www.example.com is now bound to old-nginx.

7. Log in to the cluster and access the service

shell@Alicloud:~$ kubectl get ingress
NAME           CLASS   HOSTS             ADDRESS          PORTS   AGE
gray-release   nginx   www.example.com   116.62.214.106   80      3m57s

Test without the canary request header: a response of old means the request was served by the old version.

shell@Alicloud:~$ curl -H "Host: www.example.com" 116.62.214.106
old

8. Deploy the new version, new-nginx
Create the YAML file under Stateless workloads:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: new-nginx
  template:
    metadata:
      labels:
        run: new-nginx
    spec:
      containers:
        - image: registry.cn-hangzhou.aliyuncs.com/acs-sample/new-nginx
          imagePullPolicy: Always
          name: new-nginx
          ports:
          - containerPort: 80
            protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: new-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: new-nginx
  sessionAffinity: None
  type: NodePort

Note: YAML templates can be found in Alibaba Cloud's template library or by searching on kubernetes.io (e.g. for Deployment or Service).

Create the canary route (Ingress) in the same way:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gray-release-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"                  # mark this Ingress as a canary
    nginx.ingress.kubernetes.io/canary-by-header: "foo"         # route by this request header
    nginx.ingress.kubernetes.io/canary-by-header-value: "bar"   # ...when it carries this value
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      # new-version service
      - path: /
        backend:
          service:
            name: new-nginx
            port:
              number: 80
        pathType: ImplementationSpecific

Troubleshooting:
Changing the host as below to example1 made the error go away; root cause to be followed up.

  • host: www.example1.com

**Invalid request parameter; check the error details and try again.
admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "www.example.com" and path "/" is already defined in ingress default/gray-release**

The likely cause of this conflict: if the canary annotations are not nested under metadata.annotations (note the required annotations: key), the controller treats the Ingress as a regular one, and its admission webhook rejects a second Ingress with the same host and path. With the annotations nested correctly, reusing www.example.com should be accepted.

Call the service (a request carrying the foo: bar header should be routed to new-nginx):

shell@Alicloud:~$ curl -H "Host: www.example1.com" -H "foo:bar" 116.62.214.106

How to connect to the ACK cluster from a local machine:
Cluster Information → Connection Information → Public Access → copy the kubeconfig file (it contains the cluster's CA certificate and your client credentials, not just a public key)

mkdir -p ~/.kube
vim ~/.kube/config

Paste the content copied from the console and save.
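The copied file is a standard kubeconfig: it bundles the API server endpoint, the cluster CA certificate, and your client credentials. Its shape is roughly as follows (all values redacted or illustrative):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://<api-server-public-ip>:6443    # the cluster's public API endpoint
    certificate-authority-data: <base64-encoded CA certificate>
contexts:
- name: ack-context
  context:
    cluster: kubernetes
    user: ack-user
current-context: ack-context
users:
- name: ack-user
  user:
    client-certificate-data: <base64-encoded client certificate>
    client-key-data: <base64-encoded client key>
```

Once the file is in place, running kubectl get no locally should return the same node list shown earlier.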


热心肠的火车