Let's start with the launch of a system

Even the simplest system has to go through the following life cycle, from code to production and from operations to upgrades

  • Front-end code compilation (different configurations for different online environments)
  • Front-end code packaging
  • Back-end code compilation (different configurations for different online environments)
  • Back-end code packaging
  • Set up nginx to serve the front-end code
  • Set up Tomcat to deploy the back-end code
  • Front-end application cluster
  • Back-end application cluster
  • Configure load balancing
  • Application monitoring
  • Restart on failure
  • Rolling upgrade
  • Application rollback
  • Elastic scaling
  • Operations handover

If these steps are done manually, every one of them carries risk

  • Code compilation (inconsistent compiler versions, forgetting to update the configuration file, running the test configuration in production)
  • Environment setup (network restrictions, operating system versions, and so on differ between environments, and the same program may behave differently in each)
  • Load-balancer configuration (adding a node requires manual work, and a failed node cannot be failed over automatically)
  • Application monitoring (manual monitoring is not realistic)
  • Restart on failure (cannot realistically be done by hand; a human always lags behind the fault)
  • Rolling upgrades (impractical to perform manually)
  • Application rollback (a rollback mechanism must be planned at the solution level, such as keeping the previous version's deployment package)
  • Elastic scaling (impractical to perform manually)
  • Operations handover (every system must be handed over: code repository, packaging method, deployment steps, restart steps, upgrade steps, and so on)

During operations handovers we ran into a lot of code that was inconsistent with what was actually running online, and even lost code outright. Perhaps a previous operations colleague never committed the latest code after a deployment, and after enough time it simply cannot be found. Especially for a system that has not been updated in a long while, we cannot guarantee that the current code is the latest. Inconsistent code is disastrous for operations. In short,

People are unreliable

Ideally, once a developer commits the code, nothing further should depend on that person

  • Compilation should not happen on a developer's personal computer; it should run on a unified platform, in a unified environment, to a unified standard
  • An environment should never be set up by hand twice; like handwriting, even after tens of thousands of attempts you will never produce two copies that are exactly identical
  • Clustering is automated. What does that mean? Cluster size should be a configuration item; a single machine and a cluster should not require completely different installation and deployment methods
  • Monitoring is automated, diversified, and configurable, and helps pinpoint system problems quickly with the aid of graphs
  • Elastic scaling is automated as well, scaling out and in according to system load
  • Restart on failure and application rollback are automatic: a crashed application is restarted, and a failed deployment is rolled back
  • Handover is standardized; every system's handover follows the same template

What problem does kubernetes solve

  • Ensures environments are completely consistent
  • Provides cluster configuration
  • Restart on failure
  • Elastic scaling

What problem does jenkins solve

  • Online compilation, packaging, deployment

What problem does prometheus solve

  • Monitoring
  • Alerting
  • Notification

If all of the above automation is in place, then for a running system the entire handover boils down to

  • A project pipeline

All of this change starts with docker

Everything develops through standardization, and the ancients realized its importance more than two thousand years ago: after Qin Shihuang unified the six states, he enforced "the same tracks for carts, the same script for writing", a form of standardization, and only from then on was China unified in the true sense. Smart IT people have likewise put forward all kinds of standards

  • The W3C standards defined how the web works, and the Internet entered a stage of rapid development
  • The JDBC standard freed programmers from the nightmare of adapting to different databases
  • The J2EE standard made Java the most popular back-end development language
  • The TCP/IP protocol suite laid the foundation of the Internet
  • and many more

To put it bluntly, a standard is the ultimate abstraction of a thing. So what is the abstraction of an application? An application has far too many attributes.

  • Different programming languages
  • Different runtime environments
  • Dependencies on different third-party systems
  • Different packaging and compilation methods
  • Different deployment methods
  • Different restart methods (systemctl is one attempt at solving this)
  • and many more

And these differences are not minor issues; they are serious, make-or-break problems: if they are not dealt with, the application simply will not run. Yet insisting on abstracting them all away would only make applications more complex. Smart programmers found their inspiration in the shipping container, because goods and applications have a lot in common

  • Different sizes
  • Different shapes
  • Different modes of transport
  • Different loading methods
  • Different unloading methods

Biodiversity makes our planet vibrant, but diversity also makes standardization difficult. Humans, being clever, designed the container system.

We only ship containers; how you get your goods into the container is up to you

When we pack an application into a docker image, we are loading cargo into a container. Managing the application then becomes managing docker, just as managing cargo becomes managing containers, and everything gets much simpler.

In fact, we were exploring standardization before docker, for example with virtual machines: copying virtual machine images can achieve docker-like results. So why did that approach never take off? Because virtual machines are too heavy; it is like hauling feathers in a truck, far too inefficient. docker has many advantages over virtual machines

  • Lightweight: docker itself consumes very few system resources
  • Fast startup: a container starts far faster than a virtual machine
  • Easy to migrate: the essence of a docker image is a plain-text Dockerfile, while a virtual machine image easily runs to several GB
  • Easy to manage: mastering a handful of commands is enough to manage applications

If you have never touched docker, you can think of it as an ultra-lightweight virtual machine. docker and virtual machines are fundamentally different, but the analogy gives you a first impression of what docker is.

Why is there kubernetes

In fact, before kubernetes appeared, docker was very popular but few companies actually ran it in production, because docker solved the big problems while leaving many small ones open.

  • No official management UI, and what exists is not user-friendly
  • No cross-host networking, which rules out large-scale use
  • No monitoring mechanism, so application status cannot be observed
  • No upper-layer services; docker provides only the infrastructure
  • No orchestration: a system often needs several Dockerfiles (docker-compose came along later, but that is another story)
  • No endorsement from a major company

In the final analysis, docker solved the two big problems of application deployment and application operations, but application management, orchestration, monitoring, elastic scaling, and other application-level concerns remained open. It was kubernetes that truly completed the unification, and only since then has docker unleashed its full power.

Architecturally, kubernetes is a management system for docker: it sits above docker and below the applications. Upward, it empowers applications and supplies them with all kinds of IT resources; downward, it schedules docker to achieve unified application management. So what can kubernetes provide for applications? In fact, every problem raised at the beginning of this article finds its solution in kubernetes, for example

  • Provide Deployment to solve the problem of application deployment
  • Provide Service to solve the problem of application load balancing
  • Provide ConfigMap to solve the problem of different environment configuration
  • Provide Ingress to solve the problem of application access
  • Provide PersistentVolume to solve application storage problems
  • and many more

Basically, every non-business problem you run into during development has a solution in kubernetes; kubernetes is like a small town for applications, supplying them with every resource they need.
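If you already have a cluster at hand, kubectl's built-in documentation is a quick way to get a feel for these resource types; a small sketch (assumes kubectl is already configured against your cluster):

# show the schema and field-level documentation for a resource type
kubectl explain deployment
kubectl explain service.spec.type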

Build a kubernetes environment in five minutes

The main purpose of this part is to take away any fear of kubernetes; don't assume it is complicated. When kubernetes was first released it really was troublesome to install, but in recent years many solutions have made installation very simple, down to a single command.

  • Set hostname
# set the hostname
hostnamectl set-hostname k1
# add hosts entries
172.16.8.49 k1
172.16.8.50 k2

# create the rke user
groupadd docker
useradd -g docker rke
passwd rke

# set up SSH trust between nodes
ssh-keygen
ssh-copy-id rke@k2
  • Install docker
yum install docker
systemctl start docker
  • Install kubernetes
# download rke from
https://github.com/rancher/rke/releases

# cluster.yml
nodes:
- address: k1
  internal_address: k1
  role: [controlplane,etcd]
  hostname_override: k1
  user: rke
- address: k2
  internal_address: k2
  role: [worker]
  hostname_override: k2
  user: rke
services:
  kubelet:
    extra_args:
      max-pods: "10000"
  kube-api:
    service_node_port_range: "1-65535"
authentication:
  strategy: x509
authorization:
  mode: rbac

# run rke up
./rke_linux-amd64 up
  • Install kubectl
wget https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl

install -o rke -g docker -m 0755 kubectl /usr/local/bin/kubectl

$ kubectl get nodes
NAME   STATUS   ROLES               AGE     VERSION
k1     Ready    controlplane,etcd   6m53s   v1.20.8
k2     Ready    worker              6m52s   v1.20.8
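A step that is easy to miss: rke up writes the cluster's kubeconfig to a file named kube_config_cluster.yml in the working directory, and kubectl has to be pointed at it before the get nodes call above will work:

# point kubectl at the kubeconfig generated by rke up
export KUBECONFIG=$PWD/kube_config_cluster.yml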
  • Install rancher
docker run --privileged -d --restart=unless-stopped -p 8080:80 -p 4443:443 rancher/rancher

kubernetes architecture

Overall structure

Components

  • master: the k8s master node, responsible for resource management and application scheduling; applications are usually not deployed on it
  • node: a k8s worker node, responsible for running applications
  • API Server: the resource control interface; every operation on k8s resources goes through the API Server, which runs only on the master node
  • etcd: the cluster database; all cluster data is stored in etcd, a high-performance distributed key-value store
  • kubelet: runs on every node; it receives scheduling decisions from the master, deploys applications onto the node, and periodically reports application and node status back as input for the master's scheduling decisions
  • kube-proxy: you can think of it as something like an nginx responsible for proxying traffic to applications (under the hood it is actually implemented with iptables or IPVS rules rather than nginx)
  • kube-scheduler: the scheduler, responsible for deciding which node a pod runs on
  • Controller manager: ensures applications stay at the state the user declared; for example, if an application is set to 2 replicas, the controller makes sure 2 replicas exist at all times

Together, these components ensure that an application on kubernetes is

  • Deployed in the most sensible place
  • Restarted when it crashes
  • Scaled out when resources run short

Common resources

POD

  • The smallest unit that kubernetes manages
  • A POD is essentially a running instance of a docker image (strictly speaking, a POD can hold one or more containers)
  • All kubernetes operations revolve around PODs
  • If the applications docker manages are a mob of stragglers, then kubernetes is a regular army, with rules, ranks, and a division of labor, fit to manage them
  • A POD is a single soldier: however complex the division of labor, orders are ultimately carried out by individual soldiers
  • Various policies can be defined, such as how many PODs must be running at any time; to keep the application "immortal", kubernetes automatically deploys new PODs to satisfy the policy (a minimal POD manifest is sketched below)
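Here is the minimal POD manifest mentioned above, submitted straight from the shell through a heredoc (a sketch; the nginx image and the poc namespace are just placeholders):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  namespace: poc
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
EOF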

Deployment

  • A collection of PODs
  • Can be understood as the set of all PODs a system needs
  • Convenient to manage: you operate on a whole group of PODs at once, as the scaling example below shows
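Because a Deployment operates on the whole group, changing the replica count is a one-liner (assuming the kubernetes-example Deployment used later in this article):

kubectl scale deployment kubernetes-example --replicas=3 -n poc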

Service

  • Provides a single, stable address for a group of PODs
  • Similar to nginx, it acts as a reverse proxy

ConfigMap

  • Holds the system's configuration files
  • Different environments should differ only in configuration; ConfigMap's design achieves exactly that
  • A ConfigMap is ultimately consumed by PODs

Ingress

  • Routes requests to applications by domain name, similar to nginx's server_name

kubectl basic commands

# list all nodes
kubectl get nodes
NAME   STATUS   ROLES               AGE   VERSION
k1     Ready    controlplane,etcd   22h   v1.20.8
k2     Ready    worker              22h   v1.20.8

# list all pods in all namespaces
kubectl get pods --all-namespaces

# list pods in a given namespace
kubectl get pods -n poc

# list all resources
kubectl get all --all-namespaces

# list all supported resource types
kubectl api-resources
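A few more everyday commands are worth adding to this list; <pod-name> is a placeholder:

# show detailed state and recent events of a pod
kubectl describe pod <pod-name> -n poc

# tail a pod's logs
kubectl logs -f <pod-name> -n poc

# open a shell inside a running pod
kubectl exec -it <pod-name> -n poc -- sh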

A spring boot deployment example

Code

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import javax.servlet.http.HttpServletRequest;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.Map;

@RestController
@RequestMapping("example")
public class ExampleController {

    @GetMapping("header")
    public Map<String, String> header(HttpServletRequest request) {
        Enumeration<String> headerNames = request.getHeaderNames();
        Map<String, String> headers = new HashMap<>();
        while (headerNames.hasMoreElements()) {
            String name = headerNames.nextElement();
            headers.put(name, request.getHeader(name));
        }
        return headers;
    }
}

The logic is very simple: return all the request headers. There is no database connection.

Dockerfile

FROM openjdk:8-jdk-alpine
MAINTAINER definesys.com
VOLUME /tmp
ADD kubernetes-demo-1.0-SNAPSHOT.jar app.jar
RUN echo "Asia/Shanghai" > /etc/timezone
EXPOSE 8080
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-Dfile.encoding=UTF-8","-Duser.timezone=Asia/Shanghai", "-jar","app.jar"]

Build image

docker build -t 172.16.81.92:8000/poc/kubernetes-example:v1.0 .
docker push 172.16.81.92:8000/poc/kubernetes-example:v1.0
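Before handing the image over to kubernetes, it is worth a local smoke test with plain docker (a sketch; assumes port 8080 is free on your machine):

# run the image locally and hit the endpoint
docker run -d --name example-test -p 8080:8080 172.16.81.92:8000/poc/kubernetes-example:v1.0
curl http://localhost:8080/example/header
docker rm -f example-test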

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kubernetes-example
    tier: backend
  name: kubernetes-example
  namespace: poc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-example
      tier: backend
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: kubernetes-example
        tier: backend
    spec:
      containers:
        - image: 172.16.81.92:8000/poc/kubernetes-example:v1.0
          imagePullPolicy: IfNotPresent
          name: kubernetes-example
          ports:
            - containerPort: 8080
              protocol: TCP
          resources: {}
      dnsPolicy: ClusterFirst
      schedulerName: default-scheduler
status: {}

Deploy via kubectl

kubectl apply -f app-deployment.yml

With only a Deployment, the application can be reached only from inside the cluster. We can start a busybox container and test with curl from within it

➜ curl  http://10.42.1.30:8080/example/header
{"host":"10.42.1.30:8080","user-agent":"curl/7.30.0","accept":"*/*"}
busybox is a small container image that bundles common linux tools
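One way to launch such a throwaway test container (a sketch; note that the stock busybox image ships wget rather than curl, so the equivalent probe uses wget):

kubectl run probe -n poc --rm -it --restart=Never --image=busybox \
  -- wget -qO- http://10.42.1.30:8080/example/header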

There are many problems with accessing a pod this way

  • The IP changes: k8s reschedules applications as the environment changes, so even without an upgrade the application may be redeployed and receive a new IP
  • Single point of access: with multiple replicas, you cannot reach them all through one pod's IP
  • Load balancing: with multiple replicas, the load inevitably has to be balanced somehow

The solution to these problems is the Service

Service

In traditional development, when we deploy multiple copies, i.e. a cluster, we put a reverse proxy such as nginx in front of it, acting both as a proxy and as a load balancer. In k8s we don't need to build such a reverse proxy and load balancer ourselves; the Service resource meets the requirement.

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-example-svc
  namespace: poc
spec:
  ports:
    - name: app-port
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: kubernetes-example
    tier: backend
  type: ClusterIP

Deploy via kubectl

kubectl apply -f app-svc.yaml

Note the Service type: ClusterIP means the Service gets an IP that is still internal to the cluster and cannot be reached from outside, but it does solve reverse proxying and load balancing, and other applications in the cluster can reach it by service name, for example

➜ curl http://kubernetes-example-svc:8080/example/header
{"host":"kubernetes-example-svc:8080","user-agent":"curl/7.30.0","accept":"*/*"}

So how do we solve external access? There are two solutions; let's introduce the first one

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-example-svc
  namespace: poc
spec:
  ports:
    - name: app-port
      port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 18080
  selector:
    app: kubernetes-example
    tier: backend
  type: NodePort

A Service of type NodePort opens a port directly on the host, much like docker's -p parameter. By specifying nodePort, the service can be reached externally at host ip:nodePort.

➜ curl http://k2:18080/example/header
{"host":"k2:18080","user-agent":"curl/7.29.0","accept":"*/*"}

Some readers may ask: with multiple hosts, don't you still need a load balancer in front? Yes, you can use an F5 or nginx. But there is another way: Ingress.

Ingress

Exposing services through NodePort has several serious problems

  • Ports are hard to manage: with many services, each with its own port, maintaining the port-to-service mapping becomes very troublesome
  • Punching holes in the host is inelegant

Ingress was introduced to solve NodePort's problems. Ingress routes by domain name: each service can be assigned a domain name, and requests to that domain name are routed to the corresponding service. All traffic is forwarded inside the cluster (by the ingress controller), so no per-service port needs to be opened on the host

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 500m
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
  name: kubernetes-example-svc-ingress
  namespace: poc
spec:
  rules:
    - host: kubernetes-example.definesys.com
      http:
        paths:
          - backend:
              serviceName: kubernetes-example-svc
              servicePort: app-port

kubernetes-example.definesys.com specifies the domain name, and serviceName matches the target Service. With Ingress we can access the application directly by domain name. The domain name does, of course, need to be added to the DNS system; with multiple nodes you can use DNS load balancing or put another nginx layer in front. For local testing, simply adding an entry to your hosts file is enough
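For a quick test that touches neither DNS nor the hosts file, you can also pass the Host header by hand to any node running the ingress controller (k2 stands in for such a node here):

curl -H "Host: kubernetes-example.definesys.com" http://k2/example/header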

ConfigMap

ConfigMap is one of kubernetes's most ingenious designs. Before introducing it, a question: what is the difference between the war package for production and the war package for test? The answer is the configuration files. Configuration determines an application's environment-specific properties, so an application can be split into two parts: the program and its configuration. Under docker's philosophy, test and production may differ only in configuration; the program, i.e. the image, must be identical. The test environment should run with the test configuration, and production with the production configuration. With plain docker we can achieve this by mounting a volume

# test
docker run -v /data/dev/application.properties:/u01/webapps/application.properties -d app
# production
docker run -v /data/prod/application.properties:/u01/webapps/application.properties -d app

In kubernetes this is implemented with ConfigMap, which is likewise a key-value structured file

Here is part of a jenkins ConfigMap

apiVersion: v1
data:
  apply_config.sh: |-
    mkdir -p /usr/share/jenkins/ref/secrets/;
    echo "false" > /usr/share/jenkins/ref/secrets/slave-to-master-security-kill-switch;
    cp -n /var/jenkins_config/config.xml /var/jenkins_home;
    cp -n /var/jenkins_config/jenkins.CLI.xml /var/jenkins_home;
    cp -n /var/jenkins_config/hudson.model.UpdateCenter.xml /var/jenkins_home;
  config.xml: |-
    <?xml version='1.0' encoding='UTF-8'?>
    <hudson>
      <disabledAdministrativeMonitors/>
      <version></version>
      <numExecutors>0</numExecutors>
      <mode>NORMAL</mode>
      ....
  hudson.model.UpdateCenter.xml: |-
    <?xml version='1.1' encoding='UTF-8'?>
    <sites>
      <site>
        <id>default</id>
        <url>https://updates.jenkins.io/update-center.json</url>
      </site>
    </sites>
  jenkins.CLI.xml: |-
    <?xml version='1.1' encoding='UTF-8'?>
    <jenkins.CLI>
      <enabled>false</enabled>
    </jenkins.CLI>
  plugins.txt: ""
kind: ConfigMap
metadata:
  name: jenkins
  namespace: poc

The data section defines the ConfigMap's payload. Note that the keys under data are all file names: each key can be mounted into the container as a file whose content is the key's value. Next, let's modify the code

import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import javax.servlet.http.HttpServletRequest;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.Map;

@RestController
@RequestMapping("example")
public class ExampleController {
    @Value("${kubernetes.demo.env.name:}")
    private String envName;
    @GetMapping("header")
    public Map<String, String> header(HttpServletRequest request) {
        Enumeration<String> headerNames = request.getHeaderNames();
        Map<String, String> headers = new HashMap<>();
        while (headerNames.hasMoreElements()) {
            String name = headerNames.nextElement();
            headers.put(name, request.getHeader(name));
        }
        headers.put("envName", envName);
        return headers;
    }
}

The code injects the configuration item kubernetes.demo.env.name

  • Prepare configMap.yaml
apiVersion: v1
data:
  application.properties: kubernetes.demo.env.name=dev
kind: ConfigMap
metadata:
  name: example-configmap
  namespace: poc
  • Load it into kubernetes with kubectl
kubectl apply -f configMap.yaml
  • Modify the application's Dockerfile so that it reads the configuration file from the specified path
FROM openjdk:8-jdk-alpine
MAINTAINER definesys.com
VOLUME /tmp
ADD kubernetes-demo-1.0-SNAPSHOT.jar app.jar
RUN echo "Asia/Shanghai" > /etc/timezone
EXPOSE 8080
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-Dfile.encoding=UTF-8","-Duser.timezone=Asia/Shanghai", "-Dspring.config.location=/app/", "-jar","app.jar"]

This adds the -Dspring.config.location=/app/ startup parameter, so Spring Boot reads application.properties from the /app/ directory

Rebuild the image

docker build -t 172.16.81.92:8000/poc/kubernetes-example-configmap:v1.0 .
  • Modify the Deployment and mount the ConfigMap into the container
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kubernetes-example
    tier: backend
  name: kubernetes-example
  namespace: poc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-example
      tier: backend
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: kubernetes-example
        tier: backend
    spec:
      containers:
        - image: 172.16.81.92:8000/poc/kubernetes-example-configmap:v1.0
          imagePullPolicy: IfNotPresent
          name: kubernetes-example
          volumeMounts:
            - mountPath: /app/
              name: configmap-data
          ports:
            - containerPort: 8080
              protocol: TCP
          resources: {}
      dnsPolicy: ClusterFirst
      volumes:
        - name: configmap-data
          configMap:
            name: example-configmap
      schedulerName: default-scheduler
status: {}
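After applying this Deployment you can verify that the ConfigMap landed where Spring Boot expects it (a sketch; assumes the modified Deployment is saved as app-deployment.yml, and the deploy/ shorthand for exec requires a reasonably recent kubectl):

kubectl apply -f app-deployment.yml
# should print: kubernetes.demo.env.name=dev
kubectl exec -n poc deploy/kubernetes-example -- cat /app/application.properties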

Jenkins

The deployment of an application requires at least the following yaml configuration files

  • Deployment
  • Service
  • Ingress

Writing these by hand is not only inefficient but also error-prone and hard to manage. Putting the configuration files inside each project is not recommended either, for the following reasons

  • Every project has to copy and modify its own configuration, which is cumbersome
  • Developers who don't understand k8s face an extra learning cost
  • When k8s is upgraded, the configuration format may change, and every project's files would need to be modified

We need a tool that generates the configuration files for us and deploys them to the kubernetes environment. Jenkins can drive the whole series of automated steps

#!/bin/bash

set -e

v_app_name=kubernetes-example
v_module=.
v_app_host=kubernetes-example.definesys.com
v_k8s_namespace=poc
#v_app_name=$appName
#v_module=$module
#v_app_host=${v_app_name}.fuyaogroup.com
#v_k8s_namespace='fone-application'
v_app_version=`date +"%Y%m%d%H%M%S"`
v_harbor_prefix='172.16.81.92:8000/poc/'

if [ "$v_app_host" == "" ]; then
  v_app_host=${v_app_name}.definesys.com
fi

echo "app name    ====>"$v_app_name
echo "app version ====>"$v_app_version
echo "module      ====>"$v_module
echo "workspace   ====>"$WORKSPACE
echo "profile     ====>"$v_profile
echo "app host    ====>"$v_app_host


v_workspace=$WORKSPACE/workspace

mkdir -p $v_workspace
cd $v_module


mvn clean package -Dmaven.test.skip

# temporary directory for the jar
v_build_directory_name=build
v_build_directory=$v_workspace/$v_build_directory_name
v_app_jar=target/*.jar

v_app_jar=`basename target/*.jar`

rm -rf $v_build_directory
mkdir -p $v_build_directory

cp -rf target/$v_app_jar $v_build_directory
cd $v_build_directory

#v_app_name=${v_app_jar%.*}
#v_app_name=${v_app_name%-*}

echo "app jar name  =====>"$v_app_jar
echo "app name    =====>"$v_app_name


# build the docker image
v_image_tag=$v_harbor_prefix$v_app_name:$v_app_version
cat 1>Dockerfile <<EOF
FROM openjdk:8-jdk-alpine
MAINTAINER definesys.com
VOLUME /tmp
ADD $v_app_jar app.jar
RUN echo "Asia/Shanghai" > /etc/timezone
EXPOSE 8080
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-Dfile.encoding=UTF-8","-Duser.timezone=Asia/Shanghai", "-jar","app.jar"]
EOF
docker build -t $v_image_tag . -f Dockerfile
docker push $v_image_tag
docker rmi -f $v_image_tag


# deploy to kubernetes
cat 1>app-deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  generation: 1
  labels:
    app: $v_app_name
    tier: backend
  name: $v_app_name
  namespace: $v_k8s_namespace
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: $v_app_name
      tier: backend
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: $v_app_name
        tier: backend
    spec:
      containers:
        - image: $v_image_tag
          imagePullPolicy: IfNotPresent
          name: $v_app_name
          ports:
            - containerPort: 8080
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
#      dnsConfig:
#        searches:
#          - k2-infrastructure.svc.cluster.local
      restartPolicy: Always
      imagePullSecrets:
      - name: harbor-registry
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: ${v_app_name}-svc
  namespace: $v_k8s_namespace
spec:
  ports:
    - name: app-port
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: $v_app_name
    tier: backend
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 500m
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
  creationTimestamp: null
  generation: 1
  name: ${v_app_name}-svc-ingress
  namespace: $v_k8s_namespace
spec:
  rules:
    - host: $v_app_host
      http:
        paths:
          - backend:
              serviceName: ${v_app_name}-svc
              servicePort: app-port
status:
  loadBalancer: {}
EOF

kubectl apply -f app-deployment.yaml --record
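The --record flag pays off later: kubernetes keeps a rollout history per Deployment, so a bad release can be undone with a single command:

# inspect past rollouts
kubectl rollout history deployment/kubernetes-example -n poc

# roll back to the previous revision
kubectl rollout undo deployment/kubernetes-example -n poc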
