Hello everyone, I'm Xiaocai. In previous articles we built a k8s cluster, talked about the Namespace in k8s, and learned how to use the Pod in k8s. If those were enough to chew on, the next course, the Pod controller, is just as hearty a dish!
~ After reading, remember to give me a like, a favorite, and a share!
This article mainly introduces the use of Pod controllers in k8s. Refer back to it whenever you need!
If it helps, don't forget to leave a like ❥
The WeChat public account has been opened; students who haven't followed yet, remember to follow!
Previous article review:
- "k8s cluster building" Don't let poverty kill your interest in learning k8s!
- million word warning-to get started with k8s, Pod should go first!
Now that we have pods, we need to manage them, and for that a controller is indispensable. So for what comes next, let Xiaocai take you all the way to the end!
Pod controller
1. Warm-up
We already know that the Pod is the smallest management unit in k8s. Think back to how we created pods before. Rack your brain and you may vaguely remember there were three ways! 1. imperative object management; 2. imperative object configuration; 3. declarative object configuration. If you can recall all three, you didn't learn it in vain! But in all three approaches we worked on the pod directly: whether created straight from a command or generated from a yaml configuration file, the Kind we declared was always Pod, as if the pod were all we knew. Today, Xiaocai will show you something different: we can also create pods through a Pod controller.
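For reference, the three earlier approaches look like this in practice (a minimal sketch; the pod name, file name, and namespace are illustrative):
# 1. Imperative object management: operate on the resource directly with a command
kubectl run nginx-pod --image=nginx:1.14.1 -n cbuc-test
# 2. Imperative object configuration: a command plus a configuration file
kubectl create -f nginx-pod.yaml
# 3. Declarative object configuration: describe the desired state and let k8s converge to it
kubectl apply -f nginx-pod.yaml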
1) Concept
What is a pod controller? We just said it can be used to create pods. Is that all? If so, why bother with a separate concept~
A Pod controller, as its name suggests, is used to control pods. It acts as a middle layer for pod management. With a pod controller, you only need to tell it how many pods you want and of what kind, and it will create pods that meet those conditions and keep every pod resource in the target state the user expects. If a pod resource fails during operation, the controller re-orchestrates the pod based on the specified policy.
The controller is equivalent to a housekeeper, which can better manage pods for us.
The biggest difference with pods created by a pod controller is this: a pod created directly stays gone once you delete it and is never rebuilt, while a pod created by a pod controller is recreated according to the specified policy after being deleted!
Pod controllers also come in many types. The controller types supported in k8s:
- ReplicaSet: ensures the number of replicas stays at the expected value, and supports scaling the number of pods and upgrading the image version
- Deployment: controls Pods by controlling ReplicaSets, and supports rolling upgrades and version rollbacks
- Horizontal Pod Autoscaler: automatically adjusts the number of pods horizontally according to cluster load
- DaemonSet: runs exactly one replica on each (or each specified) Node in the cluster; generally used for daemon-style tasks
- Job: the pods it creates exit as soon as their task completes, without restarting or rebuilding; used for one-off tasks
- CronJob: the pods it creates handle periodic tasks and do not need to run continuously in the background
- StatefulSet: manages stateful applications
With so many controllers, don't panic at the sight of them. We'll get to know them one by one~
2. Hands-on practice
1)ReplicaSet
ReplicaSet is abbreviated rs. Its main job is to keep a certain number of pods running normally. It continuously monitors the status of these pods, and once a pod fails, it restarts or rebuilds it. It also supports scaling the number of pods and upgrading the image version.
Let's take a look at the RS resource manifest:
apiVersion: apps/v1 # version
kind: ReplicaSet # type
metadata: # metadata
  name: # name
  namespace: # namespace
  labels: # labels
    key: value
spec: # detailed description
  replicas: # number of replicas
  selector: # selector; specifies which Pods this controller manages
    matchLabels: # labels matching rule
      app:
    matchExpressions: # expressions matching rule
      - key: xxx
        operator: xxx
        values: ['xxx', 'xxx']
  template: # template; when replicas are insufficient, Pod replicas are created from it
    metadata:
      labels:
        key: value
    spec:
      containers:
      - name:
        image:
        ports:
        - containerPort:
The properties that we need to know about here are as follows:
- spec.replicas: the expected number of pod replicas created by the rs; default is 1
- spec.selector: the selector. It establishes the relationship between the pod controller and the pods: labels are defined on the pod template and the selector on the controller, which determines which controller the pods belong to
- spec.template: the template the controller uses to create pods; its definition is the same as the pod definition covered in the previous article
Create RS
Let's write a yaml file, rs.yaml, and try to create an RS controller:
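The manifest itself was shown as an image in the original; here is a minimal sketch of what rs.yaml could look like, assuming the rs-ctrl name and cbuc-test namespace used in the commands below, two replicas (matching the two pods observed next), and the nginx:1.14.1 image mentioned later:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-ctrl
  namespace: cbuc-test
spec:
  replicas: 2            # two replicas, matching the two pods seen after creation
  selector:
    matchLabels:
      app: nginx-pod     # must match the template labels below
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.1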
After running kubectl create -f rs.yaml, we can see there are two pods, which shows the replica count took effect. And if we delete one pod, a new pod starts automatically after a short while!
Scaling
Since RS can control the number of pod replicas at creation time, it can of course also scale up and down dynamically at runtime.
Direct modification
We can directly edit the yaml file of rs through the following command
kubectl edit rs rs-ctrl -n cbuc-test
Change the replicas field to 3. After saving and exiting, you can see that the number of running pods has become 3.
Command modification
In addition to the above method of modifying yaml, we can also modify it directly by command
kubectl scale rs rs-ctrl --replicas=2 -n cbuc-test
The above command modifies the number of pod replicas with the help of the scale subcommand.
Image update
In addition to controlling the number of replicas, we can also upgrade the image. The image we used to run the Pod above is nginx:1.14.1.
If we want to upgrade the image version number, there are also two ways:
Direct modification
We can directly edit the yaml file of rs through the following command
kubectl edit rs rs-ctrl -n cbuc-test
After editing the image tag, save and exit; you can see that the image used by the running pods has changed.
Command modification
Besides modifying the yaml as above, we can also change it directly with a command:
kubectl set image rs rs-ctrl nginx=nginx:1.17.1 -n cbuc-test
The format is as follows:
kubectl set image rs <controller-name> <container-name>=<image-name>:<image-tag> -n <namespace>
Deleting the RS
If we don't want to use the controller anymore, the best way is to delete it. We can delete it through the resource manifest:
kubectl delete -f rs.yaml
You can also delete it directly
kubectl delete rs rs-ctrl -n cbuc-test
But we need to be clear that, by default, deleting the controller also deletes the pods it manages. Sometimes we only want to delete the controller and keep the pods; for that we add the option --cascade=false
kubectl delete rs rs-ctrl -n cbuc-test --cascade=false
2)Deployment
The Deployment controller is abbreviated deploy. It was introduced in kubernetes v1.2. This kind of controller does not manage pods directly; instead it manages pods indirectly through a ReplicaSet controller.
Compared with ReplicaSet, Deployment is strictly more powerful:
- supports all ReplicaSet features
- supports pausing and resuming the update process
- supports rolling upgrades and version rollbacks
Three great capabilities that see heavy use in development~ Let's take a look at how the resource manifest is configured:
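The template image from the original didn't survive; here is a sketch of the manifest with the Deployment-specific additions (the selector and template sections are the same as in the ReplicaSet manifest above):
apiVersion: apps/v1 # version
kind: Deployment # type
metadata:
  name:
  namespace:
spec:
  replicas: # number of replicas, same as RS
  revisionHistoryLimit: 3 # how many old revisions to keep for rollback, default 10
  paused: false # whether the deployment starts paused, default false
  progressDeadlineSeconds: 600 # deployment timeout in seconds, default 600
  strategy: # update strategy, explained in detail below
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  selector: # same as in ReplicaSet
  template: # same as in ReplicaSet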
From the resource manifest we can see that a Deployment has everything a ReplicaSet has, plus quite a few new attributes.
Create Deploy
Prepare a deploy resource manifest:
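Again the original file was shown as an image; a minimal deploy-ctrl.yaml sketch consistent with the commands that follow (the image and replica count are assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-ctrl
  namespace: cbuc-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.1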
Then run kubectl create -f deploy-ctrl.yaml and view the results:
Scaling
Scaling works the same way as with ReplicaSet. There are two methods, briefly repeated here.
Direct modification
kubectl edit deploy deploy-ctrl -n cbuc-test
Change the replicas field to 3. After saving and exiting, you can see that the number of running pods has become 3.
Command modification
kubectl scale deploy deploy-ctrl --replicas=2 -n cbuc-test
Image update
Deployment supports two update strategies: recreate update and rolling update, configured through the strategy field, which supports two attributes:
strategy: # the strategy for replacing old Pods with new ones; supports two attributes:
  type: # strategy type; supports two strategies
    Recreate: # kill all existing Pods before creating new ones
    RollingUpdate: # rolling update: kill a batch, start a batch; during the update, two versions of Pods coexist
  rollingUpdate: # takes effect when type is RollingUpdate; sets its parameters; supports two attributes
    maxUnavailable: # maximum number of unavailable Pods during the upgrade, default 25%
    maxSurge: # maximum number of Pods allowed to exceed the expected count during the upgrade, default 25%
Normally the rolling update is the friendlier option; the update process tends to feel seamless.
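To see a rolling update in action, change the image and watch the pods being replaced batch by batch (assuming the deploy-ctrl deployment from above):
kubectl set image deploy deploy-ctrl nginx=nginx:1.17.1 -n cbuc-test
kubectl get pods -n cbuc-test -w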
Version rollback
Deployment also supports pausing, resuming, and version rollback during the upgrade process, all through the kubectl rollout subcommand:
How to use:
- history: show upgrade history
  kubectl rollout history deploy deploy-ctrl -n cbuc-test
- pause: pause the version upgrade process
  kubectl rollout pause deploy deploy-ctrl -n cbuc-test
- restart: restart the version upgrade process
  kubectl rollout restart deploy deploy-ctrl -n cbuc-test
- resume: resume a paused version upgrade process
  kubectl rollout resume deploy deploy-ctrl -n cbuc-test
- status: show the current upgrade status
  kubectl rollout status deploy deploy-ctrl -n cbuc-test
- undo: roll back to the previous version
  kubectl rollout undo deploy deploy-ctrl -n cbuc-test
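As a usage note, undo also accepts a --to-revision flag, so you can jump to any revision listed by the history command (the revision number here is illustrative):
kubectl rollout undo deploy deploy-ctrl --to-revision=1 -n cbuc-test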
3)Horizontal Pod Autoscaler
Look at the name: this controller is abbreviated HPA. Previously we scaled pods manually, but that approach isn't smart: we would have to watch resource usage all the time and adjust the replica count ourselves, which is time-consuming and laborious~ What we want is a controller that monitors pod usage and adjusts the number of pods automatically. And k8s provides exactly that: the Horizontal Pod Autoscaler (HPA).
HPA obtains the utilization of each pod, compares it with the metrics defined in the HPA, calculates the specific number of replicas needed, and then adjusts the pods accordingly.
To monitor pod load we need metrics-server, so first we install metrics-server:
# install git
yum install -y git
# fetch metrics-server
git clone -b v0.3.6 https://github.com/kubernetes-incubator/metrics-server
# edit the metrics-server deployment
vim /root/metrics-server/deploy/1.8+/metrics-server-deployment.yaml
# add the following options
hostNetwork: true
image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6
args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
Then create it directly:
kubectl apply -f /root/metrics-server/deploy/1.8+/
After the installation completes, we can use the following command to view the resource usage of each node:
kubectl top node
View pod resource usage
kubectl top pod -n cbuc-test
Then we can create the HPA. Prepare the resource manifest:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-ctrl
  namespace: cbuc-test
spec:
  minReplicas: 1 # minimum number of Pods
  maxReplicas: 5 # maximum number of Pods
  targetCPUUtilizationPercentage: 3 # target CPU utilization
  scaleTargetRef: # info on the deploy to control
    apiVersion: apps/v1
    kind: Deployment
    name: deploy-ctrl
# create the hpa
[root@master /]# kubectl create -f hpa-deploy.yaml
# view the hpa
[root@master /]# kubectl get hpa -n cbuc-test
At this point we have created an HPA that can dynamically scale pods. We won't test it here; if you're interested, run a pressure test and see whether the result matches your expectation~
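If you do want to try the pressure test, one common approach (a sketch, assuming deploy-ctrl is exposed through a Service of the same name in cbuc-test) is to loop requests from a throwaway pod and watch the HPA react:
# generate load against the service (hypothetical service name: deploy-ctrl)
kubectl run load-gen -n cbuc-test --image=busybox -- /bin/sh -c "while true; do wget -q -O- http://deploy-ctrl; done"
# watch the HPA adjust the replica count
kubectl get hpa -n cbuc-test -w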
4)DaemonSet
The DaemonSet controller is abbreviated DS. Its job is to ensure that one replica runs on every (or every designated) node in the cluster. Typical scenarios are log collection and node monitoring.
If the function provided by a Pod is at the node level (each node needs and only needs one), then this type of Pod is suitable for creating with a DaemonSet type controller
Features:
- Whenever a node is added to the cluster, the specified Pod replica is also added on that node
- When a node is removed from the cluster, that Pod is recycled as well
Resource manifest template:
It's not hard to see that the manifest looks like Deployment's, so we might as well think of this controller as a special Deployment that automatically creates one Pod on every node for us~
A hands-on manifest:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: pc-daemonset
  namespace: dev
spec:
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
After creating this manifest, we can see that an nginx pod has been created on every Node~
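To verify, list the pods together with the node each one landed on; there should be exactly one per node:
kubectl get pods -n dev -o wide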
5)Job
Job, as its name implies, is a controller responsible for batch processing: short-lived one-off tasks (a specified number of tasks, each run only once before ending).
Features:
- When a pod created by the Job ends successfully, the Job records the number of successfully completed pods
- When the number of successfully completed pods reaches the specified count, the Job is complete
Resource List Template:
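The template image from the original didn't survive; here is a commented sketch of the Job-specific spec fields (all standard batch/v1 fields):
apiVersion: batch/v1 # version
kind: Job # type
metadata:
  name:
  namespace:
spec:
  completions: 1 # how many pods must finish successfully for the Job to complete, default 1
  parallelism: 1 # how many pods run in parallel, default 1
  activeDeadlineSeconds: 30 # deadline after which a running Job is terminated
  backoffLimit: 6 # retries allowed after a failure, default 6
  manualSelector: false # whether the selector may be set manually
  template: # pod template, same shape as the pod spec from the previous article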
The restart policy must be specified here, and it can only be Never or OnFailure, for the following reasons:
- If set to OnFailure, the Job restarts the container when the pod fails, rather than creating a new pod, and the failed count stays unchanged
- If set to Never, the Job creates a new pod when a pod fails; the failed pod does not disappear or restart, and the failed count increases by 1
- If it were set to Always, the container would keep restarting, which means the job task would run repeatedly. That is obviously wrong, so it cannot be set to Always
A hands-on manifest:
apiVersion: batch/v1
kind: Job
metadata:
  name: job-ctrl
  namespace: cbuc-test
  labels:
    app: job-ctrl
spec:
  manualSelector: true
  selector:
    matchLabels:
      app: job-pod
  template:
    metadata:
      labels:
        app: job-pod
    spec:
      restartPolicy: Never
      containers:
      - name: test-pod
        image: cbuc/test/java:v1.0
        command: ["/bin/sh", "-c", "for i in 9 8 7 6 5 4 3 2 1; do echo $i; sleep 2; done"]
By watching the pod status, you can see that after the task completes, the pod enters the Completed state.
6)CronJob
The CronJob controller is abbreviated CJ. It uses the Job controller as its management unit and manages pod resource objects through it. A task defined by a Job controller runs immediately after creation, while a CronJob controls when the task runs and lets it run repeatedly.
Resource list template:
The concurrent execution strategy has the following types:
- Allow: allows Jobs to run concurrently
- Forbid: forbids concurrent runs; if the previous run has not finished, the next run is skipped
- Replace: cancels the currently running Job and replaces it with a new one
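The policy goes in the concurrencyPolicy field of the CronJob spec, next to the cron-style schedule; a small sketch (the image and command are placeholders):
spec:
  schedule: "*/1 * * * *" # cron expression: minute hour day month weekday
  concurrencyPolicy: Forbid # skip the next run if the previous one is still going
  jobTemplate: # the Job template used for each scheduled run
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: test
            image: busybox
            command: ["/bin/sh", "-c", "date"]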
A hands-on manifest:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cj-ctrl
  namespace: cbuc-test
  labels:
    controller: cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    metadata:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: test
            image: cbuc/test/java:v1.0
            command: ["/bin/sh", "-c", "for i in 9 8 7 6 5 4 3 2 1; do echo $i; sleep 3; done"]
From the results, we have successfully implemented the scheduled task, executed once every minute~
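If you want to watch it live, follow the Jobs the CronJob spawns; a new one should appear every minute:
kubectl get cronjob,job -n cbuc-test -w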
END
In this article we introduced the use of Pod controllers and got hands-on with 6 types of them. How many can you still remember after reading? K8s isn't over yet: we still have Service, Ingress, and Volume to cover! The road is long; Xiaocai will keep exploring with you~
If you work harder today, you'll have fewer favors to beg for tomorrow! I am Xiaocai, a man studying alongside you.
💋
The WeChat public account has been opened; students who haven't followed yet, remember to follow!