
Welcome to my GitHub

https://github.com/zq2599/blog_demos

Content: classification and summary of all original articles and their supporting source code, covering Java, Docker, Kubernetes, DevOps, etc.;

Links to series of articles

  1. kubebuilder in action (1): preparation
  2. kubebuilder in action (2): first experience with kubebuilder
  3. kubebuilder in action (3)
  4. kubebuilder in action (4): operator requirement description and design
  5. kubebuilder in action (5): operator coding
  6. kubebuilder in action (6): build, deploy and run
  7. kubebuilder in action (7): webhook
  8. kubebuilder in action (8): knowledge points

Overview of this article

  • This is the sixth article of the "kubebuilder in action" series. The coding work was finished in the previous articles, so now it is time to verify the functionality. Please make sure your docker and kubernetes environments are working, and then let's complete the following operations together:
  • Deploy CRD
  • Run Controller locally
  • Create elasticweb resource object through yaml file
  • Verify that elasticweb functions normally through logs and kubectl commands
  • The browser accesses the web to verify whether the business service is normal
  • Modify singlePodQPS to see if elasticweb automatically adjusts the number of pods
  • Modify totalQPS to see if elasticweb automatically adjusts the number of pods
  • Delete elasticweb and see that the related service and deployment are automatically deleted
  • Build the Controller image, run the Controller in kubernetes, and verify that the above functions are normal
  • These seemingly simple deployment and verification operations add up to quite a list... well, no complaints, let's start now;

Deploy CRD

  • Enter the directory where the Makefile is located and execute the command <font color="blue">make install</font> to deploy the CRD to kubernetes:
zhaoqin@zhaoqindeMBP-2 elasticweb % make install
/Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
kustomize build config/crd | kubectl apply -f -
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/elasticwebs.elasticweb.com.bolingcavalry configured
  • As the output above shows, the actual operation is to use kustomize to merge the yaml resources under <font color="blue">config/crd</font> and apply them to kubernetes;
  • You can use the command <font color="blue">kubectl api-versions</font> to verify that the CRD was deployed successfully:
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl api-versions|grep elasticweb
elasticweb.com.bolingcavalry/v1
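  • Besides kubectl api-versions, you can also query the CRD object directly; the CRD name below is taken from the make install output above:

```shell
kubectl get crd elasticwebs.elasticweb.com.bolingcavalry
```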

Run Controller locally

  • First, try the simplest way to verify the Controller's functionality. As shown in the figure below, my MacBook is the development environment, and you can run the Controller code locally directly with the Makefile in the elasticweb project:

(figure: running the controller locally in the development environment)

  • Enter the directory where the Makefile is located and execute the command <font color="blue">make run</font> to compile and run the controller:
zhaoqin@zhaoqindeMBP-2 elasticweb % pwd
/Users/zhaoqin/github/blog_demos/kubebuilder/elasticweb
zhaoqin@zhaoqindeMBP-2 elasticweb % make run
/Users/zhaoqin/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
/Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
go run ./main.go
2021-02-20T20:46:16.774+0800    INFO    controller-runtime.metrics      metrics server is starting to listen    {"addr": ":8080"}
2021-02-20T20:46:16.774+0800    INFO    setup   starting manager
2021-02-20T20:46:16.775+0800    INFO    controller-runtime.controller   Starting EventSource    {"controller": "elasticweb", "source": "kind source: /, Kind="}
2021-02-20T20:46:16.776+0800    INFO    controller-runtime.manager      starting metrics server {"path": "/metrics"}
2021-02-20T20:46:16.881+0800    INFO    controller-runtime.controller   Starting Controller     {"controller": "elasticweb"}
2021-02-20T20:46:16.881+0800    INFO    controller-runtime.controller   Starting workers        {"controller": "elasticweb", "worker count": 1}

Create a new elasticweb resource object

  • The Controller responsible for processing elasticweb is now running, so let's create an elasticweb resource object, using a yaml file;
  • In the <font color="blue">config/samples</font> directory, kubebuilder created the demo file <font color="red">elasticweb_v1_elasticweb.yaml</font> for us, but its spec does not contain the four fields we defined; change the content to the following:
apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    name: dev
---
apiVersion: elasticweb.com.bolingcavalry/v1
kind: ElasticWeb
metadata:
  namespace: dev
  name: elasticweb-sample
spec:
  # Add fields here
  image: tomcat:8.0.18-jre8
  port: 30003
  singlePodQPS: 500
  totalQPS: 600
  • The parameters in the above configuration are as follows:
  • The namespace used is <font color="blue">dev</font>
  • The application deployed in this test is tomcat
  • The service exposes tomcat on the host's port <font color="blue">30003</font>
  • Assuming that a single pod can support 500 QPS, and the QPS required from outside is 600
  • Execute the command <font color="blue">kubectl apply -f config/samples/elasticweb_v1_elasticweb.yaml</font> to create an elasticweb instance in kubernetes:
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl apply -f config/samples/elasticweb_v1_elasticweb.yaml
namespace/dev created
elasticweb.elasticweb.com.bolingcavalry/elasticweb-sample created
  • Switching to the controller window, I found that many logs had been printed. Analyzing them shows that the Reconcile method was executed twice: the deployment and service resources were created during the first execution:
2021-02-21T10:03:57.108+0800    INFO    controllers.ElasticWeb  1. start reconcile logic        {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.108+0800    INFO    controllers.ElasticWeb  3. instance : Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [nil]       {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.210+0800    INFO    controllers.ElasticWeb  4. deployment not exists        {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.313+0800    INFO    controllers.ElasticWeb  set reference   {"func": "createService"}
2021-02-21T10:03:57.313+0800    INFO    controllers.ElasticWeb  start create service    {"func": "createService"}
2021-02-21T10:03:57.364+0800    INFO    controllers.ElasticWeb  create service success  {"func": "createService"}
2021-02-21T10:03:57.365+0800    INFO    controllers.ElasticWeb  expectReplicas [2]      {"func": "createDeployment"}
2021-02-21T10:03:57.365+0800    INFO    controllers.ElasticWeb  set reference   {"func": "createDeployment"}
2021-02-21T10:03:57.365+0800    INFO    controllers.ElasticWeb  start create deployment {"func": "createDeployment"}
2021-02-21T10:03:57.382+0800    INFO    controllers.ElasticWeb  create deployment success       {"func": "createDeployment"}
2021-02-21T10:03:57.382+0800    INFO    controllers.ElasticWeb  singlePodQPS [500], replicas [2], realQPS[1000] {"func": "updateStatus"}
2021-02-21T10:03:57.407+0800    DEBUG   controller-runtime.controller   Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800    INFO    controllers.ElasticWeb  1. start reconcile logic        {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800    INFO    controllers.ElasticWeb  3. instance : Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [1000]      {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800    INFO    controllers.ElasticWeb  9. expectReplicas [2], realReplicas [2] {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800    INFO    controllers.ElasticWeb  10. return now  {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800    DEBUG   controller-runtime.controller   Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
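  • The expectReplicas [2] in the log comes from a simple ceiling division: the operator needs just enough pods so that replicas × singlePodQPS ≥ totalQPS. The controller implements this in Go (see the coding article); the arithmetic can be sketched in a few lines of plain Python, with an illustrative function name rather than the operator's actual code:

```python
import math

def expect_replicas(total_qps: int, single_pod_qps: int) -> int:
    """Smallest pod count such that replicas * single_pod_qps >= total_qps."""
    return math.ceil(total_qps / single_pod_qps)

# Values from this article (the first case matches the log above):
print(expect_replicas(600, 500))   # totalQPS=600, singlePodQPS=500   -> 2
print(expect_replicas(600, 800))   # after raising singlePodQPS to 800 -> 1
print(expect_replicas(2600, 800))  # after raising totalQPS to 2600    -> 4
```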
  • Then use kubectl get commands to examine the resource objects in detail; everything is as expected: elasticweb, service, deployment, and pod are all normal:
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get elasticweb -n dev                                 
NAME                AGE
elasticweb-sample   35s
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get service -n dev                                    
NAME                TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
elasticweb-sample   NodePort   10.107.177.158   <none>        8080:30003/TCP   41s
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get deployment -n dev                                 
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
elasticweb-sample   2/2     2            2           46s
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev                                        
NAME                                 READY   STATUS    RESTARTS   AGE
elasticweb-sample-56fc5848b7-l5thk   1/1     Running   0          50s
elasticweb-sample-56fc5848b7-lqjk5   1/1     Running   0          50s

Verify the business function in the browser

  • The docker image used in this deployment is tomcat, so verification is simple: if the default page opens and you see the cat, tomcat has started successfully. The IP address of my kubernetes host is <font color="blue">192.168.50.75</font>, so visit <font color="blue">http://192.168.50.75:30003</font> with a browser; as shown in the figure below, the business function is normal:

(figure: tomcat default page in the browser)

Modify the QPS of a single Pod

  • Optimizations of your own service or changes in external dependencies (such as adding a cache or scaling the database) may increase the QPS that a single pod of the current service can support. Suppose the single-pod QPS rises from 500 to 800; let's see whether our Operator adjusts automatically (the total QPS is 600, so the number of pods should drop from 2 to 1):
  • Add a new file named <font color="red">update_single_pod_qps.yaml</font> in the <font color="blue">config/samples/</font> directory, with the following content:
spec:
  singlePodQPS: 800
  • Execute the following command to update the QPS of a single Pod from 500 to 800 (note that the parameter <font color="red">type</font> is very important, don’t miss it):
kubectl patch elasticweb elasticweb-sample \
-n dev \
--type merge \
--patch "$(cat config/samples/update_single_pod_qps.yaml)"
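  • If you'd rather not create a file for such a small change, kubectl patch also accepts the merge patch inline; the following is equivalent to the command above:

```shell
kubectl patch elasticweb elasticweb-sample \
-n dev \
--type merge \
--patch '{"spec":{"singlePodQPS":800}}'
```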
  • Now look at the controller log. As shown in the figure below, red box 1 shows that the spec has been updated, and red box 2 shows the pod count recalculated with the latest parameters, which matches expectations:

(figure: controller log after updating singlePodQPS)

  • Check the pods with the kubectl get command; the count has dropped to one:
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev                                                                                       
NAME                                 READY   STATUS    RESTARTS   AGE
elasticweb-sample-56fc5848b7-l5thk   1/1     Running   0          30m
  • Remember to use the browser to check that tomcat is still working;

Modify the total QPS

  • External QPS also changes frequently, and our operator needs to adjust the number of pods promptly according to the total QPS to guarantee overall service quality. Next, let's modify the total QPS and see whether the operator responds:
  • Add a new file named <font color="red">update_total_qps.yaml</font> in the <font color="blue">config/samples/</font> directory, with the following content:
spec:
  totalQPS: 2600
  • Execute the following command to update the total QPS from 600 to 2600 (note that the parameter <font color="red">type</font> is very important, don’t miss it):
kubectl patch elasticweb elasticweb-sample \
-n dev \
--type merge \
--patch "$(cat config/samples/update_total_qps.yaml)"
  • Now look at the controller log. As shown in the figure below, red box 1 shows that the spec has been updated, and red box 2 shows the pod count recalculated with the latest parameters, which matches expectations:

(figure: controller log after updating totalQPS)

  • Use the kubectl get command to check the pods. The count has grown to 4, and four pods can support 3200 QPS, which satisfies the current requirement of 2600:
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
NAME                                 READY   STATUS    RESTARTS   AGE
elasticweb-sample-56fc5848b7-8n7tq   1/1     Running   0          8m22s
elasticweb-sample-56fc5848b7-f2lpb   1/1     Running   0          8m22s
elasticweb-sample-56fc5848b7-l5thk   1/1     Running   0          48m
elasticweb-sample-56fc5848b7-q8p5f   1/1     Running   0          8m22s
  • Remember to use the browser to check that tomcat is still working;
  • You're probably thinking that adjusting the pod count this way is rather crude. Well... you're right, it is crude. But you can develop your own application that collects the current QPS and calls client-go to update the elasticweb's totalQPS, letting the operator adjust the pod count in time; that barely counts as automatic adjustment... right?

Delete verification

  • Currently, the dev namespace contains service, deployment, pod, and elasticweb resource objects. To delete all of them, you only need to delete the elasticweb, because the service and deployment are associated with it; the code that sets this association is shown in the red box below:

(figure: code that associates the service and deployment with the elasticweb object)
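  • The deletion cascades because, when the controller creates the service and deployment, it sets the elasticweb object as their owner. On the generated deployment, this shows up as an ownerReferences entry roughly like the fragment below (the uid is a placeholder; kubernetes assigns the real value):

```yaml
metadata:
  ownerReferences:
    - apiVersion: elasticweb.com.bolingcavalry/v1
      kind: ElasticWeb
      name: elasticweb-sample
      uid: 00000000-0000-0000-0000-000000000000   # placeholder, assigned by kubernetes
      controller: true
      blockOwnerDeletion: true
```

  • When the owner object is deleted, the kubernetes garbage collector removes all dependents that reference it, which is why deleting the elasticweb also removes the service, deployment, and pods.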

  • Execute the command to delete elasticweb:
kubectl delete elasticweb elasticweb-sample -n dev
  • Check the other resources; they have all been deleted automatically:
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl delete elasticweb elasticweb-sample -n dev
elasticweb.elasticweb.com.bolingcavalry "elasticweb-sample" deleted
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev                            
NAME                                 READY   STATUS        RESTARTS   AGE
elasticweb-sample-56fc5848b7-9lcww   1/1     Terminating   0          45s
elasticweb-sample-56fc5848b7-n7p7f   1/1     Terminating   0          45s
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
NAME                                 READY   STATUS        RESTARTS   AGE
elasticweb-sample-56fc5848b7-n7p7f   0/1     Terminating   0          73s
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
No resources found in dev namespace.
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get deployment -n dev
No resources found in dev namespace.
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get service -n dev   
No resources found in dev namespace.
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get namespace dev 
NAME   STATUS   AGE
dev    Active   97s

Build image

  1. So far we have verified all the functions by running the controller in the development environment. In a real production environment, though, the controller is not independent of kubernetes like this; it runs inside kubernetes as a pod. Next, let's compile and build the controller code into a docker image and run it on kubernetes;
  2. The first thing to do is press <font color="blue">Ctrl+C</font> in the previous controller console to stop that controller;
  3. There is one prerequisite: you need an image registry that kubernetes can access, such as a Harbor instance in your LAN or the public hub.docker.com. For convenience I chose hub.docker.com; to use it, you need a registered hub.docker.com account;
  4. On the kubebuilder computer, open a console and execute the <font color="blue">docker login</font> command; enter your hub.docker.com account and password as prompted. You can then push images to hub.docker.com with docker push from this console (the site's network is quite poor, and it may take several attempts to log in successfully);
  5. Execute the following command to build the docker image and push it to hub.docker.com; the image name is <font color="blue">bolingcavalry/elasticweb:002</font>:
make docker-build docker-push IMG=bolingcavalry/elasticweb:002
  6. The network connection to hub.docker.com is unusually poor: docker on the kubebuilder computer must be configured with a registry mirror, and if the above command times out, please retry a few times. In addition, many go module dependencies are downloaded during the build, which also requires patience and easily runs into network problems requiring multiple retries. Therefore, it is best to use a Harbor service built in your LAN;
  7. The output after the command finally succeeds is as follows:
zhaoqin@zhaoqindeMBP-2 elasticweb % make docker-build docker-push IMG=bolingcavalry/elasticweb:002
/Users/zhaoqin/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
/Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
go test ./... -coverprofile cover.out
?       elasticweb      [no test files]
?       elasticweb/api/v1       [no test files]
ok      elasticweb/controllers  8.287s  coverage: 0.0% of statements
docker build . -t bolingcavalry/elasticweb:002
[+] Building 146.8s (17/17) FINISHED                                                                                                                                                                                                  
 => [internal] load build definition from Dockerfile                                                                                                                                                                             0.1s
 => => transferring dockerfile: 37B                                                                                                                                                                                              0.0s
 => [internal] load .dockerignore                                                                                                                                                                                                0.0s
 => => transferring context: 2B                                                                                                                                                                                                  0.0s
 => [internal] load metadata for gcr.io/distroless/static:nonroot                                                                                                                                                                1.8s
 => [internal] load metadata for docker.io/library/golang:1.13                                                                                                                                                                   0.7s
 => [builder 1/9] FROM docker.io/library/golang:1.13@sha256:8ebb6d5a48deef738381b56b1d4cd33d99a5d608e0d03c5fe8dfa3f68d41a1f8                                                                                                     0.0s
 => [stage-1 1/3] FROM gcr.io/distroless/static:nonroot@sha256:b89b98ea1f5bc6e0b48c8be6803a155b2a3532ac6f1e9508a8bcbf99885a9152                                                                                                  0.0s
 => [internal] load build context                                                                                                                                                                                                0.0s
 => => transferring context: 14.51kB                                                                                                                                                                                             0.0s
 => CACHED [builder 2/9] WORKDIR /workspace                                                                                                                                                                                      0.0s
 => CACHED [builder 3/9] COPY go.mod go.mod                                                                                                                                                                                      0.0s
 => CACHED [builder 4/9] COPY go.sum go.sum                                                                                                                                                                                      0.0s
 => CACHED [builder 5/9] RUN go mod download                                                                                                                                                                                     0.0s
 => CACHED [builder 6/9] COPY main.go main.go                                                                                                                                                                                    0.0s
 => CACHED [builder 7/9] COPY api/ api/                                                                                                                                                                                          0.0s
 => [builder 8/9] COPY controllers/ controllers/                                                                                                                                                                                 0.1s
 => [builder 9/9] RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -a -o manager main.go                                                                                                                      144.5s
 => CACHED [stage-1 2/3] COPY --from=builder /workspace/manager .                                                                                                                                                                0.0s
 => exporting to image                                                                                                                                                                                                           0.0s
 => => exporting layers                                                                                                                                                                                                          0.0s
 => => writing image sha256:622d30aa44c77d93db4093b005fce86b39d5ba5c6cd29f1fb2accb7e7f9b23b8                                                                                                                                     0.0s
 => => naming to docker.io/bolingcavalry/elasticweb:002                                                                                                                                                                          0.0s
docker push bolingcavalry/elasticweb:002
The push refers to repository [docker.io/bolingcavalry/elasticweb]
eea77d209b68: Layer already exists 
8651333b21e7: Layer already exists 
002: digest: sha256:c09ab87f6fce3d85f1fda0ffe75ead9db302a47729aefd3ef07967f2b99273c5 size: 739
  8. Check hub.docker.com; as shown below, the new image has been uploaded, so any machine with Internet access can pull it for local use:

(figure: the new image on hub.docker.com)

  9. With the image ready, execute the following command to deploy the controller into the kubernetes environment:
make deploy IMG=bolingcavalry/elasticweb:002
  10. Next, create an elasticweb resource object as before, and verify that all resources are created successfully:
zhaoqin@zhaoqindeMBP-2 elasticweb % make deploy IMG=bolingcavalry/elasticweb:002
/Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
cd config/manager && kustomize edit set image controller=bolingcavalry/elasticweb:002
kustomize build config/default | kubectl apply -f -
namespace/elasticweb-system created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/elasticwebs.elasticweb.com.bolingcavalry configured
role.rbac.authorization.k8s.io/elasticweb-leader-election-role created
clusterrole.rbac.authorization.k8s.io/elasticweb-manager-role created
clusterrole.rbac.authorization.k8s.io/elasticweb-proxy-role created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/elasticweb-metrics-reader created
rolebinding.rbac.authorization.k8s.io/elasticweb-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/elasticweb-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/elasticweb-proxy-rolebinding created
service/elasticweb-controller-manager-metrics-service created
deployment.apps/elasticweb-controller-manager created
zhaoqin@zhaoqindeMBP-2 elasticweb % 
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl apply -f config/samples/elasticweb_v1_elasticweb.yaml 
namespace/dev created
elasticweb.elasticweb.com.bolingcavalry/elasticweb-sample created
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get service -n dev  
NAME                TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
elasticweb-sample   NodePort   10.96.234.7   <none>        8080:30003/TCP   13s
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get deployment -n dev
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
elasticweb-sample   2/2     2            2           18s
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev     
NAME                                 READY   STATUS    RESTARTS   AGE
elasticweb-sample-56fc5848b7-559lw   1/1     Running   0          22s
elasticweb-sample-56fc5848b7-hp4wv   1/1     Running   0          22s
  11. That is not enough! There is one more important thing to check: the controller's log. First, see which pods are running:
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pods --all-namespaces
NAMESPACE           NAME                                             READY   STATUS    RESTARTS   AGE
dev                 elasticweb-sample-56fc5848b7-559lw               1/1     Running   0          68s
dev                 elasticweb-sample-56fc5848b7-hp4wv               1/1     Running   0          68s
elasticweb-system   elasticweb-controller-manager-5795d4d98d-t6jvc   2/2     Running   0          98s
kube-system         coredns-7f89b7bc75-5pdwc                         1/1     Running   15         20d
kube-system         coredns-7f89b7bc75-nvbvm                         1/1     Running   15         20d
kube-system         etcd-hedy                                        1/1     Running   15         20d
kube-system         kube-apiserver-hedy                              1/1     Running   15         20d
kube-system         kube-controller-manager-hedy                     1/1     Running   16         20d
kube-system         kube-flannel-ds-v84vc                            1/1     Running   22         20d
kube-system         kube-proxy-hlppx                                 1/1     Running   15         20d
kube-system         kube-scheduler-hedy                              1/1     Running   16         20d
test-clientset      client-test-deployment-7677cc9669-kd7l7          1/1     Running   9          9d
test-clientset      client-test-deployment-7677cc9669-kt5rv          1/1     Running   9          9d
  12. The controller's pod name is <font color="blue">elasticweb-controller-manager-5795d4d98d-t6jvc</font>; execute the following command to view its log. The extra <font color="blue">-c manager</font> parameter is needed because this pod contains two containers, and you must specify the right container to see the log:
kubectl logs -f \
elasticweb-controller-manager-5795d4d98d-t6jvc \
-c manager \
-n elasticweb-system
  13. The familiar business logs appear again:
2021-02-21T08:52:27.064Z        INFO    controllers.ElasticWeb  1. start reconcile logic        {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.064Z        INFO    controllers.ElasticWeb  3. instance : Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [nil]       {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.064Z        INFO    controllers.ElasticWeb  4. deployment not exists        {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.064Z        INFO    controllers.ElasticWeb  set reference   {"func": "createService"}
2021-02-21T08:52:27.064Z        INFO    controllers.ElasticWeb  start create service    {"func": "createService"}
2021-02-21T08:52:27.107Z        INFO    controllers.ElasticWeb  create service success  {"func": "createService"}
2021-02-21T08:52:27.107Z        INFO    controllers.ElasticWeb  expectReplicas [2]      {"func": "createDeployment"}
2021-02-21T08:52:27.107Z        INFO    controllers.ElasticWeb  set reference   {"func": "createDeployment"}
2021-02-21T08:52:27.107Z        INFO    controllers.ElasticWeb  start create deployment {"func": "createDeployment"}
2021-02-21T08:52:27.119Z        INFO    controllers.ElasticWeb  create deployment success       {"func": "createDeployment"}
2021-02-21T08:52:27.119Z        INFO    controllers.ElasticWeb  singlePodQPS [500], replicas [2], realQPS[1000] {"func": "updateStatus"}
2021-02-21T08:52:27.198Z        DEBUG   controller-runtime.controller   Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
2021-02-21T08:52:27.198Z        INFO    controllers.ElasticWeb  1. start reconcile logic        {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.198Z        INFO    controllers.ElasticWeb  3. instance : Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [1000]      {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.198Z        INFO    controllers.ElasticWeb  9. expectReplicas [2], realReplicas [2] {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.198Z        INFO    controllers.ElasticWeb  10. return now  {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.198Z        DEBUG   controller-runtime.controller   Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
  14. Use the browser to verify that tomcat started successfully;

Uninstall and clean up

  • When you have finished experimenting, if you want to clean up everything created earlier (note that this cleans up the <font color="red">resources</font>, not the <font color="blue">resource objects</font>), execute the following command:
make uninstall
  • At this point, the whole process of operator design, development, deployment, and verification is complete. I hope this article can serve as a useful reference for your own operator development;

You are not alone: Xinchen's original works are with you all the way

  1. Java series
  2. Spring series
  3. Docker series
  4. kubernetes series
  5. database + middleware series
  6. DevOps series

Welcome to follow the WeChat official account: Programmer Xinchen (程序员欣宸)

Search "程序员欣宸" on WeChat. I am Xinchen, looking forward to traveling the Java world with you...
https://github.com/zq2599/blog_demos
