The Grievances Between Docker and k8s (8): Looking Back on Kubernetes


Having walked through how to actually deploy a project on Kubernetes, in this final article of the series we take an overview of the Kubernetes cluster as a whole and then summarize some of its deeper features.

Kubernetes overview

The following figure gives a structural overview of Kubernetes.

Among the functional modules shown in the figure, a few have not yet appeared in this series:

- ConfigMap

A ConfigMap stores user-defined configuration data. Internally it is exposed through Kubernetes' volume projection mechanism, which is essentially a way of mounting a Volume. This lets the same application image be reused while different behavior is achieved purely through configuration: the user packages the application as a container image and injects configuration at container creation time through environment variables or externally mounted files.
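As a minimal sketch of both injection styles (the names `app-config` and `demo-pod` are made up for illustration), a ConfigMap can be consumed as an environment variable and as a mounted file in the same Pod:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config           # hypothetical name
data:
  LOG_LEVEL: "debug"         # consumed as an env var below
  app.properties: |          # consumed as a mounted file below
    server.port=8080
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: demo
      image: nginx:1.21
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
      volumeMounts:
        - name: config-volume
          mountPath: /etc/app   # app.properties appears here
  volumes:
    - name: config-volume
      configMap:
        name: app-config
```

Changing the ConfigMap and restarting the Pod changes the application's behavior without rebuilding the image, which is exactly the reuse the text describes.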


- Secret

The Secret object type stores sensitive information such as passwords, OAuth tokens, and SSH keys. Keeping this information in a Secret is more secure and flexible than putting it in a Pod definition, a container image, or a ConfigMap. Secret is a standard Kubernetes resource object, and its usage is very similar to ConfigMap. We can also apply access control to Secrets to prevent confidential data from being read by unauthorized parties.
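A minimal Secret manifest looks almost identical to a ConfigMap, except that values under `data` must be base64-encoded (note this is encoding, not encryption; the name `db-credentials` and the values are made up for illustration):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # hypothetical name
type: Opaque
data:
  username: YWRtaW4=         # base64 of "admin"
  password: czNjcmV0         # base64 of "s3cret"
```

A Pod references it the same way it would a ConfigMap, e.g. via `secretKeyRef` in an `env` entry or by mounting it as a volume.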

- PV / PVC

PersistentVolume (PV) and PersistentVolumeClaim (PVC) are the Kubernetes implementation of persistent data volumes. They are the core of StatefulSet and the standard means of giving a Pod persistent storage. Kubernetes splits the concept into PV and PVC to decouple how storage is provided from how it is consumed.
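The decoupling is visible in the manifests: an administrator declares a PV describing available storage, and an application declares a PVC describing what it needs; Kubernetes binds the two. A sketch (the `hostPath` backend is for local experiments only, and the names are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:                  # local-testing backend only;
    path: /data/demo         # production would use NFS, cloud disks, or a CSI driver
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi           # the claim only states requirements;
                             # it never names a specific volume
```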

Beyond what has been mentioned in these articles, a Kubernetes cluster is far more complex than what we have seen so far, and there is much more waiting to be explored.

Here we summarize these more advanced features, which also serves as a roadmap for further in-depth study of Kubernetes.

Kubernetes components

The servers (hosts) we normally use during development are called Nodes in a Kubernetes cluster.

A Kubernetes cluster also has one or more Master nodes that control the worker hosts to form the cluster, and the core scheduling logic of Kubernetes essentially lives on the Master.

The main functionality of Kubernetes is built from five major components:

  1. kubelet: runs on every Node and manages the containers on that Node, carrying out the scheduling decisions made by Kubernetes
  2. ControllerManager: the core of the controller pattern mentioned earlier; it runs the control loops for all controllers in the cluster
  3. API Server: the service that handles API requests in the cluster. Every kubectl command we have been writing is actually a request sent to the API Server, which processes and forwards it internally
  4. Scheduler: responsible for scheduling in Kubernetes. For example, a controller only decides the orchestration of Pods; the actual placement decision is made by the Scheduler, which then hands the work to the kubelet for execution
  5. etcd: a distributed key-value store originally developed by CoreOS (a company later acquired by Red Hat). It holds all configuration state in the Kubernetes cluster, such as the names, IPs, Secrets, and ConfigMaps of all cluster objects. Relying on its consensus algorithm, it returns this configuration data quickly and consistently, which makes it a core component of Kubernetes and, in effect, its heart
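A quick way to see these components on a real cluster is through kubectl itself; this sketch assumes a working kubectl context, and the exact output varies by cluster:

```shell
# List the Nodes that each kubelet has registered with the API Server
kubectl get nodes -o wide

# On many clusters the control-plane components (API Server, Scheduler,
# ControllerManager, etcd) themselves run as Pods in the kube-system namespace
kubectl get pods -n kube-system
```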

Customizable features

In addition to its powerful built-in components, Kubernetes gives users a high degree of freedom.

To achieve this, Kubernetes exposes three public interfaces:

- CNI (Container Network Interface): defines how all networking in a Kubernetes cluster is wired together; the entire cluster network is implemented through this interface. Any network plug-in that implements this interface can serve as the cluster's network provider. There are many small plug-ins covering host routing-table configuration, network discovery, packet forwarding, and so on, and they can be combined freely, so users can customize these functions to their own needs

- CSI (Container Storage Interface): defines the specification for cluster persistence. Any storage system that implements this interface can serve as a persistence plug-in for Kubernetes
- CRI (Container Runtime Interface): defines how Kubernetes talks to the container runtime. By default the cluster runs containers on Docker, but users can freely swap in any other container project that implements this interface, such as the previously mentioned containerd and rkt
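To make the runtime choice concrete: recent Kubernetes versions offer the RuntimeClass resource, which lets a Pod select a specific CRI handler. A hedged sketch (the handler name `runsc`, gVisor's runtime, only works if the node's runtime is actually configured with such a handler):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor               # hypothetical class name
handler: runsc               # must match a handler configured on the node
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: gvisor   # this Pod runs under the chosen runtime
  containers:
    - name: app
      image: nginx:1.21
```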

Here is an interesting point: CRI.

The default container runtime of Kubernetes is Docker, but because the two projects competed in the early days, Docker does not actually conform to the CRI specification that Kubernetes defined. So what to do?

To solve this problem, Kubernetes wrote a special shim component for Docker called dockershim, which translates CRI requests into calls Docker understands; Docker itself ultimately drives containers on Linux through the OCI specification (yes, the specification from the OCI foundation mentioned in Part 2 of this series). But this component has always been maintained by the Kubernetes project itself: whenever Docker ships new functionality, Kubernetes has to update dockershim to keep up.

Hence the recent news: Kubernetes will deprecate the dockershim component in next year's v1.20 release and remove it in a later version, which means that from then on Kubernetes will no longer fully track Docker's updates as a container runtime.

In practice, this may have little impact on ordinary developers. The worst case is that we have to move our workloads from Docker to another container runtime supported by Kubernetes.

Moreover, according to announcements from various cloud platforms during this period, they will provide corresponding migration paths. For example, we can keep supplying Docker images and the platform will convert them during release and operations; or the platform will maintain its own dockershim to keep supporting Docker directly. Either way, there are solutions.

Architecture overview and summary

In this part, let's take a look at the architecture diagram of Kubernetes:

Through this series of learning, as ordinary programmers we have to admire how deep and thorough Google's engineering is. So many of the framework's components exist purely as the result of decoupling, and it offers an enormous degree of freedom. It is one of the most technically deep frameworks our development team has encountered while learning.

But this extreme degree of freedom also has downsides.

Deploying a Kubernetes cluster is very complex, and deploying one that meets production requirements is even harder. People even sell production-grade Kubernetes deployment scripts online, which says something about how large the system is.

For learning, you can use kind or minikube to simulate a Kubernetes cluster locally on top of Docker, but there is still a gap between such a setup and a real production environment.
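Both tools get a local cluster running with a single command; this sketch assumes Docker is installed and running, and cluster names are arbitrary:

```shell
# Option 1: kind runs each cluster "node" as a Docker container
kind create cluster --name dev

# Option 2: minikube with the Docker driver
minikube start --driver=docker

# Either way, verify the simulated cluster is reachable
kubectl cluster-info
```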


This series of articles has detailed the pitfalls our development team ran into while moving to the cloud.

From the development of cloud platforms to the concrete use of k8s, we walked through a cloud platform step by step: from the initial virtual machines, to the prototype of PaaS, to containerization with Docker, and finally to the transition and evolution into Kubernetes.

Human memory builds on what came before. It is clearly impossible to explain all the technical points and hard-to-remember terminology of Kubernetes in a single article. Our idea was to let everyone understand the evolution of the entire cloud ecosystem step by step, and thereby come to understand the whole project.

Finally, I want to give you a word:

What you learn on paper always feels shallow; to truly understand something, you must do it yourself.

After a first pass through the documentation, members of our team felt they had fully mastered it, but when they actually sat down to write things out, they found themselves lost and had no idea where to start.

Too many knowledge points stay at the stage of "heard of it" and "know what it is". We still recommend that you get hands-on and try the examples mentioned in these articles. We believe that once you have written them yourself, you will understand this material differently.

Although this series has ended, in follow-up content we will continue to share more of the technical know-how our colleagues at GrapeCity encounter in their daily work.

If you found this content helpful, please give it a like before you go~

Please indicate the source when reprinting. GrapeCity provides developers with professional development tools, solutions, and services, empowering developers.
