Please indicate the source when reprinting: the GrapeCity official website. GrapeCity provides developers with professional development tools, solutions, and services, and is committed to empowering developers.

In the previous section, we covered deploying 活字格 (Forguncy) on k8s and how Kubernetes orchestrates and manages the containers. To go one step further and let services call each other, both inside and outside the cluster, we need service discovery. This is the "human and dog" relationship we mentioned earlier.

[Figure: the "human and dog" relationship mentioned in the previous article]

If you have built microservices before, you probably already know what service discovery is. In the Spring Cloud ecosystem, Eureka performs this job: it registers internal services so that other services in the cluster can discover and call them.

 

It is reasonable to guess that Kubernetes very likely inspired the service discovery mechanisms found in the various microservice frameworks.

In Kubernetes, service discovery is handled by two objects: Service and Ingress. Let's look at them one at a time.

Service and Ingress

A Service plays a role similar to service registration.

 

The idea is simple: when you declare a Service in Kubernetes, a VIP (virtual IP) is generated for it. Every other component in the cluster can reach the proxied service through this VIP, and the VIP does not change as the Pods behind the Service come and go; as long as the Service exists, the address stays the same.

Service

So what does a Service actually serve? Just like the Deployment above, this is decided by the selector. We can create a Service with the following YAML:

apiVersion: v1
kind: Service
metadata:
  name: hostnames
spec:
  selector:
    app: hostnames
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 9376

From the previous article we know that this Service proxies the Pods that match app == hostnames. There is also a new field here, ports, which describes the protocol of the proxied service (protocol), the port the Service exposes (port), and the port the backing Pods listen on (targetPort).
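As a quick aside (not from the original article), once a Service is created it is also reachable inside the cluster by the DNS name that CoreDNS assigns to it; assuming the Service lives in the default namespace, a request to the Service on port 80 is forwarded to targetPort 9376 on one of the selected Pods:

# From any Pod in the cluster: port 80 is the Service port,
# the request lands on port 9376 of one of the matching Pods.
curl http://hostnames.default.svc.cluster.local:80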

 

We can create the Service and then inspect it using this sample-service.yaml file:


# create
kubectl apply -f sample-service.yaml

# view
kubectl get services hostnames

[Figure: output of kubectl get services hostnames, showing the assigned ClusterIP]

The Service has a ClusterIP. This IP is the VIP generated for the Service, and other members of the cluster can reach the Service through it. However, since we have not yet given the Service any concrete workload to proxy, requests to this IP will not succeed for now.
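A quick way to confirm this (a check added here, not shown in the original article) is to look at the Endpoints object Kubernetes maintains for the Service; with no matching Pods it stays empty:

# The ENDPOINTS column shows <none> until Pods matching the selector exist
kubectl get endpoints hostnames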

 

Next, we need to create a concrete implementation behind this Service. The following sample-deployment.yaml creates a multi-replica Pod whose only job is to return its own pod name:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostnames
spec:
  selector:
    matchLabels:
      app: hostnames
  replicas: 3
  template:
    metadata:
      labels:
        app: hostnames
    spec:
      containers:
      - name: hostnames
        image: k8s.gcr.io/serve_hostname
        ports:
        - containerPort: 9376
          protocol: TCP

In this manifest we expose port 9376 of the container, because that is the port the Pod uses to communicate with the outside. As before, we run the following commands to create the Deployment and view its Pod replicas:

# create
kubectl apply -f sample-deployment.yaml

# view
kubectl get pods -l app=hostnames

[Figure: output of kubectl get pods -l app=hostnames, showing the three replicas running]

Here you can see that the Pod replicas were created successfully. According to the controller pattern described in the previous section, Service also has its own controller: it finds the Pods that satisfy app == hostnames, which binds those Pods to the Service. At this point, we can request the ClusterIP mentioned above from any host in the cluster:

[Figure: repeated requests to the ClusterIP returning a different pod name each time]

As you can see, we made many requests and got a different pod name back each time. This is because the Service layer (kube-proxy working with the network plug-in, CNI) load-balances requests across the backing Pods, so a Service gives us load balancing as part of the deal.
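For reference, a minimal sketch of such a test from any node in the cluster; the IP below is only a placeholder for whatever ClusterIP your Service was actually assigned:

# Repeated requests to the Service VIP are spread across the three replicas,
# so different calls return different pod names.
for i in 1 2 3 4 5; do curl 10.0.1.175:80; done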

"Going astray" in the learning process

When I was learning this part, I held a misconception for a long time: I thought a Service had to be paired with the Deployment, the orchestration controller of the Pods, in order to work, so I kept the chain Service --> Deployment --> Pods firmly in mind. That understanding is actually wrong.

 

In Kubernetes, every component does its own job and only its own job. In this case, the Service binds to the Pods purely through app == hostnames in its selector; the fact that this label happens to be defined in the Pod template inside the Deployment is incidental. Service and Deployment know nothing about each other. The relationship can be described by the following figure:

[Figure: Service and Deployment each select the Pods through the app=hostnames label, without referencing each other]

Moreover, I also used to believe, wrongly, that load balancing was provided by the Deployment. In fact it is handled at the Service level by the networking layer, and users can even customize which proxy mode or load-balancing algorithm is used. Kubernetes leaves users plenty of freedom here.
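To make this concrete, here is a minimal sketch (not from the original article) of a standalone Pod that no Deployment manages at all; because it carries the label app: hostnames and listens on port 9376, the hostnames Service would select it just the same:

apiVersion: v1
kind: Pod
metadata:
  name: hostnames-extra     # a hypothetical name, not part of the article's example
  labels:
    app: hostnames          # the only thing the Service's selector looks at
spec:
  containers:
  - name: hostnames
    image: k8s.gcr.io/serve_hostname
    ports:
    - containerPort: 9376
      protocol: TCP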

Ingress

With Service in place, our services can be reached freely inside the cluster, which covers service-to-service communication. But if we want end users to reach our services, we still need one last component: Ingress.

 

Ingress is Kubernetes' reverse-proxy object. It takes a configured domain name and routes requests for it to an internal Service. It can be defined with the following YAML:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
spec:
  rules:
  - host: hostname.sample.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hostnames
          servicePort: 80

In the YAML above, we point the domain hostname.sample.com at the hostnames Service we just defined. With this configuration, our service can be reached from outside through the domain name. The Ingress is created with the same kind of command as before:

kubectl apply -f sample-ingress.yaml

 

With this configuration in place, our service should in principle be reachable from the outside. In practice, however, we have no environment to test it here: the local Kubernetes cluster was created with kind, whose "nodes" are really multiple Docker containers rather than machines. Everything above runs inside those containers, with Docker simulating the Kubernetes nodes, so this is the only functional module in this article that we cannot verify.
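If you do have a cluster where an Ingress controller is installed and reachable (installing one is outside the scope of this article), a common way to test the rule without configuring DNS is to send the expected Host header straight to the controller's address; the address below is a placeholder:

# Pretend to be hostname.sample.com; the controller should route the request
# to the hostnames Service on port 80.
curl -H "Host: hostname.sample.com" http://<ingress-controller-address>/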

Deploying a complete 活字格 application

In the previous section we learned how the orchestration controllers manage Pods; in this section we added service discovery so that services can call each other internally and be exposed externally. Now we can return to the original question: how to successfully deploy a 活字格 application.

 

Walking through the basic workflow of Kubernetes, we can see the whole path: a service becomes a Pod, is deployed by a Deployment, is discovered through a Service, and is exposed through an Ingress reverse proxy. With these modules working together, our 活字格 application can finally be deployed in the Kubernetes cluster.

[Figure: the complete flow from Pod to Deployment to Service to Ingress]

I hope this diagram gives you a more intuitive picture.

 

Summary

Up to this chapter, we have covered the full process of deploying 活字格 on k8s. The next section brings the last article of this series, a Kubernetes overview, so that everyone gets an overall picture of a Kubernetes cluster together with a summary of some deeper features.

 

If you are interested, don't miss it~ See you in the next article.

