
Hello everyone, I am Xiao Cai, a "Cai" in the Internet industry eager to become more than a rookie. I can take it soft or hard: giving a like is soft, reading for free is hard!
~ After reading, remember to give me a triple combo!

This article mainly introduces the network components in k8s: Service and Ingress.

If necessary, you can refer to the previous articles in this series.

If it helps, don't forget to give it a like ❥

The WeChat public account is open; students who haven't followed yet, remember to follow!

We have already covered NameSpace, Pod, PodController and Volume in previous posts. I believe the friends who have read them gained a lot~ So today we continue the k8s class. In this section we will talk about how to access a service in k8s after it has been built!

First of all, we need to know what Service and Ingress are. Simply put, these two components are used for traffic load balancing. And what is traffic load balancing? Once we have deployed our application through pods inside the cluster, what is the next step? Letting users access our application, of course. That is the most important part; if users can't access it after you finish the deployment, what's the point~

One, Service

In k8s, the pod is the carrier of the application. We can access the application through the pod's IP, but we already know that pods have a life cycle: once a pod runs into a problem, the pod controller destroys and rebuilds it. The pod's IP then changes, so accessing the application directly through the pod IP obviously won't work. To solve this problem, k8s introduced the Service resource. Through this resource you can integrate multiple pods and provide a unified entry address; by accessing the Service's entry address, you reach the pod services behind it!

Service did not appear out of thin air. Do you remember the key component kube-proxy on the Node nodes? That's where the key lies~ Let's look at an old picture to recall:

Anyone who has read the previous blog posts will find this picture familiar. Yes, kube-proxy plays the key role here. Each Node runs a kube-proxy service process. When a Service is created, the API Server writes the Service information to etcd; kube-proxy perceives Service changes through a watch mechanism and converts the latest Service information into the corresponding access rules.

At this point you should have a general concept of Service; at least we know what it's for. Now let's dig a little deeper~

1) Working mode

kube-proxy supports 3 working modes, as follows:

1. userSpace

This mode is relatively stable, but its efficiency is low! In userspace mode, kube-proxy creates a listening port for each Service. When a request is sent to the Cluster IP, it is redirected by iptables rules to the port kube-proxy is listening on; kube-proxy then selects a Pod according to its LB algorithm to serve the request and establishes the connection.

In this mode, kube-proxy plays the role of a four-layer load balancer. Because kube-proxy runs in user space, forwarding involves copying data between kernel space and user space, so the efficiency is relatively low.

2. iptables

In iptables mode, kube-proxy creates iptables rules for each Pod behind the Service, which redirect requests sent to the Cluster IP directly to a Pod IP. In this mode, kube-proxy does not act as a four-layer load balancer; it is only responsible for creating the iptables rules. The advantage of this mode is higher efficiency than userspace mode, but it cannot provide a flexible LB strategy, and it cannot retry when a backend Pod is unavailable.

3. ipvs

This mode is similar to iptables mode: kube-proxy monitors Pod changes and creates the corresponding ipvs rules. However, ipvs forwards with higher efficiency than iptables and supports more LB algorithms.

Practice

Having learned the three working modes above, let's briefly try out the ipvs mode. First prepare a resource list:
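A minimal sketch of such a resource list, assuming an nginx Deployment named deploy-nginx and a Service named svc-nginx in the cbuc-test namespace (names and image version are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx         # hypothetical name
  namespace: cbuc-test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1  # assumed image version
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-nginx            # hypothetical name
  namespace: cbuc-test
spec:
  selector:
    app: nginx-pod
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80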

The first half of this list is to create a Pod controller, and the second half is to create a Service.

Then we run the ipvsadm -Ln command to look at the ipvs rules:
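The output looks roughly like this (illustrative; the virtual-server line shows the Service entry 10.108.230.12 mentioned below, and each -> line is a backend Pod, with Pod IPs as examples):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.108.230.12:80 rr
  -> 10.244.1.73:80               Masq    1      0          0
  -> 10.244.1.74:80               Masq    1      0          0
  -> 10.244.2.63:80               Masq    1      0          0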

10.108.230.12 is the access entry provided by the Service. When you access this entry, you can see three pod services waiting to be called; kube-proxy distributes each request to one of the pods. This rule is generated on all nodes in the cluster at the same time, so the entry can be accessed from any node!

This mode requires the ipvs kernel module to be installed; otherwise kube-proxy falls back to iptables mode.

To enable ipvs:

  1. kubectl edit cm kube-proxy -n kube-system
     # find the mode field and change it to mode: "ipvs"

Save and exit after editing (:wq)

  2. kubectl delete pod -l k8s-app=kube-proxy -n kube-system
     # rebuild the kube-proxy pods so the new mode takes effect
  3. ipvsadm -Ln
     # verify that the ipvs rules have been generated

2) Service use

Several working modes of Service were introduced above. Next we enter the usage phase. We already did a simple practice above: we created a Deploy and a Service, and we could then access resources through serviceIp + port or nodeIp + nodePort.

But learning just that about Service is not enough. Service is divided into 5 types, introduced one by one below.

1. ClusterIP

Let's first look at the resource list for a ClusterIP Service:
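A minimal sketch, assuming the Service name svc-clusterip and the selector app: nginx-pod from the earlier Deployment (the clusterIP 10.96.10.10 matches the curl tests later in this section):

apiVersion: v1
kind: Service
metadata:
  name: svc-clusterip        # hypothetical name
  namespace: cbuc-test
spec:
  selector:
    app: nginx-pod
  clusterIP: 10.96.10.10     # the entry IP used in the curl tests below
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80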

After creation, test access through clusterIp + port:

Let's check the ipvs rules again; we can see that the Service can forward to the corresponding 3 pods.

Next, we can use describe to view what information the Service has:
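Assuming the Service name from the sketch above, the command would be:

kubectl describe svc svc-clusterip -n cbuc-test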

Scanning through the output, we find Endpoints and Session Affinity, two fields we haven't met before. So what are they?

Endpoint

Endpoint is a resource object in k8s, stored in etcd, used to record the access addresses of all the Pods corresponding to a Service; it is generated according to the selector described in the Service configuration file. A Service consists of a group of Pods, exposed through the Endpoints. In other words, an Endpoint is the collection of addresses and ports that actually implement the service; simply put, Endpoint is the bridge between Service and Pod!

Since it is a resource, we can get it:
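For example (Service name assumed from the sketch above):

kubectl get endpoints svc-clusterip -n cbuc-test -o wide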

Load distribution

We successfully accessed Pod resources through the Service above. Now let's make some modifications: enter each of the 3 pods and edit the /usr/share/nginx/html/index.html file:

# pod01
Pod01 :  ip - 10.244.1.73
# pod02
Pod02 :  ip - 10.244.1.74
# pod03
Pod03 :  ip - 10.244.2.63

Then we curl 10.96.10.10:80 several times to view the result:

Did you notice? The load distribution strategy here is round-robin! For Service access, k8s provides two load distribution strategies:

  • If no distribution strategy is defined, the kube-proxy strategy is used by default, e.g. random or round-robin
  • Session affinity based on the client address: all requests from the same client are forwarded to a fixed pod. This uses the sessionAffinity field we just saw above

When we viewed the distribution policy with the ipvsadm -Ln command earlier, there was an rr field; did you notice it? That's right, rr stands for round-robin.

If we want to enable the session-affinity distribution strategy, we only need to add the sessionAffinity: ClientIP option under spec.
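A sketch of the change, on top of the ClusterIP Service above:

spec:
  sessionAffinity: ClientIP  # all requests from the same client IP go to the same Pod
  selector:
    app: nginx-pod
  clusterIP: 10.96.10.10
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80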

Check the distribution strategy again through the ipvsadm -Ln command, and you can find that the result has changed.

Let's test it briefly:

In this way, the session-affinity distribution strategy has been realized!

Note: a ClusterIP Service does not support external access; that is, accessing it through a browser doesn't work. It can only be accessed inside the cluster.

2. Headless

Many services need to support customization; a product that can only be used one way is hardly a success. In some scenarios, developers do not want to use the load balancing provided by the Service, but want to control the load balancing strategy themselves. k8s supports this situation too, with the Headless Service: this type of Service is not assigned a ClusterIP, so it can only be accessed by querying through the Service's domain name.

Let's take a look at the Headless resource list template:

The only difference from ClusterIP is the clusterIP: None attribute.
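A minimal sketch, assuming the name svc-headless:

apiVersion: v1
kind: Service
metadata:
  name: svc-headless         # hypothetical name
  namespace: cbuc-test
spec:
  selector:
    app: nginx-pod
  clusterIP: None            # the only change: no ClusterIP is assigned
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80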

After creation, you can find that no ClusterIP is assigned. We continue to view the details of the Service:

Through the details, we can find that the Endpoints have taken effect. Then we enter any pod and check the domain name resolution:
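For example, inside a pod (10.96.0.10 is the cluster DNS address used later in this article; the Service name is assumed from the sketch above):

dig @10.96.0.10 svc-headless.cbuc-test.svc.cluster.local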

You can see that the domain name is resolved. The default domain name is serviceName.namespace.svc.cluster.local

3. NodePort

The above two Service types can only be accessed inside the cluster, but we deploy services so users can use them from outside the cluster. So this time we need the Service type we created at the beginning, the NodePort Service.

The working principle of this type of Service is not difficult: it maps the Service's port to a port on the Node, and access then goes through NodeIp + NodePort.

After reading the schematic diagram, everything suddenly becomes clear. So let's see how to create one through a resource list:
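A minimal sketch, assuming the name svc-nodeport and the node port 30080 (any port in the default 30000-32767 NodePort range works):

apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport         # hypothetical name
  namespace: cbuc-test
spec:
  selector:
    app: nginx-pod
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080          # hypothetical; must fall in the NodePort range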

We create the Service from the above resource list and then visit:
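Assuming the sketch above, the two access methods would look like this (both IPs are illustrative):

# inside the cluster: serviceIp + port
curl 10.108.230.12:80
# from outside the cluster: nodeIp + nodePort (node IP is hypothetical)
curl 192.168.108.100:30080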

As you can see, it can be accessed both ways. We can also try it in the browser:

This result is as we wished!

Don't be satisfied just yet. True, users can now access the service successfully~ but let's strike while the iron is hot and learn the remaining two types~

4. LoadBalancer

LoadBalancer: you can tell from the name that it's related to load balancing. This type is similar to NodePort; the purpose is also to expose a port to the outside. The main difference is that LoadBalancer sets up a load balancing device outside the cluster, and this device needs the support of the external environment: requests users send to the device are forwarded into the cluster after the device balances the load.

The figure involves the concept of a Vip. Vip here refers to a Virtual IP. External users access this virtual IP, which balances the load across our different services, achieving load balancing and high availability.
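A sketch of what such a Service looks like, assuming a cluster environment (e.g. a cloud provider) that supplies the external load balancer:

apiVersion: v1
kind: Service
metadata:
  name: svc-loadbalancer     # hypothetical name
  namespace: cbuc-test
spec:
  selector:
    app: nginx-pod
  type: LoadBalancer         # the external environment provisions the LB device
  ports:
  - port: 80
    targetPort: 80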

5. ExternalName

The ExternalName type Service is used to introduce a service from outside the cluster. It specifies the address of the external service through the externalName attribute; accessing this Service from inside the cluster then reaches the external service.

Resource list:
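A minimal sketch; the Service name and namespace match the dig command below, while the externalName target is just an example:

apiVersion: v1
kind: Service
metadata:
  name: svc-externalname
  namespace: cbuc-test
spec:
  type: ExternalName
  externalName: www.baidu.com  # example external address; replace with the real service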

After creation, we can check the domain name resolution and find that the resolution has been successful:

dig @10.96.0.10 svc-externalname.cbuc-test.svc.cluster.local

Two, Ingress

1) Working mode

We have covered the usage of several Service types above. We know that to let external users access the services in our pods, two types are available: NodePort and LoadBalancer. But if we analyze carefully, it is not difficult to find the shortcomings of these two:

  • NodePort: it occupies many ports on the cluster machines; as cluster services grow, this shortcoming becomes more obvious
  • LoadBalancer: each Service needs its own LB, which is clumsy and wastes resources, and it requires load balancing equipment outside of k8s

Of course, we are not the only ones to notice these shortcomings; the k8s maintainers realized them long ago and introduced the concept of Ingress. Ingress needs only one NodePort or one LB to satisfy the need of exposing multiple services:


In fact, Ingress is equivalent to a 7-layer load balancer; it is k8s's abstraction of a reverse proxy. Its working principle is similar to Nginx: you establish many mapping rules in the Ingress, and the Ingress Controller converts these configuration rules into an Nginx reverse proxy configuration, which then provides the service externally. Two important concepts are involved here:

  • Ingress: a resource object in k8s, which defines the rules for how requests are forwarded to Services
  • Ingress Controller: the program that actually implements the reverse proxy and load balancing; it parses the rules defined by the Ingress and forwards requests according to the configured rules. There are many implementations, such as Nginx, Contour, Haproxy, etc.

The Ingress Controller has many ways to implement request forwarding; we usually choose the familiar Nginx as the load balancer. Taking Nginx as an example, let's first understand its working principle:

  1. The user writes Ingress rules in the k8s cluster, stating which Service each domain name corresponds to
  2. The Ingress controller dynamically perceives changes to the Ingress rules and generates the corresponding Nginx reverse proxy configuration
  3. The Ingress controller writes the generated configuration into a running Nginx service and updates it dynamically
  4. When a client accesses the domain name, Nginx forwards the request to the specific Pod, and the whole request process is complete

Now that we understand the working principle, let's put it into practice~

2) Ingress use

1. Environment Setup

Before using Ingress, we need to set up an Ingress environment

Step one:
# Pull the resource lists we need
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml
Step two:
# Create the resources
kubectl apply -f ./
Step three:

Check if the resource is created successfully
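For example (the mandatory.yaml above creates everything under the ingress-nginx namespace):

kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx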

At this point we have prepared the Ingress environment, and now we come to the testing phase~

We prepared two Services and two Deployments, and created 6 Pod replicas

If you still can’t prepare these resources, you have to go back and do your homework~

The general structure diagram is as follows:

Now we prepare an Ingress to achieve the following result:

Prepare the Ingress resource list:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-http
  namespace: cbuc-test
spec:
  rules:
  - host: dev.cbuc.cn
    http: 
      paths:
      - path: /
        backend:
          serviceName: svc-nodeport-dev
          servicePort: 80
  - host: pro.cbuc.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: svc-nodeport-pro
          servicePort: 80

After creation, we also need to add domain name mappings to the hosts file:
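For example, on the machine used for access (the node IP is hypothetical):

# /etc/hosts
192.168.108.100 dev.cbuc.cn pro.cbuc.cn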

Then you can access the pages in a browser through domain name + nodePort:

At this point we have realized the Ingress access method!

END

By now we have walked through the whole usage of k8s, from the most basic NameSpace all the way to the network configuration in this section. I wonder whether you have kept up with your studies~! This stage of k8s has come to an end; see you in the next chapter!


Want to see what tricks Xiaocai writes next? Remember to follow~

The harder you work today, the fewer words of pleading you'll have to say tomorrow!

I am Xiaocai, a man who studies with you. 💋


