The service mesh is an emerging architecture pattern that is attracting more and more attention. Together with Kubernetes, a service mesh can form a powerful platform that addresses the technical requirements arising in the highly distributed environments found in microservice clusters or service infrastructures. A service mesh is a dedicated infrastructure layer that facilitates service-to-service communication between microservices.

The service mesh addresses the typical communication requirements of microservice-based applications, including encrypted tunnels, health checks, circuit breakers, load balancing, and traffic permissions. Leaving each microservice to solve these requirements on its own makes development costly and time-consuming.

In this article, we will give an overview of the most common microservice communication requirements addressed by the service mesh architecture pattern.

Microservice dynamics and inherent challenges

The problem arises when you realize that microservices end up implementing a great deal of code that has nothing to do with the business logic they were created for. In addition, you may have multiple microservices implementing similar capabilities in non-standardized ways. In other words, the microservice development team should focus on business logic and leave low-level communication capabilities to a dedicated layer.

To move forward with this plan, we need to consider the internal dynamics of microservices. At any given time, you may have multiple instances of a microservice for several reasons:

  • Throughput: depending on the volume of incoming requests, you may have more or fewer instances of a microservice
  • Canary release
  • Blue-green deployment
  • A/B testing

In short, microservice-to-microservice communication has specific requirements and problems that need to be solved. The following picture shows this scenario:

The diagram highlights several technical challenges. Clearly, one of the responsibilities of Microservice 1 is to balance the load among all Microservice 2 instances. To do so, Microservice 1 has to figure out how many Microservice 2 instances exist at request time. In other words, Microservice 1 must implement service discovery and load balancing.

On the other hand, Microservice 2 must implement some service registration capability to tell Microservice 1 when a new instance is available.

For a fully dynamic environment, the following capabilities should also be part of microservice development:

  • Traffic control: a natural evolution of load balancing, where we specify the number of requests that should be sent to each Microservice 2 instance
  • Encrypted communication between Microservices 1 and 2
  • Circuit breakers and health checks to address and overcome network problems

All in all, the main problem is that development teams spend a lot of resources writing very complex code that is not directly related to the business logic their microservices are expected to deliver.

Potential solutions

What if we externalized all of these non-functional, operational capabilities into standardized external components that all microservices can call? For example, the following figure groups all the capabilities that do not belong in a given microservice. After identifying them, we need to decide where to implement them.


Solution #1: Encapsulate all functions in a library

The developer will be responsible for calling the functions provided by the library to solve the microservice communication requirements.

This solution has several disadvantages:

  • It is a tightly coupled solution, meaning the microservices are highly dependent on the library
  • It makes distributing and upgrading new versions of the library difficult
  • It conflicts with the polyglot principle of microservices, since different programming languages may be used in different contexts

Solution #2: Transparent Proxy


This solution implements the same set of capabilities but takes a very different approach: each microservice has a dedicated component, acting as a proxy, that takes care of its incoming and outgoing traffic. The proxy addresses the library shortcomings described earlier, as follows:

  • The proxy is transparent, meaning the microservice is not aware that it is running alongside it, implementing all the capabilities needed to communicate with other microservices.
  • Since the proxy is transparent, developers do not need to change their code to reference it. Therefore, from the microservice development perspective, upgrading the proxy has little impact on the development process.
  • The proxy can be developed using technologies and programming languages different from the ones used by the microservices.

Service mesh architecture pattern

Although the transparent proxy approach brings benefits to microservice development teams and to the microservice communication requirements, some parts are still missing:

  • The proxy only enforces the policies that implement the communication requirements, such as load balancing, canary releases, and so on.
  • Who is responsible for defining such policies and publishing them across all running proxies?

The solution architecture needs another component, which administrators use to define policies and which is responsible for propagating those policies to the proxies.

The following picture shows the final architecture, which is the service mesh pattern:


As you can see, the pattern comprises the two main components we have described:

  • Data plane: also known as the sidecar, it acts as the transparent proxy. Each microservice has its own data plane, which intercepts all inbound and outbound traffic and applies the policies described previously.
  • Control plane: used by administrators to define policies and publish them to the data planes.

Some important things need to be noted:

  • This is a "push-based" architecture: the data plane does not "call home" to fetch policies, which would consume the network.
  • The data plane usually reports usage metrics to the control plane or to a dedicated infrastructure.

How to use Rancher, Kong and Kong Mesh

Kong provides an enterprise-class integrated service connectivity platform that includes an API gateway, a Kubernetes ingress controller, and a service mesh implementation. The platform lets users deploy across multiple environments, such as on-premises, hybrid cloud, multi-region, and multi-cloud environments.

Let us implement a service mesh with a canary release running on a cloud-agnostic Kubernetes cluster, which could be a GKE cluster or any other Kubernetes distribution. The service mesh will be implemented by Kong Mesh, and Kong for Kubernetes will serve as the Kubernetes Ingress Controller. Generally speaking, the ingress controller is responsible for defining entry points into your Kubernetes cluster, exposing the microservices deployed inside it, and applying consumption policies to them.

First, make sure you have installed Rancher and are running a Kubernetes cluster managed by Rancher. After logging in to Rancher, select the Kubernetes cluster we will use, in this case "kong-rancher". Click Cluster Explorer. You will be redirected to the following page:


Now, let's start with the service mesh:

1. Kong Mesh Helm Chart

Go back to the Rancher Cluster Manager homepage and select your cluster again. Click the "Tools" option in the menu bar and then click Catalogs to create a new catalog. Click the Add Catalog button to include the Kong Mesh Helm chart ( https://kong.github.io/kong-mesh-charts/).

Select Global as the scope and Helm v3 as the Helm version.
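
If you prefer working from a terminal, the same chart repository can also be registered with the Helm CLI; this is just a sketch of an equivalent to the catalog step above:

helm repo add kong-mesh https://kong.github.io/kong-mesh-charts/
helm repo update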


Now click on Apps and Launch to see Kong Mesh available in the Catalog. Note that Kong, as a Rancher partner, provides the Kong for Kubernetes Helm chart by default:


2. Install Kong Mesh

Click the Namespaces option in the top menu bar and create a "kong-mesh-system" namespace.
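
If you prefer kubectl, the same namespace can be created from the command line:

kubectl create namespace kong-mesh-system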


Hover over the kong-rancher option in the top menu and click the kong-rancher active cluster.


Click Launch kubectl.


Create a file named "license.json" to store the license you received from Kong Mesh. The format is as follows:

{"license":
{"version":1,"signature":"6a7c81af4b0a42b380be25c2816a2bb1d761c0f906ae884f93eeca1fd16c8b5107cb6997c958f45d247078ca50a25399a5f87d546e59ea3be28284c3075a9769","payload":
{"customer":"Kong_SE_Demo_H1FY22","license_creation_date":"2020-11-30","product_subscription":"Kong Enterprise Edition","support_plan":"None","admin_seats":"5","dataplanes":"5","license_expiration_date":"2021-06-30","license_key":"XXXXXXXXXXXXX"}}}

Now create a Kubernetes generic secret using the following command:

kubectl create secret generic kong-mesh-license -n kong-mesh-system --from-file=./license.json
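
As an optional sanity check, you can confirm the secret exists before proceeding:

kubectl get secret kong-mesh-license -n kong-mesh-system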

Close the kubectl session, then click the Default project and Apps on the top menu bar. Click the Launch button and select the kong-mesh Helm chart.

Click Use an existing namespace and select the one we just created. There are several parameters ( https://artifacthub.io/packages/helm/kong-mesh/kong-mesh) to configure Kong Mesh, but we will keep all the default values. After clicking Launch, you should see the Kong Mesh application deployment completed.
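
For reference, a roughly equivalent CLI-based installation would look like the sketch below, assuming the chart repository added to your Helm CLI earlier and all values left at their defaults; the release name is illustrative:

helm install kong-mesh kong-mesh/kong-mesh --namespace kong-mesh-system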


You can use Rancher Cluster Explorer again to check the installation. Click Pods on the left menu and select the kong-mesh-system namespace.

You can also check it with kubectl:

$ kubectl get pod --all-namespaces
NAMESPACE          NAME                                                      READY   STATUS    RESTARTS   AGE
cattle-system      cattle-cluster-agent-785fd5f54d-r7x8r                     1/1     Running   0          75m
fleet-system       fleet-agent-77c78f9c74-f97tv                              1/1     Running   0          75m
kong-mesh-system   kuma-control-plane-5b9c6f4598-nvq8q                       1/1     Running   0          16m
kube-system        event-exporter-gke-666b7ffbf7-n9lfl                       2/2     Running   0          76m
kube-system        fluentbit-gke-xqsdv                                       2/2     Running   0          76m
kube-system        gke-metrics-agent-gjrqr                                   1/1     Running   0          76m
kube-system        konnectivity-agent-4c4hf                                  1/1     Running   0          76m
kube-system        kube-dns-66d6b7c877-tq877                                 4/4     Running   0          76m
kube-system        kube-dns-autoscaler-5c78d65cd9-5hcxs                      1/1     Running   0          76m
kube-system        kube-proxy-gke-c-kpwnf-default-0-be059c1c-49qp            1/1     Running   0          76m
kube-system        l7-default-backend-5b76b455d-v6dvg                        1/1     Running   0          76m
kube-system        metrics-server-v0.3.6-547dc87f5f-qntjf                    2/2     Running   0          75m
kube-system        prometheus-to-sd-fdf9j                                    1/1     Running   0          76m
kube-system        stackdriver-metadata-agent-cluster-level-68d94db6-64n4r   2/2     Running   1          75m

3. Deploy the microservices

Our Service Mesh deployment is based on a simple microservice-to-microservice communication scenario. Since we are running a canary release, there are two versions of the called microservice:

"Magnanimo": Expose the Kubernetes ingress controller through Kong.
"Benigno": Provides a "hello" endpoint, in which it echoes the current datetime. It has a canary release and sends a slightly different response.

The following figure shows this architecture:


Create a namespace with the sidecar injection annotation. You can use Rancher Cluster Manager again: select your cluster and click Projects/Namespaces. Click Add Namespace. Enter "kong-mesh-app" as the name and include an annotation with the key "kuma.io/sidecar-injection" and the value "enabled".


Of course, you can also choose to use kubectl:

kubectl create namespace kong-mesh-app

kubectl annotate namespace kong-mesh-app kuma.io/sidecar-injection=enabled
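
Either way, you can verify that the annotation is in place before deploying anything into the namespace:

kubectl get namespace kong-mesh-app -o yaml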

Submit the following manifest to deploy Magnanimo, injecting the Kong Mesh data plane:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: magnanimo
  namespace: kong-mesh-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: magnanimo
  template:
    metadata:
      labels:
        app: magnanimo
    spec:
      containers:
      - name: magnanimo
        image: claudioacquaviva/magnanimo
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  name: magnanimo
  namespace: kong-mesh-app
  labels:
    app: magnanimo
spec:
  type: ClusterIP
  ports:
  - port: 4000
    name: http
  selector:
    app: magnanimo
EOF

Use Rancher Cluster Manager to check your deployment: hover over the kong-rancher menu and click the Default project to see the current deployments:


Click magnanimo to check the details of the deployment, including its pods:


Click on the magnanimo pod to check the containers running inside it.


We can see that the pod has two running containers:

  • magnanimo: where the microservice actually runs.
  • kuma-sidecar: injected during deployment as the data plane of Kong Mesh.
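
If you prefer the command line, you can list the containers of the Magnanimo pod directly; this sketch relies on the "app: magnanimo" label set in the Deployment above:

kubectl get pods -n kong-mesh-app -l app=magnanimo -o jsonpath='{.items[0].spec.containers[*].name}'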

Similarly, when deploying Benigno, it also has its own sidecar:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: benigno-v1
  namespace: kong-mesh-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: benigno
  template:
    metadata:
      labels:
        app: benigno
        version: v1
    spec:
      containers:
      - name: benigno
        image: claudioacquaviva/benigno
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: benigno
  namespace: kong-mesh-app
  labels:
    app: benigno
spec:
  type: ClusterIP
  ports:
  - port: 5000
    name: http
  selector:
    app: benigno
EOF

Finally, deploy the Benigno canary release. Note that the canary release is abstracted by the same Benigno Kubernetes Service created before:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: benigno-v2
  namespace: kong-mesh-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: benigno
  template:
    metadata:
      labels:
        app: benigno
        version: v2
    spec:
      containers:
      - name: benigno
        image: claudioacquaviva/benigno_rc
        ports:
        - containerPort: 5000
EOF

Use the following command to check the deployment and Pods:

$ kubectl get pod --all-namespaces
NAMESPACE          NAME                                                      READY   STATUS    RESTARTS   AGE
cattle-system      cattle-cluster-agent-785fd5f54d-r7x8r                     1/1     Running   0          75m
fleet-system       fleet-agent-77c78f9c74-f97tv                              1/1     Running   0          75m
kong-mesh-app      benigno-v1-fd4567d95-drnxq                                2/2     Running   0          110s
kong-mesh-app      benigno-v2-b977c867b-lpjpw                                2/2     Running   0          30s
kong-mesh-app      magnanimo-658b67fb9b-tzsjp                                2/2     Running   0          5m3s
kong-mesh-system   kuma-control-plane-5b9c6f4598-nvq8q                       1/1     Running   0          16m
kube-system        event-exporter-gke-666b7ffbf7-n9lfl                       2/2     Running   0          76m
kube-system        fluentbit-gke-xqsdv                                       2/2     Running   0          76m
kube-system        gke-metrics-agent-gjrqr                                   1/1     Running   0          76m
kube-system        konnectivity-agent-4c4hf                                  1/1     Running   0          76m
kube-system        kube-dns-66d6b7c877-tq877                                 4/4     Running   0          76m
kube-system        kube-dns-autoscaler-5c78d65cd9-5hcxs                      1/1     Running   0          76m
kube-system        kube-proxy-gke-c-kpwnf-default-0-be059c1c-49qp            1/1     Running   0          76m
kube-system        l7-default-backend-5b76b455d-v6dvg                        1/1     Running   0          76m
kube-system        metrics-server-v0.3.6-547dc87f5f-qntjf                    2/2     Running   0          75m
kube-system        prometheus-to-sd-fdf9j                                    1/1     Running   0          76m
kube-system        stackdriver-metadata-agent-cluster-level-68d94db6-64n4r   2/2     Running   1          75m


$ kubectl get service --all-namespaces
NAMESPACE          NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                                                AGE
default            kubernetes             ClusterIP   10.0.16.1     <none>        443/TCP                                                79m
kong-mesh-app      benigno                ClusterIP   10.0.20.52    <none>        5000/TCP                                               4m6s
kong-mesh-app      magnanimo              ClusterIP   10.0.30.251   <none>        4000/TCP                                               7m18s
kong-mesh-system   kuma-control-plane     ClusterIP   10.0.21.228   <none>        5681/TCP,5682/TCP,443/TCP,5676/TCP,5678/TCP,5653/UDP   18m
kube-system        default-http-backend   NodePort    10.0.19.10    <none>        80:32296/TCP                                           79m
kube-system        kube-dns               ClusterIP   10.0.16.10    <none>        53/UDP,53/TCP                                          79m
kube-system        metrics-server         ClusterIP   10.0.20.174   <none>        443/TCP                                                79m

You can also use the Kong Mesh console to check the microservices and data planes. Run the following command in a terminal:

kubectl port-forward service/kuma-control-plane -n kong-mesh-system 5681

Redirect your browser to http://localhost:5681/gui. Click Skip to Dashboard and All Data Plane Proxies:


Start a loop to see how the canary release behaves. Note that the services have been deployed as the ClusterIP type, so you need to expose them directly with "port-forward". The next step will show how to expose the services with the Ingress Controller instead.

Run on the local terminal:

kubectl port-forward service/magnanimo -n kong-mesh-app 4000

Open another terminal and start the loop. Requests go to port 4000, served by Magnanimo. The path "/hw2" routes the request to the Benigno service, which has two endpoints behind it, corresponding to the two versions of Benigno:

while true; do curl http://localhost:4000/hw2; echo; done

You should see results similar to the following:

Hello World, Benigno: 2020-11-20 12:57:05.811667
Hello World, Benigno: 2020-11-20 12:57:06.304731
Hello World, Benigno, Canary Release: 2020-11-20 12:57:06.789208
Hello World, Benigno: 2020-11-20 12:57:07.269674
Hello World, Benigno, Canary Release: 2020-11-20 12:57:07.755884
Hello World, Benigno, Canary Release: 2020-11-20 12:57:08.240453
Hello World, Benigno: 2020-11-20 12:57:08.728465
Hello World, Benigno: 2020-11-20 12:57:09.208588
Hello World, Benigno, Canary Release: 2020-11-20 12:57:09.689478
Hello World, Benigno, Canary Release: 2020-11-20 12:57:10.179551
Hello World, Benigno: 2020-11-20 12:57:10.662465
Hello World, Benigno: 2020-11-20 12:57:11.145237
Hello World, Benigno, Canary Release: 2020-11-20 12:57:11.618557
Hello World, Benigno: 2020-11-20 12:57:12.108586
Hello World, Benigno, Canary Release: 2020-11-20 12:57:12.596296
Hello World, Benigno, Canary Release: 2020-11-20 12:57:13.093329
Hello World, Benigno: 2020-11-20 12:57:13.593487
Hello World, Benigno, Canary Release: 2020-11-20 12:57:14.068870

4. Controlling the canary release

As we have seen, requests are distributed across the two Benigno versions in a round-robin fashion. In other words, we have no control over the canary release traffic. The service mesh allows us to define when and how the canary release is exposed to our consumers (in this case, the Magnanimo microservice).

To define a policy controlling how traffic flows to the two versions, apply the following declaration. It says that 90% of the traffic should go to the current version, while only 10% should be redirected to the canary release.

cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
mesh: default
metadata:
  namespace: default
  name: route-1
spec:
  sources:
  - match:
      kuma.io/service: magnanimo_kong-mesh-app_svc_4000
  destinations:
  - match:
      kuma.io/service: benigno_kong-mesh-app_svc_5000
  conf:
    split:
    - weight: 90
      destination:
        kuma.io/service: benigno_kong-mesh-app_svc_5000
        version: v1
    - weight: 10
      destination:
        kuma.io/service: benigno_kong-mesh-app_svc_5000
        version: v2
EOF
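
To confirm the route was accepted, you can list the TrafficRoute resources; the exact output columns may vary with your Kong Mesh/Kuma version:

kubectl get trafficroutes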

After applying the declaration, you should see the following result:

Hello World, Benigno: 2020-11-20 13:05:02.553389
Hello World, Benigno: 2020-11-20 13:05:03.041120
Hello World, Benigno: 2020-11-20 13:05:03.532701
Hello World, Benigno: 2020-11-20 13:05:04.021804
Hello World, Benigno: 2020-11-20 13:05:04.515245
Hello World, Benigno, Canary Release: 2020-11-20 13:05:05.000644
Hello World, Benigno: 2020-11-20 13:05:05.482606
Hello World, Benigno: 2020-11-20 13:05:05.963663
Hello World, Benigno, Canary Release: 2020-11-20 13:05:06.446599
Hello World, Benigno: 2020-11-20 13:05:06.926737
Hello World, Benigno: 2020-11-20 13:05:07.410605
Hello World, Benigno: 2020-11-20 13:05:07.890827
Hello World, Benigno: 2020-11-20 13:05:08.374686
Hello World, Benigno: 2020-11-20 13:05:08.857266
Hello World, Benigno: 2020-11-20 13:05:09.337360
Hello World, Benigno: 2020-11-20 13:05:09.816912
Hello World, Benigno: 2020-11-20 13:05:10.301863
Hello World, Benigno: 2020-11-20 13:05:10.782395
Hello World, Benigno: 2020-11-20 13:05:11.262624
Hello World, Benigno: 2020-11-20 13:05:11.743427
Hello World, Benigno: 2020-11-20 13:05:12.221174
Hello World, Benigno: 2020-11-20 13:05:12.705731
Hello World, Benigno: 2020-11-20 13:05:13.196664
Hello World, Benigno: 2020-11-20 13:05:13.680319

5. Install Kong for Kubernetes

Let's go back to Rancher to install our Kong for Kubernetes Ingress Controller and control the exposure of the service mesh. On the Rancher Catalog page, click the Kong icon. Accept the default values and click Launch:


You should see that both the Kong and Kong Mesh applications have been deployed:


Use kubectl again to check the installation:

$ kubectl get pod --all-namespaces
NAMESPACE          NAME                                                      READY   STATUS    RESTARTS   AGE
cattle-system      cattle-cluster-agent-785fd5f54d-r7x8r                     1/1     Running   0          84m
fleet-system       fleet-agent-77c78f9c74-f97tv                              1/1     Running   0          83m
kong-mesh-app      benigno-v1-fd4567d95-drnxq                                2/2     Running   0          10m
kong-mesh-app      benigno-v2-b977c867b-lpjpw                                2/2     Running   0          8m47s
kong-mesh-app      magnanimo-658b67fb9b-tzsjp                                2/2     Running   0          13m
kong-mesh-system   kuma-control-plane-5b9c6f4598-nvq8q                       1/1     Running   0          24m
kong               kong-kong-754cd6947-db2j9                                 2/2     Running   1          72s
kube-system        event-exporter-gke-666b7ffbf7-n9lfl                       2/2     Running   0          85m
kube-system        fluentbit-gke-xqsdv                                       2/2     Running   0          84m
kube-system        gke-metrics-agent-gjrqr                                   1/1     Running   0          84m
kube-system        konnectivity-agent-4c4hf                                  1/1     Running   0          84m
kube-system        kube-dns-66d6b7c877-tq877                                 4/4     Running   0          84m
kube-system        kube-dns-autoscaler-5c78d65cd9-5hcxs                      1/1     Running   0          84m
kube-system        kube-proxy-gke-c-kpwnf-default-0-be059c1c-49qp            1/1     Running   0          84m
kube-system        l7-default-backend-5b76b455d-v6dvg                        1/1     Running   0          85m
kube-system        metrics-server-v0.3.6-547dc87f5f-qntjf                    2/2     Running   0          84m
kube-system        prometheus-to-sd-fdf9j                                    1/1     Running   0          84m
kube-system        stackdriver-metadata-agent-cluster-level-68d94db6-64n4r   2/2     Running   1          84m


$ kubectl get service --all-namespaces
NAMESPACE          NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                                                AGE
default            kubernetes             ClusterIP      10.0.16.1     <none>          443/TCP                                                85m
kong-mesh-app      benigno                ClusterIP      10.0.20.52    <none>          5000/TCP                                               10m
kong-mesh-app      magnanimo              ClusterIP      10.0.30.251   <none>          4000/TCP                                               13m
kong-mesh-system   kuma-control-plane     ClusterIP      10.0.21.228   <none>          5681/TCP,5682/TCP,443/TCP,5676/TCP,5678/TCP,5653/UDP   24m
kong               kong-kong-proxy        LoadBalancer   10.0.26.38    35.222.91.194   80:31867/TCP,443:31039/TCP                             78s
kube-system        default-http-backend   NodePort       10.0.19.10    <none>          80:32296/TCP                                           85m
kube-system        kube-dns               ClusterIP      10.0.16.10    <none>          53/UDP,53/TCP                                          85m
kube-system        metrics-server         ClusterIP      10.0.20.174   <none>          443/TCP                                                85m

6. Create Ingress

With the following declaration, we expose the Magnanimo microservice through an Ingress on the route "/route1".

cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: route1
  namespace: kong-mesh-app
  annotations:
    konghq.com/strip-path: "true"
spec:
  rules:
  - http:
      paths:
      - path: /route1
        backend:
          serviceName: magnanimo
          servicePort: 4000
EOF

Now the temporary "port-forward" exposure mechanism can be replaced by a formal Ingress, and our loop can start consuming the Ingress instead.
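
A minimal sketch of such a loop, assuming the external address of the kong-kong-proxy LoadBalancer shown in the service listing above (replace it with the address of your own cluster):

while true; do curl http://35.222.91.194/route1/hw2; echo; done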

