Author | Hao Shuwei (Liusheng)

Cloud-native technology, represented by Kubernetes, not only shields the infrastructure differences between cloud vendors and data centers, but also allows applications to be described and deployed in a standardized way across different clouds. On this basis, we can manage Kubernetes clusters in any geographic location at low cost. This article mainly introduces how to achieve cluster management and security governance with a consistent experience for public cloud ACK clusters and self-built Kubernetes clusters in data centers.

ACK registered cluster security architecture

To achieve cluster management and security governance with a consistent experience for public cloud ACK clusters and self-built Kubernetes clusters in data centers, they must be unified under the same control plane. An ACK registered cluster allows a self-built Kubernetes cluster in any geographic location to connect to its endpoint over the public network, or over a private network where the on-cloud and off-cloud networks are interconnected, and thereby access the Alibaba Cloud Container Service management system. The following is a schematic diagram of the ACK registered cluster:

1.png

The ACK registered cluster architecture shown in the schematic mainly includes the following components:

  • ACK Container Service console.
  • ACK registered cluster Agent component: deployed as a Deployment in the self-built Kubernetes cluster (or another cloud vendor's container cluster). It receives the requests issued by the Stub component (from the ACK Container Service console or the registered cluster's kubeconfig), forwards them to the Kubernetes API Server of the target cluster, then receives the API Server's responses and sends them back to the Stub component (a rough sketch of such a Deployment follows this list).
  • ACK registered cluster Stub component: deployed on the Container Service control plane, with one Stub component per registered cluster. It forwards requests from the ACK Container Service console or the registered cluster's kubeconfig to the Agent component, receives the responses from the Agent, and finally returns them to the client.
  • Kubernetes API Server: the API Server of the target self-built Kubernetes cluster or other cloud vendor's container cluster.
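As a rough illustration of the Agent side, the following is a minimal, hypothetical sketch of what an Agent Deployment in the self-built cluster could look like. The real manifest is generated by the ACK console when the registered cluster is created; the names, namespace, image, and environment variable below are assumptions for illustration only.

# Hypothetical sketch only: the real agent manifest is downloaded from the
# registered cluster's connection information, not written by hand.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ack-cluster-agent              # assumed name
  namespace: kube-system               # assumed namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ack-cluster-agent
  template:
    metadata:
      labels:
        app: ack-cluster-agent
    spec:
      containers:
      - name: agent
        image: registry.example.com/ack-cluster-agent:latest   # placeholder image
        env:
        - name: STUB_ENDPOINT          # assumed variable: the Stub endpoint the Agent dials out to
          value: "https://<registered-cluster-endpoint>"
        volumeMounts:
        - name: agent-credentials      # the pre-generated token and certificate used for registration
          mountPath: /etc/agent/credentials
          readOnly: true
      volumes:
      - name: agent-credentials
        secret:
          secretName: ack-agent-credentials   # assumed Secret name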

One-way registration and two-way communication

As mentioned earlier, an ACK registered cluster can connect self-built Kubernetes clusters in any geographic location. A common characteristic of self-built clusters in data centers is that they usually sit in a restricted private network environment: the cluster can reach the public network, but cannot be reached from outside. To solve this problem, the Stub/Agent components are designed so that the Agent registers itself with the Stub in one direction. When the Agent connects to the Stub, it presents a pre-generated token and certificate for verification, and the entire communication link uses TLS to keep the data encrypted.

2.png
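Because the Agent dials out to the Stub rather than accepting inbound connections, the only network requirement on the data center side is outbound reachability to the registered cluster endpoint over TLS. A minimal connectivity check, using a placeholder endpoint taken from the registered cluster's connection information, might look like this:

# Placeholder endpoint: substitute the value from the registered cluster's connection information.
$ nc -zv <registered-cluster-endpoint> 443
# Or verify that a TLS handshake succeeds:
$ openssl s_client -connect <registered-cluster-endpoint>:443 </dev/null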

Non-"managed" secure access mechanism

Connecting a self-built Kubernetes cluster to the Alibaba Cloud Container Service management system through an ACK registered cluster raises the user's biggest security concern: control over access rights to their own cluster. The following points ensure that users retain full control over access to their own clusters.

  • The ACK control plane does not store any secrets of the user's own cluster. A self-built Kubernetes cluster has its own certificate system. If the ACK registered cluster accessed it using the self-built cluster's kubeconfig, the user's cluster access permissions would inevitably become uncontrollable. In fact, both security and the goal of a consistent control-plane experience require the ACK registered cluster to shield the difference between the control plane's certificate system and the self-built cluster's. The specific solution is that the control plane accesses the registered cluster's Stub component using certificates issued by ACK; after the Stub and Agent components complete request authentication, the request is forwarded to the target API Server as a layer-7 proxy using identity impersonation, and the target API Server completes the RBAC authentication and auditing of the request, as shown in the figure below (a short kubectl sketch of the impersonation semantics follows this list).

3.png

  • Control of cluster access rights converges on the Agent component. The Agent is deployed in the user's own cluster, so the ACK control plane's ability to access that cluster through the Stub/Agent link converges on the Agent side, ensuring that the user retains full control over access rights to their own cluster.
  • "Non-intrusive" deployment of the Agent component. The Agent is deployed in the self-built Kubernetes cluster as a Deployment and makes no other changes to the self-built cluster. The source code of the Agent component will be open sourced in the future.
  • Support for enabling security auditing. Users can enable the security audit function in the registered cluster, so that any operation on the cluster can be queried and audited.
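To make the identity impersonation point concrete, the sketch below shows the general Kubernetes mechanism involved: a proxy that impersonates a caller attaches standard impersonation headers, and the target API Server evaluates that impersonated identity against its own RBAC rules and records it in its audit log. The kubectl --as flag exercises the same semantics from the command line; the user and group names here are hypothetical.

# Hypothetical identities; the target API Server's RBAC decides what they may do.
$ kubectl get pods -n test --as=testuser01
$ kubectl get nodes --as=testuser02 --as-group=dev-team
# At the HTTP level, a proxy expresses the same thing with standard headers such as:
#   Impersonate-User: testuser01
#   Impersonate-Group: dev-team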

Cluster management with a consistent experience

Suppose user A has created an ACK cluster on the public cloud and a self-built Kubernetes cluster in a data center. How can these two Kubernetes clusters in different cloud environments be managed with a consistent experience? It is very simple: create an ACK registered cluster and connect the self-built cluster to it.

Create an ACK registered cluster

On the Create Registered Cluster page of the ACK Container Service console, we only need to select the region closest to the self-built Kubernetes cluster and configure the VPC network and security group. The registered cluster can be created in 3 minutes, as shown in the following figure.

4.png

On the cluster details page, the connection information includes cluster import agent configurations for both public network access and private network access to the self-built Kubernetes cluster, as shown in the following figure:

5.png

Connect the self-built Kubernetes cluster

Deploy the above cluster import agent configuration in the self-built Kubernetes cluster:

$ kubectl apply -f agent.yaml
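Before returning to the console, it can be worth confirming that the agent workload has come up. A simple check, assuming the agent's Deployment and Pod names contain "agent", might be:

# Locate the agent workload and check that its pods are Running.
$ kubectl get deployments --all-namespaces | grep -i agent
$ kubectl get pods --all-namespaces | grep -i agent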

After the Agent component is running normally, we can view the cluster list in the ACK Container Service console, as shown in the following figure. The cluster named ack is an ACK managed cluster with Kubernetes version 1.20.4-aliyun.1, and the cluster named idc-k8s is the ACK registered cluster connected to the self-built Kubernetes cluster, with Kubernetes version 1.19.4.

6.png

Use the registered cluster idc-k8s to manage the self-built Kubernetes cluster. The cluster overview and node list are shown in the figures below.

7.png

8.png

From here, users can use the ACK Container Service console to manage clusters on and off the cloud with a consistent experience, covering cluster management, node management, application management, and operations and maintenance.

Security governance with a consistent experience

Kubernetes clusters on different cloud platforms come with different security governance capabilities and different ways of configuring and managing security policies. This unevenness forces the operations team to be intimately familiar with each platform's security management mechanism when defining user roles and access permissions. Where management and security access control capabilities are insufficient, problems such as role permission violations and access management risks easily arise.

For example, when multiple projects use Kubernetes container clusters and those clusters belong to different cloud platforms, the administrator needs to be able to map every user and their activities to the corresponding container cluster, so as to know who did what and when. The administrator may have to assign different access levels to multiple accounts, and as more people join, leave, or change teams and projects, managing these users' permissions becomes increasingly complicated.

The ACK registered cluster provides consistent security governance capabilities for self-built Kubernetes clusters in the following aspects.

Use Alibaba Cloud's main and sub-account authentication system and Kubernetes RBAC authorization to manage cluster access control

Suppose the enterprise currently has two users with different job responsibilities: developer testuser01 and tester testuser02. The administrator can create sub-accounts testuser01 and testuser02 for them and, according to their responsibilities, assign the following permissions for the ack cluster and the idc-k8s cluster (a sketch of the equivalent RBAC follows the list):

  • Developer testuser01 is granted read and write permissions on all namespaces of the ack cluster, and read and write permissions on the test namespace of the idc-k8s cluster.
  • Tester testuser02 is only granted read and write permissions on the test namespace of the idc-k8s cluster.
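On the idc-k8s side, such console authorizations ultimately take effect as Kubernetes RBAC objects in the target cluster. The following is a minimal, hand-written sketch of the kind of Role and RoleBinding that "read and write on the test namespace" corresponds to; the actual objects generated by ACK authorization management will differ in names and granularity, and the subject name below is an assumed mapping of the sub-account to a Kubernetes user.

# Illustrative only: the ACK authorization wizard generates its own RBAC objects.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-namespace-read-write      # assumed name
  namespace: test
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: testuser02-test-rw             # assumed name
  namespace: test
subjects:
- kind: User
  name: "testuser02"                   # assumed user name mapped from the sub-account
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: test-namespace-read-write
  apiGroup: rbac.authorization.k8s.io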

Use the main account to authorize developer testuser01 and tester testuser02: in the authorization management of the ACK Container Service console, select the sub-accounts testuser01 and testuser02 and complete the authorization configuration shown in the following figure:

9.png

After completing the authorization of testuser01 and testuser02 following the wizard, log in to the Container Service console with the sub-account testuser01 and verify that testuser01 has read and write permissions on all namespaces of the ack cluster, but only read and write permissions on the test namespace of the idc-k8s cluster.

10.png

11.png

Log in to the Container Service console with the sub-account testuser02 and verify that testuser02 cannot see the ack cluster and only has read and write permissions on the test namespace of the idc-k8s cluster.

12.png

13.png

Cluster audit

In a Kubernetes cluster, the API Server audit log helps cluster administrators record or trace the daily operations of different users and is an important part of secure cluster operations. The cluster audit function can be enabled in the registered cluster to help users visually trace the daily operations of different users.
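For reference, auditing in Kubernetes is driven by an audit policy on the API Server, and registered clusters surface this capability through the console. The snippet below is a minimal illustrative policy in the standard audit.k8s.io format that records metadata for every request and full request bodies for writes in the test namespace; the rule choices are examples, not ACK's actual policy.

# Illustrative audit policy in the standard Kubernetes format.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Record full request/response bodies for write operations in the test namespace.
- level: RequestResponse
  namespaces: ["test"]
  verbs: ["create", "update", "patch", "delete"]
# Record only metadata (who, what, when) for everything else.
- level: Metadata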

The following is a log audit example of a self-built Kubernetes cluster.

14.png

Configuration inspection

The configuration inspection function scans the workload configurations in the cluster for security risks, provides inspection details and reports, and analyzes and interprets the results, helping users understand in real time whether the configurations of running applications carry security risks.
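Checks of this kind typically look at workload security settings such as privileged containers, privilege escalation, running as root, and missing resource limits. The snippet below is a generic example of the hardened settings such an inspection usually expects on a container spec; it illustrates the category of findings rather than ACK's exact rule set.

# Generic example of workload settings that configuration inspection tools commonly check.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                       # hypothetical workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: app
        image: registry.example.com/demo-app:1.0   # placeholder image
        securityContext:
          runAsNonRoot: true                 # do not run as root
          allowPrivilegeEscalation: false    # block privilege escalation
          privileged: false                  # no privileged containers
          readOnlyRootFilesystem: true       # immutable root filesystem
        resources:
          limits:                            # explicit resource limits
            cpu: "500m"
            memory: "256Mi"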

The following is an example of the inspection details of a self-built Kubernetes cluster.

15.png

16.png

17.png

Author profile

Hao Shuwei (Liusheng), Alibaba Cloud Container Service technical expert and core member of the cloud-native distributed cloud team, focuses on cloud-native technologies such as unified multi-cluster management and scheduling, hybrid clusters, and application delivery and migration.

Click the link below to watch the related video explanation:
https://www.bilibili.com/video/BV1WU4y1c7x7/

