Original link: https://isovalent.com/blog/post/2021-12-release-111
Author: The Isovalent team (Cilium's parent company)
Translator: Fan Bin, Di Weihua, Michelangelo. Note: This translation has been authorized by the original authors.
The Cilium project has become a rising star, and we are proud to be core contributors to it. A few days ago we released Cilium 1.11, which comes with many new features. It is an exciting release that also includes the much-anticipated Beta version of Cilium Service Mesh. In this article we will take a closer look at some of these new features.
Service Mesh (Beta)
Before diving into 1.11, let us first look at the new Service Mesh features announced by the Cilium community.
- eBPF-based Service Mesh (Beta): defines new service mesh capabilities, including L7 traffic management and load balancing, TLS termination, canary rollouts, tracing, and more.
- Kubernetes Ingress integration (Beta): supports Kubernetes Ingress by combining eBPF and Envoy.
An article on the Cilium website describes the Service Mesh Beta in detail, including how to participate in its development. Currently, these Beta features live in a separate branch of the Cilium project, so they can be tested, receive feedback, and be modified independently. We look forward to merging them into the main Cilium branch before the Cilium 1.12 release in early 2022.
Cilium 1.11
The Cilium 1.11 release includes additional new features, both for Kubernetes and for the standalone load balancer deployment:
- OpenTelemetry support: Hubble L3-L7 observability data can now be exported in the OpenTelemetry tracing and metrics formats. (more details)
- Kubernetes APIServer policy matching: a new policy entity makes it easy to model traffic to and from the Kubernetes API server. (more details)
- Topology-aware routing: enhanced load balancing that routes traffic to the closest endpoint based on topology, or keeps it within the same region. (more details)
- BGP announcements for Pod CIDRs: announce Pod CIDR IP routes to the network via BGP. (more details)
- Graceful termination for service backends: connections to terminating Pods that go through the load balancer are allowed to finish normally before being closed. (more details)
- Host firewall promoted to stable: the host firewall feature is now considered production-ready. (more details)
- Improved load balancer scalability: the Cilium load balancer now supports more than 64K backend endpoints. (more details)
- Improved load balancer device support: the XDP-accelerated fast path now supports bond devices (more details) and is more broadly usable in multi-device setups (more details).
- Kube-proxy replacement supports Istio: Cilium's kube-proxy replacement mode is now compatible with the Istio sidecar deployment mode. (more details)
- Egress gateway improvements: egress gateway capabilities have been enhanced to support additional datapath modes. (more details)
- Managed IPv4/IPv6 neighbor discovery: the Linux kernel and the Cilium load balancer have been extended; Cilium's internal ARP library has been removed, and next-hop discovery for both IPv4 and now also IPv6 nodes is delegated to the kernel. (more details)
- Route-based device detection: external network devices are now automatically detected based on routes, improving the experience of running Cilium in multi-device setups. (more details)
- Kubernetes cgroup enhancements: Cilium's kube-proxy replacement is integrated in cgroup v2 mode, and the Linux kernel has been enhanced for mixed cgroup v1/v2 environments. (more details)
- Cilium Endpoint Slices: a new CRD mode lets Cilium interact with the Kubernetes control plane more efficiently, no longer requires a dedicated etcd instance, and allows clusters to scale to 1000+ nodes. (more details)
- Mirantis Kubernetes Engine integration: support for the Mirantis Kubernetes Engine. (more details)
What is Cilium?
Cilium is open source software that transparently provides and secures network and API connectivity between services deployed on Kubernetes-based Linux container management platforms.
The foundation of Cilium is eBPF, a new Linux kernel technology that can dynamically inject powerful security, visibility, and network control logic into the Linux kernel. Cilium uses eBPF to provide multi-cluster routing, load balancing that replaces kube-proxy, transparent encryption, and network and service security. Beyond traditional network security, the flexibility of eBPF also enables security for application protocols and DNS requests/responses. Cilium is tightly integrated with Envoy and provides a Go-based extension framework. Because eBPF runs inside the Linux kernel, all Cilium functionality can be applied without any changes to application code or container configuration.
Please refer to [Introduction to Cilium] for a more detailed introduction to Cilium.
OpenTelemetry support
The new version adds support for OpenTelemetry.
OpenTelemetry is a CNCF project that defines telemetry protocols and data formats for distributed tracing, metrics, and logs. The project provides SDKs and a collector that runs on Kubernetes. Typically, applications expose OpenTelemetry data by instrumenting themselves directly, most commonly with an OpenTelemetry SDK inside the application. The OpenTelemetry collector gathers data from the applications in the cluster and sends it to one or more backends. The CNCF project Jaeger is one such backend and can be used to store and present tracing data.
The Hubble adapter for OpenTelemetry is an add-on component that can be deployed to a cluster running Cilium (ideally Cilium 1.11, although it should also work with older versions). The adapter is an OpenTelemetry collector with an embedded Hubble receiver; we recommend deploying it with the OpenTelemetry Operator (see the user guide). The Hubble adapter reads traffic data from Hubble and converts it into trace and log data.
Adding the Hubble adapter to a cluster that already uses OpenTelemetry for application-level telemetry provides valuable observability of network events. The current version correlates HTTP traffic with spans produced by the OpenTelemetry SDK.
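To make this concrete, below is a minimal, illustrative sketch of deploying the adapter through an OpenTelemetry Operator OpenTelemetryCollector resource. The receiver name, Hubble socket path, and Jaeger backend address are assumptions, and in practice the adapter's own collector image would also need to be referenced via spec.image (omitted here); follow the configuration from the user guide mentioned above.

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: hubble-adapter            # hypothetical name
  namespace: kube-system
spec:
  mode: daemonset
  config: |
    receivers:
      hubble:                                          # assumed receiver provided by the Hubble adapter
        endpoint: unix:///var/run/cilium/hubble.sock   # assumed Hubble socket path
    exporters:
      jaeger:                                          # any OpenTelemetry backend can be used
        endpoint: jaeger-collector.observability:14250 # hypothetical backend address
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [hubble]
          exporters: [jaeger]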
Topology-aware load balancing
It is increasingly common for Kubernetes clusters to be deployed across multiple data centers or availability zones. This brings high-availability benefits, but also some operational complexity.
So far, Kubernetes has not had a built-in construct to describe the topological location of Kubernetes service endpoints. This means that a node making a service load-balancing decision may select an endpoint in a different availability zone from the client making the request. This has several side effects: cloud bills may increase, because cloud providers typically charge extra for traffic crossing availability zones, and request latency may rise. More broadly, we need a way to define the topological locality of service endpoints: for example, service traffic should be load balanced between endpoints on the same node, in the same rack, in the same failure zone, in the same failure region, or with the same cloud provider.
Kubernetes v1.21 introduced topology-aware routing to address this limitation. By setting the service.kubernetes.io/topology-aware-hints annotation to auto, topology hints are set on the service's EndpointSlice objects, indicating the zone in which each endpoint runs. The zone name is taken from the node's topology.kubernetes.io/zone label; two nodes with the same zone label value are considered to be at the same topology level.
Cilium's kube-proxy replacement reads these hints and filters the endpoints it routes to according to the hints set by the EndpointSlice controller, so that the load balancer preferentially selects endpoints in the same zone.
This Kubernetes feature is currently in Alpha, so it must be enabled via a feature gate. For more information, please refer to the official documentation.
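For illustration, the annotation is set per Service; the service name, selector, and ports below are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-service                # hypothetical service name
  annotations:
    service.kubernetes.io/topology-aware-hints: auto
spec:
  selector:
    app: my-app                   # hypothetical label
  ports:
    - port: 80
      targetPort: 8080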
Kubernetes APIServer policy matching
In managed Kubernetes environments such as GKE, EKS, and AKS, the IP address of kube-apiserver is opaque. In previous versions of Cilium there was no formal way to write a Cilium network policy defining access control to kube-apiserver. Doing so required knowledge of implementation details, such as how Cilium allocates security identities and whether kube-apiserver is deployed inside or outside the cluster.
To solve this problem, Cilium 1.11 adds a new feature that gives users a dedicated policy object for defining access control on traffic to and from the apiserver. Underneath, this feature is an entity selector that understands the meaning of the reserved kube-apiserver label and automatically applies it to the IP addresses associated with kube-apiserver.
Security teams will be particularly interested in this new feature, because it provides an easy way to write Cilium network policies that allow or deny Pods access to kube-apiserver. The following CiliumNetworkPolicy snippet allows all Cilium endpoints in the kube-system namespace to access kube-apiserver, while all other Cilium endpoints are denied access:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-to-apiserver
  namespace: kube-system
spec:
  endpointSelector: {}
  egress:
    - toEntities:
        - kube-apiserver
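Conversely, the same entity can be used in the ingress direction, for example to only admit traffic from kube-apiserver into admission webhook endpoints. The sketch below is illustrative: the app: example-webhook label is a placeholder, and using the kube-apiserver entity in fromEntities is assumed to behave the same way as in toEntities.

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-from-apiserver      # hypothetical policy name
spec:
  endpointSelector:
    matchLabels:
      app: example-webhook        # hypothetical label
  ingress:
    - fromEntities:
        - kube-apiserver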
BGP announces Pod CIDR
With growing interest in running Kubernetes on-premises, we want to integrate well with existing data center network infrastructure, which typically distributes routes using the BGP protocol. In the previous release, the Cilium agent started to integrate BGP support and could announce service VIPs via BGP.
Now, Cilium 1.11 also introduces the ability to announce Kubernetes Pod subnets via BGP. Cilium can establish a BGP peering with any interconnected downstream BGP infrastructure and advertise the subnets from which Pod IP addresses are assigned. The downstream infrastructure can then distribute these routes as appropriate, so that the data center can route to the Pod subnets through various private/public next hops.
To start using this feature, the Kubernetes nodes running Cilium need a ConfigMap with the BGP settings:
apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: kube-system
data:
  config.yaml: |
    peers:
      - peer-address: 192.168.1.11
        peer-asn: 64512
        my-asn: 64512
In addition, Cilium needs to be installed with the following parameters:
$ cilium install \
--config="bgp-announce-pod-cidr=true"
Once Cilium is installed, it will advertise the Pod CIDR ranges to the BGP router, i.e. 192.168.1.11 in the example above.
A complete demonstration can be found in the video from a recent Cilium eCHO episode.
If you want to know more, for example how to configure LoadBalancer IP announcements for Kubernetes services, or how to advertise a node's Pod CIDR range via BGP, please refer to docs.cilium.io.
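For reference, LoadBalancer IP announcements are configured through the same bgp-config ConfigMap by adding an address pool; a sketch is shown below. The pool name and address range are placeholders, and it is assumed that announcing LoadBalancer IPs additionally requires enabling the corresponding bgp-announce-lb-ip option (see the documentation linked above).

apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: kube-system
data:
  config.yaml: |
    peers:
      - peer-address: 192.168.1.11
        peer-asn: 64512
        my-asn: 64512
    address-pools:
      - name: default             # placeholder pool name
        protocol: bgp
        addresses:
          - 172.20.0.0/24         # placeholder LoadBalancer IP range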
Managed IPv4/IPv6 neighbor discovery
When Cilium's eBPF kube-proxy replacement is enabled, Cilium performs neighbor discovery of cluster nodes to collect the L2 addresses of directly connected neighbors or next hops in the network. This is required for service load balancing, where the eXpress Data Path (XDP) fast path reliably handles high traffic rates of millions of packets per second. In this mode, on-demand dynamic resolution is technically not possible, because it would require waiting until the neighboring backend has been resolved.
In Cilium 1.10 and earlier, the cilium agent itself contained an ARP resolution library, whose controller triggered discovery and periodically refreshed newly added cluster nodes. The resolved neighbor entries were pushed into the kernel and refreshed as PERMANENT entries, which the eBPF load balancer used to forward traffic to the backends. This approach had several shortcomings: the agent's ARP resolution library lacked support for IPv6 neighbor resolution, and PERMANENT neighbor entries caused various problems. For example, entries could become stale, and the kernel refused to learn address updates because of their static nature, causing packets to be dropped between nodes. Moreover, tightly coupling neighbor resolution to the cilium agent had the drawback that no address updates could be learned while the agent was stopped or restarting.
In Cilium 1.11, neighbor discovery has been completely redesigned and Cilium's internal ARP resolution library has been removed from the agent entirely. The agent now relies on the Linux kernel to discover next hops or hosts in the same L2 domain, and Cilium supports both IPv4 and IPv6 neighbor discovery. For kernels v5.16 or newer, we have contributed the "managed" neighbor entry work that we proposed at the BPF & Networking Summit, co-located with this year's Linux Plumbers conference (part1, ...). In this case, the agent pushes down the L3 addresses of newly added cluster nodes and lets the kernel periodically and automatically resolve the corresponding L2 addresses.
These neighbor entries are pushed to the kernel via netlink with the "externally learned" and, where available, "managed" neighbor attributes. The former ensures that the entries are not evicted by the kernel's garbage collector under memory pressure, while with the latter the kernel automatically keeps the entries in the REACHABLE state where feasible. That means that even if the node's upper protocol stack does not actively send or receive traffic to a backend node, the kernel can relearn and keep the neighbor entry in the REACHABLE state by periodically triggering explicit neighbor resolution through an internal kernel work queue. For older kernels without the "managed" neighbor attribute, the agent controller periodically nudges the kernel to trigger new resolutions where necessary. Cilium therefore no longer uses PERMANENT neighbor entries, and on upgrade the agent automatically migrates old entries to dynamic neighbor entries so that the kernel can learn address updates for them.
In addition, in the case of multipath routing, the load balancing done by the agent can now take failed next hops into account during route lookups. This means that instead of replacing all routes, failed paths can be avoided by consulting the neighboring subsystem. Overall, this work significantly improves neighbor management in the Cilium agent, and the datapath reacts more promptly when the neighbor addresses of nodes or next hops in the network change.
XDP multi-device load balancer
Prior to this release, XDP-based load balancer acceleration could only be enabled on a single network device, operating in hairpin mode (XDP_TX), that is, packets leave on the same device on which they arrived. This initial limitation of the XDP-based kube-proxy replacement acceleration existed because driver support for multi-device forwarding via XDP_REDIRECT was limited, whereas XDP_TX is supported by every XDP-capable driver in the Linux kernel.
This meant that in environments with multiple network devices, we had to fall back to the tc eBPF mechanism, i.e. Cilium's regular kube-proxy replacement. A typical example of such an environment is a host with two network devices, one facing the public network to accept external requests to Kubernetes services, and another private-facing device used for intra-cluster communication between Kubernetes nodes.
Since most upstream drivers for 40G and 100G NICs in modern LTS Linux kernels support XDP_REDIRECT out of the box, this restriction could finally be lifted. This release therefore implements load balancing across multiple network devices at the XDP layer, both for Cilium's kube-proxy replacement and for Cilium's standalone load balancer, making it possible to retain packet-processing performance in more complex environments.
XDP transparently supports bond devices
In many on-premises or cloud environments, nodes often use bonded dual-port NICs for external traffic. With recent Cilium releases optimizing the kube-proxy replacement and the standalone load balancer, a question we frequently received from users was whether XDP acceleration can be combined with bonded network devices. While most 10/40/100 Gbit/s network drivers in the Linux kernel support XDP, the kernel lacked the ability to operate XDP transparently in bonding (and 802.3ad) mode.
One option would have been to implement 802.3ad in user space and bond load balancing in the XDP program itself, but this is a rather tedious undertaking for bond device management, e.g. observing netlink link events, and it additionally requires separate programs for the native and bond cases. Instead, a native kernel implementation solves these problems, offers more flexibility, and handles eBPF programs without the need to change or recompile them. The kernel takes care of managing the bond device group and can automatically propagate eBPF programs. For kernels v5.15 or newer, we have implemented XDP support for bond devices (part1, part2).
When an XDP program is attached to a bond device, the semantics of XDP_TX are equivalent to those of a tc eBPF program attached to the bond device: the bond's configured transmit method is used to select the slave device when transmitting packets from the bond device. Both failover and link-aggregation modes can be used under XDP operation. For XDP_TX we have implemented round-robin, active-backup, 802.3ad, and hash-based device selection. This case is particularly relevant for hairpinning load balancers such as Cilium.
Route-based device detection
Version 1.11 significantly improves automatic device detection, which is used by the eBPF kube-proxy replacement, the bandwidth manager, and the host firewall.
In earlier versions, Cilium could only automatically detect devices that had a default route and devices that carried the Kubernetes NodeIP. Going forward, device detection is now based on all entries in the host namespace's routing tables. In other words, all non-bridged, non-bonded, and non-virtual devices with a global unicast route can now be detected.
With this improvement, Cilium should now be able to automatically detect the right devices in more complex network setups, without requiring devices to be specified manually via the devices option. Previously, when using that option, device names also had to follow a naming convention, e.g. devices could only be specified by a common-prefix regular expression.
Graceful termination of service backend traffic
Kubernetes can terminate Pods for a variety of reasons, such as rolling updates, scale-down, or user-initiated deletion. In such cases, it is important to gracefully terminate active connections to the Pod so that the application has time to complete in-flight requests and interruption is minimized. Abnormal connection termination can cause data loss or delay application recovery.
The Cilium agent monitors service endpoint updates through the EndpointSlice API. When a service endpoint is being terminated, Kubernetes marks the endpoint with the terminating state. The Cilium agent then removes the endpoint's datapath state so that the endpoint is no longer selected for new requests, while connections currently served by the endpoint can still complete within a user-defined grace period.
At the same time, Kubernetes tells the container runtime to send the SIGTERM signal to the Pod's containers and waits for the termination grace period. The container application can then gracefully terminate active connections, for example by closing TCP sockets. Once the grace period expires, Kubernetes forcibly shuts down any processes still running in the Pod's containers via SIGKILL. At that point the agent also receives the delete event for the endpoint and completely removes its datapath state. However, if the application Pod exits before the grace period ends, Kubernetes sends the delete event immediately, regardless of the grace period setting.
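A minimal sketch of how this behavior might be enabled via the cilium-config ConfigMap follows. The key name enable-k8s-terminating-endpoint is an assumption based on the feature name, and the Kubernetes EndpointSliceTerminatingCondition feature gate is assumed to be required on the cluster as well; verify both against the guide referenced below.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  enable-k8s-terminating-endpoint: "true"   # assumed key name; see the docs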
For more details, please follow the guide in docs.cilium.io
Egress gateway improvements
In simple scenarios, Kubernetes applications only communicate with other Kubernetes applications, so traffic can be controlled through mechanisms such as network policies. But the real world is not always like that: for example, some privately deployed applications are not containerized, and Kubernetes applications need to communicate with services outside the cluster. These legacy services are usually configured with static IPs and protected by firewall rules. How should traffic be controlled and audited in such a case?
The egress IP gateway feature was introduced in Cilium 1.10 to address this class of problems: a Kubernetes node acts as a gateway for cluster egress traffic, and users specify via policies which traffic should be forwarded to the gateway node and how. The gateway node then masquerades the traffic with a static egress IP, so that rules can be established on the legacy firewall.
apiVersion: cilium.io/v2alpha1
kind: CiliumEgressNATPolicy
metadata:
  name: egress-sample
spec:
  egress:
    - podSelector:
        matchLabels:
          app: test-app
  destinationCIDRs:
    - 1.2.3.0/24
  egressSourceIP: 20.0.0.1
In the example policy above, traffic from Pods with the label app: test-app destined for the CIDR 1.2.3.0/24 is routed through the gateway node, which SNATs it to the egress IP 20.0.0.1 before it leaves the cluster to communicate with the external service.
During the Cilium 1.11 development cycle, significant effort went into stabilizing the egress gateway feature so that it is ready for production. The egress gateway now also works in direct-routing mode, distinguishes internal traffic (i.e. egress policies whose destination CIDRs overlap cluster-internal addresses), and supports using the same egress IP in different policies. Several issues, such as reply traffic being incorrectly classified as egress traffic, have been fixed, and test coverage has been improved to catch potential problems earlier.
Kubernetes Cgroup enhancements
One of the advantages of Cilium's eBPF kube-proxy replacement and standalone load balancer is the ability to attach eBPF programs to socket hooks, such as the connect(2), bind(2), sendmsg(2), and other related system calls, so that local applications are transparently connected to backend services. However, such programs can only be attached to cgroup v2 hierarchies. While Kubernetes is working on migrating to cgroup v2, most users today run environments where cgroup v1 and v2 are mixed.
Linux records a socket's cgroup membership in the kernel's socket object, and due to a design decision made six years ago, the cgroup v1 and v2 socket labels were mutually exclusive. That means that if a socket was created with a cgroup v2 membership but was later tagged by a cgroup v1 net_prio or net_cls controller, the eBPF programs attached to the Pod's cgroup v2 sub-path were not executed; instead the kernel fell back to executing the eBPF programs attached to the root of the cgroup v2 hierarchy. This can have severe consequences: if no program is attached at the cgroup v2 root, the entire cgroup v2 hierarchy is effectively bypassed.
Today, the assumption that cgroup v1 and v2 cannot run in parallel no longer holds; see this year's Linux Plumbers conference talk for details. Only in rare cases, when a cgroup v1 network controller tags sockets, would eBPF programs attached to a subtree of the cgroup v2 hierarchy be bypassed in a Kubernetes cluster. To solve this problem as early as possible in the packet processing path, the Cilium team recently fixed the Linux kernel so that the two cgroup versions can safely operate side by side in all scenarios (part1, part2). This fix not only makes Cilium's cgroup operations fully robust and reliable, but also benefits all other eBPF cgroup users in Kubernetes.
In addition, Kubernetes and container runtimes such as Docker have recently begun announcing support for cgroup v2. In cgroup v2 mode, Docker switches to private cgroup namespaces by default, i.e. every container (including Cilium) runs in its own private cgroup namespace. Cilium ensures that its eBPF programs are attached to the correct socket hooks in the cgroup hierarchy, so that Cilium's socket-based load balancing keeps working correctly in cgroup v2 environments.
Improved load balancer scalability
Main external contributor: Weilong Cui (Google)
Recent testing has shown that the service load balancer was limited in very large Kubernetes environments running Cilium with more than 64,000 endpoints. Two limiting factors were at play:
- The local backend ID allocator of Cilium's eBPF kube-proxy replacement / standalone load balancer was still restricted to a 16-bit ID space.
- The key types of Cilium's eBPF datapath backend maps for IPv4 and IPv6 were restricted to a 16-bit ID space as well.
To allow Kubernetes clusters to scale beyond 64,000 endpoints, Cilium's ID allocator and the related datapath structures have been converted to use a 32-bit ID space.
Cilium Endpoint Slices
Main external contributors: Weilong Cui (Google), Gobinath Krishnamoorthy (Google)
In version 1.11, Cilium adds support for a new operating mode that greatly improves Cilium's scalability through a more efficient way of broadcasting Pod information.
Previously, Cilium broadcast Pods' IP addresses and security identities by having every agent watch CiliumEndpoint (CEP) objects, which poses certain scalability challenges. Each creation/update/deletion of a CEP object triggers a multicast of watch events whose fan-out scales linearly with the number of cilium-agents in the cluster, and each of those agents can trigger such fan-out. With N nodes in the cluster, the total watch events and traffic can grow at a rate of up to N^2.
Cilium 1.11 introduces a new CRD, CiliumEndpointSlice (CES): CEPs in the same namespace are batched by the operator into CES objects. In this mode, cilium-agents no longer watch CEPs but CES objects, which greatly reduces the watch events and traffic that need to be broadcast via kube-apiserver, easing the load on kube-apiserver and improving Cilium's scalability.
Since CES greatly reduces the pressure on kube-apiserver, Cilium no longer depends on a dedicated etcd instance (KVStore mode). For clusters whose Pod count changes drastically, we still recommend using the KVStore to offload the processing work from kube-apiserver to an etcd instance.
This mode trades off "faster propagation of endpoint information" against "a more scalable control plane". Note that compared with CEP mode, at large scale and under drastic Pod churn (such as large scale-ups or scale-downs), endpoint information may propagate with higher latency, affecting remote nodes.
GKE was the first to adopt CES. We ran a series of "worst-case" scale tests on GKE and found that Cilium scales much better in CES mode than in CEP mode. In a 1000-node scale load test, enabling CES reduced the peak watch event rate from 18k/s with CEP to 8k/s with CES, and peak watch traffic from 36.6 Mbps with CEP to 18.1 Mbps with CES. In terms of controller node resource usage, peak CPU usage dropped from 28 cores/sec to 10.5 cores/sec.
For details, please refer to the official Cilium documentation.
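If you want to try this mode, it can be enabled in the Cilium configuration; the sketch below uses the cilium-config ConfigMap, and the key name enable-cilium-endpoint-slice is an assumption based on the feature name, so please verify it against the documentation referenced above.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  enable-cilium-endpoint-slice: "true"   # assumed key name; see the docs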
Kube-Proxy-Replacement supports Istio
Many users run Cilium's eBPF-based kube-proxy replacement and enjoy its efficient datapath-based processing of Kubernetes services, which avoids the linear growth of kube-proxy's iptables chains as the cluster scales.
Architecturally, eBPF-based load balancing of Kubernetes services is split into two parts:
- handling service traffic entering the cluster from outside (north-south)
- handling service traffic originating within the cluster (east-west)
Thanks to eBPF, Cilium processes each packet in the north-south direction as close to the driver layer as possible (for example via XDP). East-west traffic is processed as close to the application layer as possible: service requests (for example TCP connect(2)) are "connected" directly from the service virtual IP to one of the backend IPs at the socket layer, avoiding per-packet NAT translation costs.
This approach works for most scenarios, with some exceptions such as popular service mesh solutions (Istio and others). Istio relies on iptables rules inserted into the Pod's network namespace to redirect outgoing Pod traffic to the sidecar proxy (e.g. Envoy) before it leaves the Pod and enters the host namespace; the proxy then uses SO_ORIGINAL_DST to query the netfilter connection tracker directly from the socket and recover the original service destination address.
Therefore, in service mesh scenarios such as Istio, Cilium changes the handling of Pod-to-Pod (east-west) traffic to eBPF-based per-packet DNAT, while applications in the host namespace can continue to use the socket-based load balancer and avoid per-packet NAT translation costs.
To enable this mode, set bpf-lb-sock-hostns-only: true in the Helm chart of the new Cilium agent version. For detailed steps, please refer to the official Cilium documentation.
Feature enhancement and deprecation
The following features have been further enhanced:
- The host firewall has been promoted from beta to stable. The host firewall protects the host network namespace by allowing CiliumClusterwideNetworkPolicies to select nodes. Since introducing the host firewall feature, we have greatly increased test coverage and fixed a number of bugs. We have also received feedback from community users who are happy with the feature and ready to use it in production.
The following features have been deprecated:
- Consul could previously be used as Cilium's KVStore backend but is now deprecated; the better-tested etcd and Kubernetes backends are recommended instead. Cilium developers previously used Consul mainly for local end-to-end testing, but in recent development cycles it became possible to use Kubernetes directly as the backend for testing, so Consul can be retired.
- IPVLAN was previously offered as an alternative to veth for cross-node Pod network communication. Driven by the Cilium community, major improvements to the Linux kernel mean that veth now matches IPVLAN in performance; for details, see this article: eBPF host routing.
- Policy tracing was used by many Cilium users in earlier releases and could be run inside Pods with the command-line tool cilium policy trace. Over time, however, it has not kept pace with the capabilities of Cilium's policy engine. Cilium now provides better tools for tracing policy decisions, such as the network policy editor and Policy Verdicts.