"K8S Ecology Weekly" mainly covers noteworthy news from the Kubernetes ecosystem that I came across during the week. Welcome to subscribe to the "k8s ecology" column.

KIND v0.11.0 is officially released

Readers who follow me should be familiar with KIND (Kubernetes In Docker). It is a project that I participate in and use heavily: it runs Docker containers as Kubernetes nodes, making it very convenient to quickly start one or more test clusters. It has been 4 months since the last release, so let's take a look at the notable changes in this version!

Breaking changes

  • The default Kubernetes version in this release is v1.21.1;
  • Building node images with Bazel has been removed, and the --type parameter of kind build node-image is deprecated;
  • The --kube-root parameter of kind build node-image is deprecated; the Kubernetes source directory is now located in the standard way;

New features

  • kind build node-image adds a new --arch parameter, which supports building multi-architecture images;
  • The pre-built images released for KIND are now all multi-arch and can run on both amd64 and arm64;
  • KIND can now run with rootless Docker and rootless Podman. For detailed instructions, please refer to "KIND runs in rootless mode";
  • KIND's default CNI, kindnetd, now supports dual-stack networking, and it is enabled by default when using Kubernetes v1.21 (see the sketch after this list);
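
As a small illustration of the dual-stack feature, here is a minimal sketch of a KIND cluster configuration; it assumes the ipFamily networking option available in kind v0.11.0 and should be adapted to your environment:

# kind-dual-stack.yaml: minimal sketch of a dual-stack cluster configuration
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # enable both IPv4 and IPv6 for Pods and Services
  ipFamily: dual

A cluster can then be created from this file with kind create cluster --config kind-dual-stack.yaml.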

You can install the latest version of KIND in any of the following ways:

  • GO111MODULE="on" go get sigs.k8s.io/kind@v0.11.0;
  • wget -O kind https://kind.sigs.k8s.io/dl/v0.11.0/kind-linux-amd64;
  • Clone the KIND code repository and run make build;

For more information about how to use KIND, please refer to the official documentation: https://kind.sigs.k8s.io/. Welcome to download and use it.

apisix-ingress-controller v0.6.0 released

Apache APISIX Ingress Controller is the control-plane component of Apache APISIX. It can publish its custom resources (CRs) and native Kubernetes Ingress resources to APISIX, which then acts as an ingress gateway to manage north-south traffic. Let's take a look at the notable changes in the v0.6.0 release:

  • #115 supports TCP proxying;
  • #242 adds a label to the resources that have been pushed by the ingress controller;
  • Added jsonschema validation for ApisixUpstream and ApisixTls (see the hedged sketch after this list);
  • #394 records Kubernetes events during resource processing;
  • #395 supports reporting resource status;
  • #402 adds global_rules configuration for cluster-level plugins;
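
To give a sense of the resources these schema checks apply to, below is a minimal ApisixTls sketch; the apiVersion and field names are assumptions based on the project's documentation and may differ for your installed CRD version:

# Hedged sketch: an ApisixTls object referencing a Kubernetes TLS secret
apiVersion: apisix.apache.org/v1
kind: ApisixTls
metadata:
  name: sample-tls
spec:
  hosts:
    - httpbin.example.com   # hostnames the certificate covers (example value)
  secret:
    name: httpbin-cert      # TLS secret holding the certificate and key
    namespace: default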

Cilium v1.10.0 is officially released

I have introduced Cilium many times in previous articles. It is based on eBPF technology and can transparently provide networking and security for the network and API connections between application services in Kubernetes. If you want to get started with Cilium quickly, you can refer to my earlier "Cilium Quick Start"; if you want to quickly understand eBPF, you can also check out my talk at PyCon China 2020.

Cilium v1.10 is a fairly large feature release that brings many noteworthy features. Let's take a look!

Egress IP Gateway

While most networking components focus on serving as ingress gateways, Cilium found that when integrating cloud-native applications with traditional applications, the traditional side is usually authorized through IP allowlists. Combined with the dynamic nature of Pod IPs, managing the authorized addresses becomes a pain point.

In the new version, through a new Kubernetes CRD, Cilium can associate a static IP with traffic as packets leave the Kubernetes cluster, so an external firewall can identify Pod traffic by this consistent static IP.

Under the hood Cilium performs NAT for this traffic, and it is very simple to use:

apiVersion: cilium.io/v2alpha1
kind: CiliumEgressNATPolicy
metadata:
  name: egress-sample
spec:
  egress:
  - podSelector:
      matchLabels:
        # The following label selects default namespace
        io.kubernetes.pod.namespace: default
  destinationCIDRs:
  - 192.168.33.13/32
  egressSourceIP: "192.168.33.100"

The above configuration means that egress traffic from Pods in the default namespace destined for 192.168.33.13/32 leaves the cluster with its source IP rewritten to egressSourceIP (192.168.33.100).

Integrated BGP support

Lack of BGP support may be one of the reasons some users have passed on Cilium, but starting with this release there is no need to worry!

Cilium achieves BGP (L3 protocol) support by integrating MetalLB, so it can allocate IPs for LoadBalancer-type Services and advertise them to routers via BGP, allowing external traffic to reach those Services normally.

The way to configure BGP support is also very simple:

apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: kube-system
data:
  config.yaml: |
    peers:
    - peer-address: 10.0.0.1
      peer-asn: 64512
      my-asn: 64512
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.0.2.0/24

Here, peers defines the existing BGP routers in the network to peer with, and address-pools is the pool of IPs that Cilium allocates to LoadBalancer Services.
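
To tie the ConfigMap above to the agent, BGP announcement also has to be enabled when installing Cilium. Below is a minimal Helm values sketch; the bgp.* value names are my assumption based on the Cilium 1.10 Helm chart and should be verified against the official documentation:

# values.yaml (sketch): enable the MetalLB-based BGP integration
bgp:
  enabled: true            # assumed Helm value: load the bgp-config ConfigMap
  announce:
    loadbalancerIP: true   # assumed Helm value: announce LoadBalancer IPs via BGP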

Independent load balancing based on XDP

Cilium's eBPF-based load balancer recently added support for Maglev consistent hashing and forwarding-plane acceleration at the eXpress Data Path (XDP) layer. These features allow it to be used as a standalone layer-4 load balancer as well.

Cilium's XDP L4LB has full IPv4/IPv6 dual-stack support and can be deployed independently of a Kubernetes cluster as a programmable L4 load balancer.
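
For reference, inside a Kubernetes deployment these load-balancing features are typically switched on through Helm values. The sketch below uses value names I believe the Cilium 1.10 chart exposes (loadBalancer.algorithm and loadBalancer.acceleration); treat them as assumptions and check the release documentation:

# values.yaml (sketch): eBPF load balancing with Maglev hashing and XDP acceleration
loadBalancer:
  algorithm: maglev        # assumed value: Maglev consistent hashing for backend selection
  acceleration: native     # assumed value: attach the forwarding plane at the XDP layer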

Other

In addition, this release adds WireGuard support for encrypting traffic between Pods, introduces a new Cilium CLI for managing Cilium clusters, and delivers better performance than ever! A hedged sketch of enabling WireGuard follows.
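
Here is a minimal sketch of turning on WireGuard encryption via Helm values, assuming the encryption.* options described for Cilium 1.10; the names are assumptions to verify against the official documentation:

# values.yaml (sketch): encrypt Pod-to-Pod traffic with WireGuard
encryption:
  enabled: true            # assumed Helm value
  type: wireguard          # assumed Helm value: use WireGuard instead of IPsec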

For more information about the changes in the Cilium project, please refer to its ReleaseNote.

Upstream progress

  • runc released v1.0-rc95, which is probably the last release candidate before v1.0;
  • The CNCF networking team has defined a Service Mesh Performance specification, which provides a unified standard for measuring the performance of service meshes;

Welcome to subscribe to my WeChat official account 【MoeLove】


