Introduction

Calico has integrated an eBPF data plane. What eBPF is, and why Calico chose to introduce an eBPF data plane, is not the focus of this article; interested readers can refer to the related documentation.
Compared with Calico's default iptables data plane, the eBPF data plane delivers higher throughput and, crucially for this article, supports source IP preservation.
In K8s, services are usually exposed externally through a NodePort.
In a distributed K8s cluster, the node that the client connects to on the NodePort and the node hosting the backend service Pod that actually handles the request are usually not the same. To complete the data path between them, SNAT is inevitably introduced. This brings an obvious side effect: by the time the service Pod receives the packet, its source IP is no longer the client's actual IP (it has been masqueraded into the node's internal IP). On the other hand, for some business applications, obtaining the client IP is a hard requirement; for example, an application may need the client IP to derive geo information about where a customer logs in from.
At present, K8s mainly relies on externalTrafficPolicy to address this, but that scheme is not entirely satisfactory on its own. Calico solved the problem elegantly by integrating an eBPF data plane starting from v3.13.
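For comparison, the stock K8s approach is to set externalTrafficPolicy: Local on the Service. This preserves the client IP, but traffic is only accepted on nodes that actually host a backend Pod; requests arriving at other nodes are dropped. A minimal sketch, assuming the Service is the nginx Service created later in this article:

```bash
kubectl patch svc nginx -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```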
In this article, we will first deploy a K8s cluster with KubeKey, then switch Calico's data plane to eBPF, and finally run a simple demonstration on the new data plane.
Prerequisites
The eBPF data plane requires a relatively new Linux kernel; v4.18+ is generally sufficient. The author's test cluster meets this requirement.
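You can check the kernel version on each node with:

```bash
uname -r
```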
Deploy K8s cluster
KubeKey's default CNI plugin is Calico (IPIP mode). For convenience, a brand-new K8s cluster is deployed directly with KubeKey; the version used here is v1.18.6. For detailed KubeKey usage, please refer to its documentation.
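A minimal sketch of the KubeKey invocation, assuming the kk binary has already been downloaded to the working directory:

```bash
./kk create cluster --with-kubernetes v1.18.6
```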
Switch Calico data plane
Calico supports multiple data planes, and switching between them only requires a configuration change; details can be found in the official documentation. The switch mainly consists of the following steps:
- Confirm that the BPF file system has been mounted:
mount | grep "/sys/fs/bpf"
If you can see the following information, it means that the BPF file system has been mounted:
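A typical mount entry looks like this (the exact flags may vary by distribution); if nothing is printed, the file system can be mounted manually, as sketched below:

```bash
# typical output when bpffs is already mounted:
#   bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
# if it is missing, mount it manually:
sudo mount -t bpf bpf /sys/fs/bpf
```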
- Create the Calico configuration file:
- First, get the kube-apiserver endpoint information:
kubectl get endpoints kubernetes -o wide
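The output will look something like this (address and port are environment-specific):

```bash
NAME         ENDPOINTS          AGE
kubernetes   192.168.0.2:6443   5d
```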
- Since KubeKey installs Calico via manifests, here we only need to create a ConfigMap, filling in the host and port obtained in the previous step:
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubernetes-services-endpoint
  namespace: kube-system
data:
  KUBERNETES_SERVICE_HOST: "<API server host>"
  KUBERNETES_SERVICE_PORT: "<API server port>"
- Restart the Calico Pods and wait for them to become Running again:
kubectl delete pod -n kube-system -l k8s-app=calico-node
kubectl delete pod -n kube-system -l k8s-app=calico-kube-controllers
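You can watch them come back up with:

```bash
kubectl get pod -n kube-system -l k8s-app=calico-node -w
```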
- Disable kube-proxy (the patch below adds a nodeSelector that matches no node, so the DaemonSet stops scheduling its Pods):
kubectl patch ds -n kube-system kube-proxy -p '{"spec":{"template":{"spec":{"nodeSelector":{"non-calico": "true"}}}}}'
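After a short while, no kube-proxy Pods should remain:

```bash
kubectl get pod -n kube-system -l k8s-app=kube-proxy
# expected: No resources found in kube-system namespace.
```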
- Enable eBPF mode:
calicoctl patch felixconfiguration default --patch='{"spec": {"bpfEnabled": true}}'
- Since we need to preserve the client IP, we also enable DSR (Direct Server Return) mode:
calicoctl patch felixconfiguration default --patch='{"spec": {"bpfExternalServiceMode": "DSR"}}'
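Both settings can be verified on the FelixConfiguration resource:

```bash
calicoctl get felixconfiguration default -o yaml | grep -i bpf
# expected to include:
#   bpfEnabled: true
#   bpfExternalServiceMode: DSR
```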
At this point, Calico's entire network environment has been configured.
Demonstration
To verify that the backend can indeed obtain the client's real IP after Calico switches to the eBPF data plane, we will deploy an Nginx service in the cluster and expose it through a NodePort.
Create an Nginx instance and expose the external interface:
master:~$ kubectl apply -f - <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    nodePort: 30604
EOF
Wait for the Pod to change to the Running state:
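For example (the generated Pod name suffix will differ in your environment):

```bash
master:~$ kubectl get pod -l app=nginx
NAME                     READY   STATUS    RESTARTS   AGE
nginx-xxxxxxxxxx-xxxxx   1/1     Running   0          1m
```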
Call the Nginx service externally:
curl http://<external-ip>:30604
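A quick check is to look for Nginx's default welcome page in the response body:

```bash
curl -s http://<external-ip>:30604 | grep '<title>'
# <title>Welcome to nginx!</title>
```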
Query the Nginx log and view the client IP:
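A sketch of the check; kubectl logs deploy/nginx resolves a Pod behind the Deployment, and the log line below is Nginx's default access-log format with placeholders:

```bash
master:~$ kubectl logs deploy/nginx | tail -n 1
# with DSR enabled, the first field is the client's real IP, not a node's internal IP:
# <client-ip> - - [<timestamp>] "GET / HTTP/1.1" 200 612 "-" "curl/7.x" "-"
```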
Note: if the cluster itself is deployed on a cloud platform and the nodes sit inside a VPC network, you need to configure the corresponding port-forwarding rules and open the corresponding firewall ports.