LF Edge eKuiper is a lightweight IoT data analysis and stream processing software, usually running on the edge. It provides a management dashboard to manage one or more eKuiper instances. Generally, dashboards are deployed in cloud nodes to manage eKuiper instances across multiple edge nodes.
In most cases, edge nodes are physically inaccessible from cloud nodes due to security or other considerations. This makes deployment difficult, and cloud-to-edge management is impossible. OpenYurt changed this situation. OpenYurt is built on native Kubernetes and can be extended to seamlessly support edge computing. In short, OpenYurt enables users to manage applications running in edge infrastructure as if they were running in cloud infrastructure.
Starting from v0.4.0, OpenYurt officially supports the deployment and management of eKuiper. In this tutorial, we will explain how to deploy eKuiper and its dashboard in an OpenYurt cluster, and use the yurt tunnel to achieve management from the cloud to the edge. To simulate the real scenario where cloud nodes and edge nodes may be located in different network areas, we use a two-node Kubernetes cluster. The eKuiper instance will be deployed to the edge node, and the dashboard will be deployed to the cloud node.
Prerequisites
In this tutorial, both the cloud node and the edge node must have Kubernetes and its dependencies installed. In the cloud node, tools such as OpenYurt and helm are needed to deploy eKuiper.
Make sure that the cloud node has an external IP so that the edge node can access it. Also make sure that the edge node is internal, so that the cloud node cannot access it directly.
Cloud node installation work
First, install kubeadm and its dependencies, such as the docker engine. For details, see the official kubeadm installation documentation. Note that OpenYurt does not support Kubernetes versions higher than 1.20, so please install version 1.20.x or below. For Debian-like systems, use the following command to install:
sudo apt-get install -y kubelet=1.20.8-00 kubeadm=1.20.8-00 kubectl=1.20.8-00
Next, install Golang and then build OpenYurt from source.
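Building OpenYurt can be sketched as follows, assuming the upstream repository at github.com/openyurtio/openyurt and its default make target; the binaries, including the yurtctl used later in this tutorial, land under _output/bin:

```shell
# clone the OpenYurt repository and check out the release used in this tutorial
git clone https://github.com/openyurtio/openyurt.git
cd openyurt
git checkout v0.4.0

# build the project; yurtctl and the other binaries land in _output/bin
make build
```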
Finally, install helm because we will deploy eKuiper through the helm chart.
In this tutorial, the hostname of the cloud node is cloud-node. You can modify your hostname to match this name, or you must replace cloud-node in this tutorial with your cloud node's hostname.
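On systemd-based Linux (such as the Ubuntu images shown in the node listing later), changing the hostname can be sketched as follows; hostnamectl is the standard tool, though your distribution may differ:

```shell
# set the machine's hostname to match the tutorial's naming
sudo hostnamectl set-hostname cloud-node

# confirm the change
hostname
```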
Edge node installation work
Install kubeadm and its dependencies in the edge node.
In this tutorial, the hostname of the edge node is edge-node. You can modify your hostname to match this name, or you must replace edge-node in this tutorial with your edge node's hostname.
Set up a Kubernetes cluster
We will set up the Kubernetes cluster with kubeadm in the cloud node and let the edge node join the cluster.
Assume that the external IP of your cloud node is 34.209.219.149. In the cloud node, enter the following command; you will get a result similar to the output below.
# sudo kubeadm init --control-plane-endpoint 34.209.219.149 --kubernetes-version stable-1.20
[init] Using Kubernetes version: v1.20.8
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root:
kubeadm join 34.209.219.149:6443 --token i24p5i.nz1feykoggszwxpq \
--discovery-token-ca-cert-hash sha256:3aacafdd44d1136808271ad4aafa34e5e9e3553f3b6f21f972d29b8093554325 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 34.209.219.149:6443 --token i24p5i.nz1feykoggszwxpq \
--discovery-token-ca-cert-hash sha256:3aacafdd44d1136808271ad4aafa34e5e9e3553f3b6f21f972d29b8093554325
Through this command, we specify the external IP as the control plane endpoint so that the edge nodes can access it, and specify the Kubernetes version as 1.20, the latest version supported by OpenYurt.
Follow the instructions in the output to set up kubeconfig, and copy the kubeadm join command to be used in the edge node.
At the edge node, run the copied command:
sudo kubeadm join 34.209.219.149:6443 --token i24p5i.nz1feykoggszwxpq \
--discovery-token-ca-cert-hash sha256:3aacafdd44d1136808271ad4aafa34e5e9e3553f3b6f21f972d29b8093554325
If everything goes well, go back to the cloud node and enter the following command to get the k8s node list, make sure you can get 2 nodes:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
cloud-node NotReady control-plane,master 17m v1.20.8 172.31.6.118 <none> Ubuntu 20.04.2 LTS 5.4.0-1045-aws docker://20.10.7
edge-node NotReady <none> 17s v1.20.8 192.168.2.143 <none> Ubuntu 20.04.2 LTS 5.4.0-77-generic docker://20.10.7
If the node status is 'NotReady', the container network may not be configured. We can install a Kubernetes network plug-in as described here. For example, to install the Weave Net plug-in:
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 |tr -d '\n')"
After a few minutes, run kubectl get nodes -o wide again, and the nodes should be ready.
So far, we have created a k8s cluster with two nodes: cloud-node and edge-node.
Make cloud nodes accessible
In the kubectl get nodes -o wide output, if the internal IP of cloud-node is not an accessible external IP, we need to make it accessible. You can specify an external IP for the node. However, in most cloud platforms such as AWS, the machine does not have an external IP interface; we need to add iptables rules to translate access to the internal IP into access to the external IP. Assuming that the internal IP of the cloud node is 172.31.0.236, add an iptables rule in the cloud node:
$ sudo iptables -t nat -A OUTPUT -d 172.31.0.236 -j DNAT --to-destination 34.209.219.149
Add the same iptables rule in the edge node:
$ sudo iptables -t nat -A OUTPUT -d 172.31.0.236 -j DNAT --to-destination 34.209.219.149
By running ping 172.31.0.236 in the edge node, verify that 172.31.0.236 can now be accessed.
Deploy eKuiper instances to the edge
eKuiper, as edge stream processing software, is usually deployed at the edge. We will use the eKuiper helm chart to speed up the deployment.
$ git clone https://github.com/lf-edge/ekuiper
$ cd ekuiper/deploy/chart/Kuiper
In order to deploy eKuiper to edge-node, we will modify the template file in the helm chart. Edit template/StatefulSet.yaml to add nodeName and hostNetwork as shown below, where edge-node is the hostname of the edge node. If your hostname is different, change it to match your edge node's hostname.
...
spec:
nodeName: edge-node
hostNetwork: true
volumes:
{{- if not .Values.persistence.enabled }}
...
Save the changes and deploy eKuiper via the helm command:
$ helm install ekuiper .
Two new services will be running:
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ekuiper ClusterIP 10.99.57.211 <none> 9081/TCP,20498/TCP 22h
ekuiper-headless ClusterIP None <none> <none> 22h
By checking the pods, ekuiper should be running in edge-node:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ekuiper-0 1/1 Running 0 22h 10.244.1.3 edge-node <none> <none>
The ekuiper REST service runs in the cluster on port 9081. We can check the service connection by typing the following command in the edge node, where 192.168.2.143 is the intranet IP of the edge node.
$ curl http://192.168.2.143:9081
{"version":"1.2.0","os":"linux","upTimeSeconds":81317}
Deploy eKuiper dashboard to the cloud
We will use kmanager.yaml and the kubectl tool to deploy the ekuiper dashboard in the cloud node. eKuiper manager is a web-based user interface. In the configuration file, we define deployment and services for eKuiper manager.
First, we need to make sure that the dashboard version used in the file matches the eKuiper version. Open kmanager.yaml and modify line 21 to ensure that the image version is correct:
...
containers:
- name: kmanager
image: emqx/kuiper-manager:1.2.1
...
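One way to avoid a version mismatch is to read the version back from the running eKuiper instance's REST response, as returned by the curl check earlier. A minimal sketch using that sample response; the emqx/kuiper-manager tag naming is an assumption here, so check the available image tags before using the result:

```shell
# sample REST response from the edge eKuiper instance (copied from the curl output earlier)
resp='{"version":"1.2.0","os":"linux","upTimeSeconds":81317}'

# extract the "version" field with sed (jq would also work, if available)
version=$(echo "$resp" | sed -n 's/.*"version":"\([^"]*\)".*/\1/p')

# suggest a matching dashboard image tag (naming scheme is an assumption)
echo "emqx/kuiper-manager:${version}"
```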
Then, run the kubectl command:
$ kubectl apply -f kmanager.yaml
Run kubectl get svc and you will get the following results:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ekuiper ClusterIP 10.99.57.211 <none> 9081/TCP,20498/TCP 120m
ekuiper-headless ClusterIP None <none> <none> 120m
kmanager-http NodePort 10.99.154.153 <none> 9082:32555/TCP 15s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 33h
The dashboard runs in the cloud node at port 32555. Therefore, use the URL http://34.209.219.149:32555 in the browser to open the dashboard. Log in with the default username and password: admin/public.
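Note that the NodePort (32555 here) is assigned by Kubernetes and may differ in your cluster, so it is safer to extract it from the kubectl get svc output than to hard-code it. A small sketch using the sample kmanager-http line above:

```shell
# sample line from the `kubectl get svc` output above
line='kmanager-http   NodePort    10.99.154.153   <none>        9082:32555/TCP       15s'

# pull the node port (the number between ':' and '/TCP' in the PORT(S) column)
node_port=$(echo "$line" | sed -n 's/.*:\([0-9][0-9]*\)\/TCP.*/\1/p')

# build the dashboard URL from the cloud node's external IP
echo "http://34.209.219.149:${node_port}"
```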
Our goal is to manage eKuiper instances at edge nodes. Therefore, we will add the eKuiper instance deployed to the edge node in the previous section as a service in the dashboard.
1. Click Add Service and fill in the form below.
2. After the service is created, click the service name ekuiper and switch to the system page. The connection should fail, and we will get an error message about the connection. That's because http://192.168.2.143:9081/ is the intranet address of the edge eKuiper service and cannot be accessed directly from the cloud.
In the next section, we will set up the yurt tunnel and let the dashboard manage the eKuiper instance on the edge.
Set up yurt tunnel
We will use OpenYurt to set up the tunnel as a communication channel between the cloud and edge nodes. Because we need to connect to port 9081 on the edge, we have to set up port mapping in the yurt tunnel.
In the cloud node, open the openyurt/config/setup/yurt-tunnel-server.yaml file, edit line 31 of the configmap yurt-tunnel-server-cfg, and add the dnat-ports-pair, as shown below.
apiVersion: v1
kind: ConfigMap
metadata:
name: yurt-tunnel-server-cfg
namespace: kube-system
data:
dnat-ports-pair: "9081=10264"
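Once yurt-tunnel-server.yaml is applied later in this section, the mapping can be read back to confirm it took effect. A quick sketch with standard kubectl, using the ConfigMap name and namespace from the manifest above:

```shell
# print the dnat-ports-pair entry from the tunnel server's ConfigMap
kubectl -n kube-system get configmap yurt-tunnel-server-cfg \
  -o jsonpath='{.data.dnat-ports-pair}'
# the value set in the manifest above is 9081=10264
```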
Then edit line 175 to add the cloud node's external IP as a certificate IP. This step is only needed when the cloud node has no public IP and the NAT rules from the earlier section are used.
...
args:
- --bind-address=$(NODE_IP)
- --insecure-bind-address=$(NODE_IP)
- --proxy-strategy=destHost
- --v=2
- --cert-ips=34.209.219.149
...
Then, we convert the Kubernetes cluster to an OpenYurt cluster:
$ _output/bin/yurtctl convert --cloud-nodes cloud-node --provider kubeadm
Next, we will manually set up the yurt tunnel by deploying yurt-tunnel-server and yurt-tunnel-agent respectively.
Before setting up the yurt tunnel server, we first add a label to the cloud node.
$ kubectl label nodes cloud-node openyurt.io/is-edge-worker=false
Then, we can deploy the yurt tunnel server:
$ kubectl apply -f config/setup/yurt-tunnel-server.yaml
Next, we can set up the yurt tunnel agent. As before, we add a label to the edge node to allow the yurt tunnel agent to run on it:
kubectl label nodes edge-node openyurt.io/is-edge-worker=true
Then, apply the yurt-tunnel-agent.yaml file:
kubectl apply -f config/setup/yurt-tunnel-agent.yaml
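Before going back to the dashboard, it is worth confirming that both tunnel components are up. A quick check, assuming the pod names carry the yurt-tunnel prefix from the manifests:

```shell
# list the tunnel server and agent pods in kube-system
kubectl -n kube-system get pods | grep yurt-tunnel
```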
After the agent and server are running, we should be able to manage ekuiper from the dashboard. Return to the dashboard in the browser, click the service name ekuiper, and switch to the system tab; we should find that the service is healthy, as shown in the following figure:
Great! Now we can manage eKuiper at the edge through the dashboard, as if it were deployed in the cloud. Refer to the manager UI tutorial to create and manage eKuiper streams, rules, and plugins, and perform similar management tasks from the cloud.
Extended reading
If you want to know more features of LF Edge eKuiper or OpenYurt, please read the following references:
- eKuiper Github Code Base
- eKuiper Reference Guide
- OpenYurt tutorial
- eKuiper Management Console Tutorial
Copyright statement: This article is an EMQ original; please indicate the source when reprinting.
Original link: https://www.emqx.com/zh/blog/edge-stream-processing-solution-deploying-and-managing-ekuiper-with-openyurt
Technical support: If you have any questions about this article or EMQ-related products, you can visit the EMQ Q&A community at https://askemq.com to ask questions, and we will reply and provide support in time.