This article was first published on the Nebula Graph Community public account.

Nebula Operator

Hi everyone! Nebula Operator has been open source for some time. There have been related blog posts and introductions, but no hands-on practice article yet. Until now:

It's here! It's here! And it comes with hands-on practice!

Nebula Operator Introduction

For an introduction to Nebula Operator, you can refer to the previous blog post, which explains the automated deployment and cluster-management tool Nebula Operator in detail.

This article focuses on the practical side, so that you can quickly get started with Nebula Operator and experience the fun of a graph database!

Nebula Operator cloud practice

Now let's get into the topic. This article uses Alibaba Cloud for the Nebula Operator practice; the steps are similar on other cloud vendors.

Install tools

This practice requires the following basic tools, all of which are used throughout this article, to be installed on your working machine:

- kubectl
- Helm
- Docker

For how to install these basic tools, please refer to the official documentation of each tool.
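
As a quick sanity check, you can confirm that these tools are available on your working machine (the exact versions will differ from environment to environment):

# Quick check that the required CLIs are installed
$ kubectl version --client
$ helm version
$ docker version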

Create Kubernetes on the cloud

Because the Operator relies on Kubernetes, you need to prepare a Kubernetes environment before installing Nebula Operator.

Log in to the Alibaba Cloud console, go to Container Service for Kubernetes (ACK), and create a cluster. The managed ACK edition is used in this practice. Please choose the creation parameters according to your needs.

Note: To make it easier to access the Kubernetes API Server from the public network, this practice checks the option to expose the API Server with an EIP. You can decide whether to enable it according to your own situation. If you do not enable it, you need to ensure network connectivity between your working machine and the Kubernetes cluster. Choose the other parameters as needed.

After the Kubernetes cluster has started, copy the public-network connection information (the kubeconfig content) from the console into the $HOME/.kube/config file on your working machine.
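
For example (a minimal sketch; the kubeconfig content comes from the cluster's connection information page in the console):

# Save the kubeconfig copied from the console as the local config (back up any existing file first)
$ mkdir -p $HOME/.kube
$ vim $HOME/.kube/config    # paste the public-network kubeconfig here and save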

Then you can use the following command to verify the Kubernetes cluster:

$ kubectl get nodes
NAME                         STATUS   ROLES    AGE   VERSION
cn-beijing.192.168.250.13    Ready    <none>   51m   v1.20.4-aliyun.1
cn-beijing.192.168.250.185   Ready    <none>   51m   v1.20.4-aliyun.1
cn-beijing.192.168.250.89    Ready    <none>   51m   v1.20.4-aliyun.1

Install Nebula Operator dependencies

Before installing Nebula Operator, you also need to install some dependencies.

Install CertManager

# Install CertManager
$ helm install cert-manager cert-manager --repo https://charts.jetstack.io \
    --namespace cert-manager --create-namespace --version v1.3.1 \
    --set installCRDs=true
# Wait a moment, then check that CertManager has started properly
$ kubectl -n cert-manager get pod
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7998c69865-jfw9x              1/1     Running   0          93s
cert-manager-cainjector-7b744d56fb-846w9   1/1     Running   0          93s
cert-manager-webhook-7d6d4c78bc-ssk4w      1/1     Running   0          93s

Install OpenKruise

# Install OpenKruise
$ helm install kruise \
    https://github.com/openkruise/kruise/releases/download/v0.8.1/kruise-chart.tgz
# Wait a moment, then check that OpenKruise has started properly
$ kubectl -n kruise-system get pod
NAME                                         READY   STATUS    RESTARTS   AGE
kruise-controller-manager-6797f89d9b-ppv65   1/1     Running   0          49s
kruise-controller-manager-6797f89d9b-wlkbd   1/1     Running   0          49s
kruise-daemon-7rljq                          1/1     Running   0          49s
kruise-daemon-8kd8d                          1/1     Running   0          49s
kruise-daemon-n6tdw                          1/1     Running   0          49s

Add Nebula Operator Charts

# Add the Nebula Operator charts repo
$ helm repo add nebula-operator https://vesoft-inc.github.io/nebula-operator/charts
# Update the repo
$ helm repo update
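
Optionally, you can confirm that the charts are now visible from the repo (the chart versions listed will depend on what is published at the time):

# Optional: verify that the nebula-operator charts are available
$ helm search repo nebula-operator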

Install Nebula Operator

Because images from gcr.io and k8s.gcr.io cannot be pulled on Alibaba Cloud, you need to specify domestic mirror images instead. The following replacements are made here:

Original image                              Mirror after replacement
gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0   kubesphere/kube-rbac-proxy:v0.8.0
k8s.gcr.io/kube-scheduler:v1.18.8           kubesphere/kube-scheduler:v1.18.8

You can view all the parameters that can be set with the following command:

$ helm show values nebula-operator/nebula-operator

The installation commands in this practice are as follows:

# Install Nebula Operator
$ helm install nebula-operator nebula-operator/nebula-operator \
    --namespace nebula-operator-system --create-namespace --version 0.1.0 \
    --set image.kubeRBACProxy.image=kubesphere/kube-rbac-proxy:v0.8.0 \
    --set image.kubeScheduler.image=kubesphere/kube-scheduler:v1.18.8
# Wait a moment, then check that Nebula Operator has started properly
$ kubectl -n nebula-operator-system get pod
NAME                                                             READY   STATUS    RESTARTS   AGE
nebula-operator-controller-manager-deployment-6968547fff-k62b4   2/2     Running   0          19s
nebula-operator-controller-manager-deployment-6968547fff-lhpdx   2/2     Running   0          19s
nebula-operator-scheduler-deployment-7c5fc7945-hbkv8             2/2     Running   0          19s
nebula-operator-scheduler-deployment-7c5fc7945-sxc7w             2/2     Running   0          19s

If you have customized the cluster domain of Kubernetes, you need to modify the installation command and add the kubernetesClusterDomain setting, as follows:

# Install Nebula Operator; replace <<YourCustomCLusterDomain>> with your cluster domain
$ helm install nebula-operator nebula-operator/nebula-operator \
    --namespace nebula-operator-system --create-namespace --version 0.1.0 \
    --set image.kubeRBACProxy.image=kubesphere/kube-rbac-proxy:v0.8.0 \
    --set image.kubeScheduler.image=kubesphere/kube-scheduler:v1.18.8 \
    --set kubernetesClusterDomain=<<YourCustomCLusterDomain>>

Deploy Nebula Cluster

At this point, Nebula Operator is ready. Next, install a Nebula Cluster to experience the graph database!

First, you need to get the available StorageClass objects, which will be used to configure the storage for the Nebula Cluster:

$ kubectl get sc
NAME                       PROVISIONER                       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
alicloud-disk-available    diskplugin.csi.alibabacloud.com   Delete          Immediate              true                   100m
alicloud-disk-efficiency   diskplugin.csi.alibabacloud.com   Delete          Immediate              true                   100m
alicloud-disk-essd         diskplugin.csi.alibabacloud.com   Delete          Immediate              true                   100m
alicloud-disk-ssd          diskplugin.csi.alibabacloud.com   Delete          Immediate              true                   100m
alicloud-disk-topology     diskplugin.csi.alibabacloud.com   Delete          WaitForFirstConsumer   true                   100m

As shown above, there are 5 StorageClass options. This practice uses alicloud-disk-ssd. Other cloud vendors provide their own corresponding StorageClass; please choose according to your actual situation.
Note: Each cloud vendor may limit the size range of a storage request. For example, Alibaba Cloud limits SSDs to between 20 Gi and 32,768 Gi. You need to pay attention to this when setting the storage sizes for the Nebula Cluster.
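
If you are unsure about the parameters of a StorageClass, you can inspect it first, for example the one used in this practice:

# Inspect the StorageClass used in this practice
$ kubectl describe sc alicloud-disk-ssd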

You can view all the parameters that can be set with the following command:

$ helm show values nebula-operator/nebula-cluster

The installation commands in this practice are as follows:

# Name of the Nebula Cluster to create
$ export NEBULA_CLUSTER_NAME=nebula
# Namespace of the Nebula Cluster to create
$ export NEBULA_CLUSTER_NAMESPACE=nebula
# StorageClass name for the Nebula Cluster; here we use the alicloud-disk-ssd found earlier
$ export STORAGE_CLASS_NAME=alicloud-disk-ssd
# Storage size used by each component of the Nebula Cluster
$ export STORAGE_SIZE_GRAPHD=20Gi
$ export STORAGE_SIZE_METAD=20Gi
$ export STORAGE_SIZE_STORAGED=20Gi
# Create the Nebula Cluster
$ helm install ${NEBULA_CLUSTER_NAME} nebula-operator/nebula-cluster \
    --namespace ${NEBULA_CLUSTER_NAMESPACE} --create-namespace --version 0.1.0 \
    --set nameOverride=${NEBULA_CLUSTER_NAME} \
    --set nebula.storageClassName="${STORAGE_CLASS_NAME}" \
    --set nebula.graphd.storage="${STORAGE_SIZE_GRAPHD}" \
    --set nebula.metad.storage="${STORAGE_SIZE_METAD}" \
    --set nebula.storaged.storage="${STORAGE_SIZE_STORAGED}"
# Wait a moment, then check that the Nebula Cluster has started properly
$ kubectl -n ${NEBULA_CLUSTER_NAMESPACE} get nebulacluster
NAME     GRAPHD-DESIRED   GRAPHD-READY   METAD-DESIRED   METAD-READY   STORAGED-DESIRED   STORAGED-READY   AGE
nebula   2                2              3               3             3                  3                4m10s
$ kubectl -n ${NEBULA_CLUSTER_NAMESPACE} get pod
NAME                READY   STATUS    RESTARTS   AGE
nebula-graphd-0     1/1     Running   0          96s
nebula-graphd-1     1/1     Running   0          96s
nebula-metad-0      1/1     Running   0          97s
nebula-metad-1      1/1     Running   0          97s
nebula-metad-2      1/1     Running   0          97s
nebula-storaged-0   1/1     Running   0          97s
nebula-storaged-1   1/1     Running   0          97s
nebula-storaged-2   1/1     Running   0          97s

Of course, you can also scale the Storaged replicas up to 5 by upgrading the release; execute the command as follows:

# Upgrade the Nebula Cluster
$ helm upgrade ${NEBULA_CLUSTER_NAME} nebula-operator/nebula-cluster \
    --namespace ${NEBULA_CLUSTER_NAMESPACE} --create-namespace --version 0.1.0 \
    --set nameOverride=${NEBULA_CLUSTER_NAME} \
    --set nebula.storageClassName="${STORAGE_CLASS_NAME}" \
    --set nebula.graphd.storage="${STORAGE_SIZE_GRAPHD}" \
    --set nebula.metad.storage="${STORAGE_SIZE_METAD}" \
    --set nebula.storaged.storage="${STORAGE_SIZE_STORAGED}" \
    --set nebula.storaged.replicas=5
# Wait a moment, then check that the Nebula Cluster has started properly
$ kubectl -n ${NEBULA_CLUSTER_NAMESPACE} get nebulacluster
NAME     GRAPHD-DESIRED   GRAPHD-READY   METAD-DESIRED   METAD-READY   STORAGED-DESIRED   STORAGED-READY   AGE
nebula   2                2              3               3             5                  5                6m12s
$ kubectl -n ${NEBULA_CLUSTER_NAMESPACE} get pod
NAME                READY   STATUS    RESTARTS   AGE
nebula-graphd-0     1/1     Running   0          2m30s
nebula-graphd-1     1/1     Running   0          2m30s
nebula-metad-0      1/1     Running   0          2m30s
nebula-metad-1      1/1     Running   0          2m30s
nebula-metad-2      1/1     Running   0          2m30s
nebula-storaged-0   1/1     Running   0          2m30s
nebula-storaged-1   1/1     Running   0          2m30s
nebula-storaged-2   1/1     Running   0          2m30s
nebula-storaged-3   1/1     Running   0          52s
nebula-storaged-4   1/1     Running   0          52s

For detailed installation instructions, please see: Use Helm to install Nebula Operator.

Access Nebula Cluster

Finally, the Nebula Cluster has started successfully. Let's access the cluster!

Kubernetes internal access

First, start a Nebula Graph Console by executing the following command:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nebula-console
spec:
  containers:
    - name: nebula-console
      image: vesoft/nebula-console:v2-nightly
      command:
      - sleep
      - "1000000"
EOF

Then access the cluster through the Nebula Graph Console just created, as follows:

$ kubectl exec -it nebula-console -- \
    nebula-console -u u -p p --addr ${NEBULA_CLUSTER_NAME}-graphd-svc.${NEBULA_CLUSTER_NAMESPACE}.svc --port 9669
2021/06/23 06:21:22 [INFO] connection pool is initialized successfully
Welcome to Nebula Graph!
(u@nebula) [(none)]> show hosts
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| Host                                                                  | Port | Status   | Leader count | Leader distribution  | Partition distribution |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-0.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-1.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-2.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-3.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-4.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "Total"                                                               |      |          | 0            |                      |                        |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
Got 4 rows (time spent 7669/9367 us)
Wed, 23 Jun 2021 06:21:26 UTC

Kubernetes external access

To access services inside Kubernetes from outside the cluster, you can use hostPort, hostNetwork, Ingress, LoadBalancer, etc. Here we take advantage of the cloud vendor's convenience and use a LoadBalancer to access the cluster directly.

Note: This method will expose your Nebula cluster; please do not use it in a production environment.
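
If you only need temporary access from your working machine and do not want to expose the cluster, a kubectl port-forward to the Graphd Service is a lighter-weight alternative (a minimal sketch; this practice continues with the LoadBalancer approach below):

# Forward local port 9669 to the Graphd Service (runs until interrupted)
$ kubectl port-forward -n ${NEBULA_CLUSTER_NAMESPACE} \
    svc/${NEBULA_CLUSTER_NAME}-graphd-svc 9669:9669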

First, change the type of the Graphd Service to LoadBalancer, and then check the EXTERNAL-IP.

# Change the Service type to LoadBalancer
$ kubectl patch -n ${NEBULA_CLUSTER_NAMESPACE} svc ${NEBULA_CLUSTER_NAME}-graphd-svc \
    -p '{"spec": {"type": "LoadBalancer"}}'
# Get the EXTERNAL-IP; if it is pending, wait a moment and retry
$ kubectl -n ${NEBULA_CLUSTER_NAMESPACE} get svc nebula-graphd-svc
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                          AGE
nebula-graphd-svc   LoadBalancer   172.16.85.222   x.x.x.x         9669:31460/TCP,19669:32579/TCP,19670:31481/TCP   27m

Now you can access the cluster through the EXTERNAL-IP, for example x.x.x.x here.

$ export EXTERNAL_IP=x.x.x.x
$ docker run -it --rm vesoft/nebula-console:v2-nightly -u u -p p --addr ${EXTERNAL_IP} --port 9669
2021/06/23 06:42:17 [INFO] connection pool is initialized successfully
Welcome to Nebula Graph!
(u@nebula) [(none)]> show hosts
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| Host                                                                  | Port | Status   | Leader count | Leader distribution  | Partition distribution |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-0.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-1.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-2.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-3.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-4.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "Total"                                                               |      |          | 0            |                      |                        |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
Got 4 rows (time spent 3747/60433 us)
Wed, 23 Jun 2021 06:42:21 UTC
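
When you are done experimenting, you can switch the Service type back to ClusterIP so that Graphd is no longer publicly exposed (an optional cleanup sketch):

# Optional cleanup: revert the Service type so Graphd is no longer exposed
# (on some Kubernetes versions you may also need to clear the nodePort fields)
$ kubectl patch -n ${NEBULA_CLUSTER_NAMESPACE} svc ${NEBULA_CLUSTER_NAME}-graphd-svc \
    -p '{"spec": {"type": "ClusterIP"}}'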

Enjoy time

You're done!

Let's gallop through Nebula Graph to your heart's content!
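
As a first taste, you can create a test space from the nebula-console Pod started earlier (a minimal sketch: the space name and parameters are purely illustrative, and it assumes the console image supports the -e flag):

# Create an illustrative space and list all spaces via the console Pod
$ kubectl exec -it nebula-console -- \
    nebula-console -u u -p p \
    --addr ${NEBULA_CLUSTER_NAME}-graphd-svc.${NEBULA_CLUSTER_NAMESPACE}.svc --port 9669 \
    -e 'CREATE SPACE IF NOT EXISTS test(partition_num=15, replica_factor=3, vid_type=FIXED_STRING(30)); SHOW SPACES;'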

