While developing laf (https://github.com/lafjs/laf) I came to depend on components such as MongoDB and MinIO. This article walks through best practices for running these components.

A quick plug for laf first: it is a function-compute platform that makes writing code as easy as writing a blog post. Write your code, click publish, and go home. Docker? Kubernetes? CI/CD? I just write business logic, why should I care about any of that~ Laf is a framework forced into existence by real business needs; it turns a frontend developer into a full-stack one in seconds.

Life is short, you need laf :)


Laf depends on MongoDB, MinIO, and an ingress controller, and to make the whole thing properly cloud native we bring in OpenEBS to manage storage.

| The perfect partner


sealos never makes its users suffer. To meet laf's requirements, all sealos needs is:

sealos run \
   -e openebs-basedir=/data -e mongo-replicaCount=3 \
   fanux/kubernetes:v1.23.5 \
   fanux/openebs:latest \
   fanux/mongo:latest \
   laf-js/laf:latest \
   -m 192.168.0.2 -n 192.168.0.3

And that's it. How could you not love something like this? Two environment variables are all it takes to set the storage directory and the MongoDB replica count. We know exactly what kind of simplicity users want, and the best part is that this simplicity costs no functionality. That is simplicity done right, and it is what sealos is most proud of.
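When the command returns, a quick sanity check with ordinary kubectl commands shows that everything landed. This is a minimal sketch; it assumes the sealos images install OpenEBS into the openebs namespace and the MongoDB replica set into the default namespace, which may differ in your build:

kubectl get nodes -owide        # both machines joined and Ready
kubectl get pods -n openebs     # OpenEBS control-plane pods Running
kubectl get sc                  # a host-path StorageClass backed by /data
kubectl get pods | grep mongo   # three MongoDB replicas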

| The long way round (for when your workload is light)

Now let's see how much pain awaits you without sealos. The steps below are perfect for when you don't have enough to do at work. Of course, I'd still recommend letting sealos automate the whole thing and then using the documentation below to show your boss how much work you did. The boss is delighted, says what a capable fellow, and you spend the day comfortably playing mobile games...

| Installing OpenEBS

kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml

OpenEBS offers several storage engines: block storage (cStor), Local PV (host-path directories), temporary storage, and so on. Block storage (cStor) is recommended for production; host-path Local PV is fine when the requirements are less strict; temporary storage is only for testing.
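Once the operator manifest is applied, you can check that its pods come up and look at the StorageClasses it creates out of the box (openebs-hostpath and openebs-device are the usual defaults in recent releases; the names may vary with the version you install):

kubectl get pods -n openebs
kubectl get sc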

Create a StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/local-hostpath # Host path storage dir
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

BasePath is where you decide which host directory the data is stored in.
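Save the manifest and apply it (the filename below is just an example):

kubectl apply -f local-hostpath-sc.yaml
kubectl get sc local-hostpath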

Using the storage

Create a PVC. Note that this example uses openebs-hostpath, the default StorageClass installed by the operator; set storageClassName to local-hostpath instead if you want the custom BasePath configured above:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
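
Apply the claim (again, the filename is only an example) and check its status:

kubectl apply -f local-hostpath-pvc.yaml
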
kubectl get pvc local-hostpath-pvc
NAME                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS       AGE
local-hostpath-pvc   Pending                                      openebs-hostpath   3m7s

The PVC shows Pending because volumeBindingMode is WaitForFirstConsumer: the volume is only provisioned once a pod actually uses the claim. Use the PVC in a container:

apiVersion: v1
kind: Pod
metadata:
  name: hello-local-hostpath-pod
spec:
  volumes:
  - name: local-storage
    persistentVolumeClaim:
      claimName: local-hostpath-pvc
  containers:
  - name: hello-container
    image: busybox
    command:
       - sh
       - -c
       - 'while true; do echo "`date` [`hostname`] Hello from OpenEBS Local PV." >> /mnt/store/greet.txt; sleep $(($RANDOM % 5 + 300)); done'
    volumeMounts:
    - mountPath: /mnt/store
      name: local-storage

kubectl apply -f local-hostpath-pod.yaml
kubectl exec hello-local-hostpath-pod -- cat /mnt/store/greet.txt
kubectl get pvc local-hostpath-pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
local-hostpath-pvc   Bound    pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425   5G         RWO            openebs-hostpath   28m

Inspect the PV the claim is now bound to:
kubectl get pv pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425 -o yaml

Cleanup

kubectl delete pod hello-local-hostpath-pod
kubectl delete pvc local-hostpath-pvc
kubectl delete sc local-hostpath

kubectl get pv

Tracking down the data

Here is how to find out where the data actually ends up on disk, so you can rest easy.

Get the PVC name from the pod:

[root@iZ2ze0qiwmjj4p5rncuhhrZ openebs]# kubectl get pod hello-local-hostpath-pod-4 -oyaml|grep claimName
      claimName: local-hostpath-pvc-4

Get the node name and PV name from the PVC:

[root@iZ2ze0qiwmjj4p5rncuhhrZ openebs]# kubectl get pvc local-hostpath-pvc-4 -oyaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
...
    volume.kubernetes.io/selected-node: iz2ze0qiwmjj4p5rncuhhoz
...
  name: local-hostpath-pvc-4
...
  storageClassName: local-hostpath
  volumeName: pvc-056c7781-c9b3-46f6-aa6e-a3a2d72456d6
...

We now have the node name: iz2ze0qiwmjj4p5rncuhhoz

and the StorageClass: local-hostpath

Inspect the StorageClass details:

[root@iZ2ze0qiwmjj4p5rncuhhrZ openebs]# kubectl get sc local-hostpath -oyaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /data
    openebs.io/cas-type: local
...
provisioner: openebs.io/local
reclaimPolicy: Delete

So the data directory is /data.

The data therefore lives at iz2ze0qiwmjj4p5rncuhhoz:/data/pvc-056c7781-c9b3-46f6-aa6e-a3a2d72456d6. SSH onto the node and take a look:

[root@iZ2ze0qiwmjj4p5rncuhhrZ openebs]# kubectl get node -owide
NAME                      STATUS   ROLES                  AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
iz2ze0qiwmjj4p5rncuhhoz   Ready    <none>                 29h   v1.22.0   172.17.83.145   <none>        CentOS Linux 7 (Core)   3.10.0-693.2.2.el7.x86_64   containerd://1.4.3

ssh root@172.17.83.145
[root@iZ2ze0qiwmjj4p5rncuhhoZ pvc-056c7781-c9b3-46f6-aa6e-a3a2d72456d6]# cd /data/pvc-056c7781-c9b3-46f6-aa6e-a3a2d72456d6 && ls
greet.txt # there it is, you can rest easy now
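As a shortcut, the same information can usually be read straight off the PersistentVolume, since the openebs.io/local host-path provisioner records the path on the PV and pins it to a node via node affinity. A sketch, assuming the PV is a local-type volume as created by that provisioner:

kubectl get pv pvc-056c7781-c9b3-46f6-aa6e-a3a2d72456d6 \
  -o jsonpath='{.spec.local.path}{"\n"}{.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0]}{"\n"}'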

| MongoDB on OpenEBS storage

git clone https://github.com/bitnami/charts

Configure the values: set the replica count here, and make sure the number of NodePorts matches the replica count:

architecture=replicaset
replicaCount=3
externalAccess.enabled=true
externalAccess.service.type=NodePort
externalAccess.service.nodePorts[0]='31001'
externalAccess.service.nodePorts[1]='31002'
externalAccess.service.nodePorts[2]='31003'
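If you prefer not to edit values.yaml at all, the same settings (including the StorageClass set in the next step) can be passed as flags when installing from the chart directory. This is only a sketch; the key names follow the bitnami/mongodb chart's values layout, so double-check them against your chart version:

helm install mongo-test . \
  --set architecture=replicaset \
  --set replicaCount=3 \
  --set externalAccess.enabled=true \
  --set externalAccess.service.type=NodePort \
  --set 'externalAccess.service.nodePorts={31001,31002,31003}' \
  --set global.storageClass=local-hostpath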

Change the StorageClass:

storageClass: "local-hostpath"

Then install the chart:

[root@iZ2ze0qiwmjj4p5rncuhhrZ mongodb]# cd bitnami/mongodb && helm install mongo-test .
NAME: mongo-test
LAST DEPLOYED: Tue Mar 29 16:18:08 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mongodb
CHART VERSION: 11.1.3
APP VERSION: 4.4.13
** Please be patient while the chart is being deployed **

MongoDB&reg; can be accessed on the following DNS name(s) and ports from within your cluster:

    mongo-test-mongodb-0.mongo-test-mongodb-headless.default.svc.cluster.local:27017
    mongo-test-mongodb-1.mongo-test-mongodb-headless.default.svc.cluster.local:27017
    mongo-test-mongodb-2.mongo-test-mongodb-headless.default.svc.cluster.local:27017
    mongo-test-mongodb-3.mongo-test-mongodb-headless.default.svc.cluster.local:27017
To get the root password run:

    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongo-test-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)

To connect to your database, create a MongoDB&reg; client container:

    kubectl run --namespace default mongo-test-mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:4.4.13-debian-10-r25 --command -- bash

Then, run the following command:
    mongo admin --host "mongo-test-mongodb-0.mongo-test-mongodb-headless.default.svc.cluster.local:27017,mongo-test-mongodb-1.mongo-test-mongodb-headless.default.svc.cluster.local:27017,mongo-test-mongodb-2.mongo-test-mongodb-headless.default.svc.cluster.local:27017,mongo-test-mongodb-3.mongo-test-mongodb-headless.default.svc.cluster.local:27017" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD

To connect to your database nodes from outside, you need to add both primary and secondary nodes hostnames/IPs to your Mongo client. To obtain them, follow the instructions below:

    MongoDB&reg; nodes domain: you can reach MongoDB&reg; nodes on any of the K8s nodes external IPs.

        kubectl get nodes -o wide

    MongoDB&reg; nodes port: You will have a different node port for each MongoDB&reg; node. You can get the list of configured node ports using the command below:

        echo "$(kubectl get svc --namespace default -l "app.kubernetes.io/name=mongodb,app.kubernetes.io/instance=mongo-test,app.kubernetes.io/component=mongodb,pod" -o jsonpath='{.items[*].spec.ports[0].nodePort}' | tr ' ' '\n')"

Check the pods:

[root@iZ2ze0qiwmjj4p5rncuhhrZ mongodb]# kubectl get pod
NAME                           READY   STATUS      RESTARTS      AGE
mongo-test-mongodb-0           1/1     Running     0             49m
mongo-test-mongodb-1           1/1     Running     0             49m
mongo-test-mongodb-2           0/1     Running     1 (90s ago)   48m
mongo-test-mongodb-arbiter-0   1/1     Running     0  

Check that the PVCs are all bound:

[root@iZ2ze0qiwmjj4p5rncuhhrZ mongodb]# kubectl get pvc
NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
datadir-mongo-test-mongodb-0   Bound    pvc-5bddcedc-eb0c-41ed-a230-f7c953bc537f   8Gi        RWO            local-hostpath     52m
datadir-mongo-test-mongodb-1   Bound    pvc-c187a64a-c3e6-4e4b-9669-c01e30af1dc7   8Gi        RWO            local-hostpath     51m
datadir-mongo-test-mongodb-2   Bound    pvc-b845673f-2297-40ed-b013-

Access MongoDB from a client pod:

kubectl run --namespace default mongo-test-mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:4.4.13-debian-10-r25 --command -- bash

Run the mongo CLI:
 mongo admin --host "mongo-test-mongodb-0.mongo-test-mongodb-headless.default.svc.cluster.local:27017,mongo-test-mongodb-1.mongo-test-mongodb-headless.default.svc.cluster.local:27017,mongo-test-mongodb-2.mongo-test-mongodb-headless.default.svc.cluster.local:27017,mongo-test-mongodb-3.mongo-test-mongodb-headless.default.svc.cluster.local:27017" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD

Implicit session: session { "id" : UUID("25ae50c1-932f-416d-b164-871c9144118d") }
MongoDB server version: 4.4.13
---
The server generated these startup warnings when booting: 
        2022-03-29T08:18:28.221+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
        2022-03-29T08:18:28.460+00:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'
        2022-03-29T08:18:28.460+00:00: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. We suggest setting it to 'never'
---
---
        Enable MongoDB's free cloud-based monitoring service, which will then receive and display
        metrics about your deployment (disk utilization, CPU, operation statistics, etc).

        The monitoring data will be available on a MongoDB website with a unique URL accessible to you
        and anyone you share the URL with. MongoDB may use this information to make product
        improvements and to suggest MongoDB products and deployment options to you.

        To enable free monitoring, run the following command: db.enableFreeMonitoring()
        To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
rs0:PRIMARY>

rs0:PRIMARY> help
  db.help()                    help on db methods
  db.mycoll.help()             help on collection methods
  sh.help()                    sharding helpers
  rs.help()                    replica set helpers
  help admin                   administrative help
  help connect                 connecting to a db help
  help keys                    key shortcuts
  help misc                    misc things to know
  help mr                      mapreduce

  show dbs                     show database names
  show collections             show collections in current database
  show users                   show users in current database
  show profile                 show most recent system.profile entries with time >= 1ms
  show logs                    show the accessible logger names
  show log [name]              prints out the last segment of log in memory, 'global' is default
  use <db_name>                set current database
  db.mycoll.find()             list objects in collection mycoll
  db.mycoll.find( { a : 1 } )  list objects in mycoll where a == 1
  it                           result of the last line evaluated; use to further iterate
  DBQuery.shellBatchSize = x   set default number of items to display on shell
  exit                         quit the mongo shell
rs0:PRIMARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
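To convince yourself that writes land on the OpenEBS-backed volumes and replicate across members, you might insert a document and check the replica set status. These are plain mongo shell commands; the database and collection names are made up for the test:

rs0:PRIMARY> use laf_test
rs0:PRIMARY> db.greetings.insertOne({ msg: "hello from openebs" })
rs0:PRIMARY> db.greetings.find()
rs0:PRIMARY> rs.status().members.forEach(m => print(m.name, m.stateStr))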

| MinIO on OpenEBS

Install the MinIO kubectl plugin (notice how every project uses a different tool: helm, operators, plugins...):

wget https://github.com/minio/operator/releases/download/v4.4.4/kubectl-minio_4.4.4_linux_amd64 -O kubectl-minio
chmod +x kubectl-minio
mv kubectl-minio /usr/local/bin/

kubectl minio version

Install the MinIO operator:

kubectl minio init
kubectl get all --namespace minio-operator

Create the MinIO cluster
You can do it from the UI, which is simple enough that I won't walk you through it:

kubectl minio proxy

Or create it with the helm chart:

# if cloning is slow you can go through a proxy: git clone https://ghproxy.com/https://github.com/minio/operator/
git clone https://github.com/minio/operator
cd operator/helm/tenant

Set the storage class in the chart values:

values.pools.servers[].storageClassName = 'local-hostpath'
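The exact key path differs slightly between tenant chart versions, so the safest approach is to locate it in the values file before editing. A sketch:

grep -n -A3 'storageClassName' values.yaml   # find the pool's storageClassName
# then edit values.yaml and set it to local-hostpath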

Install the cluster:

helm install my-minio .
[root@iZ2ze0qiwmjj4p5rncuhhrZ tenant]# kubectl get all -n test
NAME                  READY   STATUS    RESTARTS   AGE
pod/minio1-pool-0-0   1/1     Running   0          2m23s
pod/minio1-pool-0-1   1/1     Running   0          2m23s
pod/minio1-pool-0-2   1/1     Running   0          2m23s
pod/minio1-pool-0-3   1/1     Running   0          2m23s

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/minio            ClusterIP   10.99.196.151   <none>        443/TCP    2m23s
service/minio1-console   ClusterIP   10.105.201.40   <none>        9443/TCP   2m23s
service/minio1-hl        ClusterIP   None            <none>        9000/TCP   2m23s

NAME                             READY   AGE
statefulset.apps/minio1-pool-0   4/4     2m23s
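The tenant's console is exposed by the minio1-console service shown above. To poke at it from your workstation you can port-forward it, and you can also confirm the tenant PVCs came from the expected StorageClass (namespace and service names are taken from the output above):

kubectl -n test port-forward svc/minio1-console 9443:9443
kubectl -n test get pvc -o custom-columns=NAME:.metadata.name,SC:.spec.storageClassName,STATUS:.status.phase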

Common problems

DNS resolution fails

[root@iZ2ze0qiwmjj4p5rncuhhnZ minio]# kubectl logs sealos-log-search-api-cb966fc87-5kmw9
2022/03/30 10:41:52 Error connecting to db: dial tcp: lookup sealos-log-hl-svc.default.svc.cluster.local on 10.96.0.10:53: no such host

Most likely the host's /etc/resolv.conf contains junk entries. Clean it up so only sane nameservers remain, then restart CoreDNS:

[root@iZ2ze0qiwmjj4p5rncuhhnZ ~]# cat /etc/resolv.conf 
nameserver 100.100.2.136
nameserver 100.100.2.138
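Restarting CoreDNS is just a rollout restart of its deployment in kube-system:

kubectl -n kube-system rollout restart deployment coredns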

The MinIO log pods won't start

Warning  FailedScheduling  4m29s  default-scheduler  0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.

This is because the log component falls back to the cluster's default StorageClass and none is set; mark one class as the default:


kubectl patch storageclass local-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

[root@iZ2ze0qiwmjj4p5rncuhhnZ ~]# kubectl get sc
NAME                       PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-hostpath (default)   openebs.io/local   Delete          WaitForFirstConsumer   false                  2d1h
minio-local-storage        openebs.io/local   Delete          WaitForFirstConsumer   false                  3h20m

| Summary

Each component by itself is already reasonably well packaged, and none of the steps are individually hard. But stitching them together is still a procedural exercise: the cloud operating system is not treated as a whole, so it is nowhere near the out-of-the-box experience Docker gives you on a single machine, and each component differs in tooling and dependencies. A higher-level abstraction is needed to solve this.

