kubesphere & MinIO & Velero

Bhanv


Information

| name       | IP           | EIP           | role                                |
| ---------- | ------------ | ------------- | ----------------------------------- |
| minio      | 10.51.104.10 | 192.168.10.91 | docker, docker-compose, MinIO       |
| kubesphere | 10.51.104.13 | 192.168.10.95 | etcd, master, worker, velero-server |
| velero     | 10.51.104.13 | 192.168.10.95 | velero client                       |

kubesphere: v3.1.0

k8s: v1.20.4

docker: v20.10.7

MinIO: RELEASE.2021-06-09T18-51-39Z

velero: v1.6.0


Run Distributed MinIO on Docker Compose

Reference: https://docs.min.io/docs/depl...

Deploy MinIO

  • docker-compose.yaml
version: '3.7'

# starts 4 docker containers running minio server instances.
# using nginx reverse proxy, load balancing, you can access
# it through port 9000.
services:
  minio1:
    image: minio/minio:RELEASE.2021-06-09T18-51-39Z
    hostname: minio1
    volumes:
      - data1-1:/data1
      - data1-2:/data2
    expose:
      - "9000"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
    command: server http://minio{1...4}/data{1...2}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio2:
    image: minio/minio:RELEASE.2021-06-09T18-51-39Z
    hostname: minio2
    volumes:
      - data2-1:/data1
      - data2-2:/data2
    expose:
      - "9000"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
    command: server http://minio{1...4}/data{1...2}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio3:
    image: minio/minio:RELEASE.2021-06-09T18-51-39Z
    hostname: minio3
    volumes:
      - data3-1:/data1
      - data3-2:/data2
    expose:
      - "9000"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
    command: server http://minio{1...4}/data{1...2}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio4:
    image: minio/minio:RELEASE.2021-06-09T18-51-39Z
    hostname: minio4
    volumes:
      - data4-1:/data1
      - data4-2:/data2
    expose:
      - "9000"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
    command: server http://minio{1...4}/data{1...2}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  nginx:
    image: nginx:1.19.2-alpine
    hostname: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "9000:9000"
    depends_on:
      - minio1
      - minio2
      - minio3
      - minio4

## By default this config uses default local driver,
## For custom volumes replace with volume driver configuration.
volumes:
  data1-1:
  data1-2:
  data2-1:
  data2-2:
  data3-1:
  data3-2:
  data4-1:
  data4-2:

  • nginx.conf
user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  4096;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    keepalive_timeout  65;

    # include /etc/nginx/conf.d/*.conf;

    upstream minio {
        server minio1:9000;
        server minio2:9000;
        server minio3:9000;
        server minio4:9000;
    }

    server {
        listen       9000;
        listen  [::]:9000;
        server_name  localhost;

        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_connect_timeout 300;
            # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;

            proxy_pass http://minio;
        }
    }
}
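With both files in place, the cluster can be brought up and checked against MinIO's liveness endpoint (this assumes docker-compose.yaml and nginx.conf sit in the current directory):

```shell
# Start the 4-node MinIO cluster plus the nginx load balancer
docker-compose up -d

# Confirm all five containers are running
docker-compose ps

# nginx publishes port 9000; the health endpoint should return HTTP 200
curl -i http://localhost:9000/minio/health/live
```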

Verify the Installation

URL: http://192.168.10.91:9000  user: minio  pass: minio123


Create Buckets

Create a bucket named velero to hold the backups.

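The bucket can also be created from the command line with the MinIO client mc instead of the web console (the alias name myminio below is arbitrary):

```shell
# Point mc at the nginx-fronted cluster using the root credentials
mc alias set myminio http://192.168.10.91:9000 minio minio123

# Create the velero bucket and list buckets to confirm
mc mb myminio/velero
mc ls myminio
```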

Velero

Velero documentation: https://velero.io/docs/v1.6/

Download & Install the Velero Client

Resources are limited, so for now the master node doubles as the Velero client.

$ wget https://github.com/vmware-tanzu/velero/releases/download/v1.6.0/velero-v1.6.0-linux-amd64.tar.gz
$ tar zxvf velero-v1.6.0-linux-amd64.tar.gz
$ mv velero-v1.6.0-linux-amd64/velero /usr/local/bin/

# velero -h
Velero is a tool for managing disaster recovery, specifically for Kubernetes
cluster resources. It provides a simple, configurable, and operationally robust
way to back up your application state and associated data.

If you're familiar with kubectl, Velero supports a similar model, allowing you to
execute commands such as 'velero get backup' and 'velero create schedule'. The same
operations can also be performed as 'velero backup get' and 'velero schedule create'.

Usage:
  velero [command]

Install Velero

There are two supported methods for installing the Velero server components: the `velero install` CLI command and the Helm chart. The CLI method is used here; the Helm chart attempt is documented at the end.

Setup server

1. Create a Velero-specific credentials file (credentials-velero) in your local directory:

[default]
aws_access_key_id = minio
aws_secret_access_key = minio123

2. Install:

velero install \
    --image velero/velero:v1.6.0 \
    --provider aws \
    --bucket velero \
    --namespace velero \
    --secret-file ./credentials-velero \
    --velero-pod-cpu-request 200m \
    --velero-pod-mem-request 200Mi \
    --velero-pod-cpu-limit 1000m \
    --velero-pod-mem-limit 1000Mi \
    --use-volume-snapshots=false \
    --use-restic \
    --restic-pod-cpu-request 200m \
    --restic-pod-mem-request 200Mi \
    --restic-pod-cpu-limit 1000m \
    --restic-pod-mem-limit 1000Mi \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://10.51.104.10:9000/ \
    --plugins velero/velero-plugin-for-aws:v1.2.0

3. Verify the deployment:

root@node1:~/velero# kubectl get pod -n velero
NAME                      READY   STATUS    RESTARTS   AGE
restic-7jljq              1/1     Running   0          12s
velero-68786bdf47-wwmzc   1/1     Running   0          12s

root@node1:~/velero# kubectl get deployments -l component=velero --namespace=velero
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
velero   1/1     1            1           55s
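Before creating backups, it is also worth confirming that Velero can reach the MinIO-backed storage location and that it reports Available:

```shell
# The backup storage location created by `velero install`
velero backup-location get

# The same object viewed through kubectl
kubectl get backupstoragelocations -n velero
```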

Backup & Restore

Backup
root@node1:~/velero# velero backup create demo-backup --include-namespaces demo-1
Backup request "demo-backup" submitted successfully.
Run `velero backup describe demo-backup` or `velero backup logs demo-backup` for more details.
root@node1:~/velero# velero backup describe demo-backup
Name:         demo-backup
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.20.4
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=20

Phase:  InProgress

Errors:    0
Warnings:  0

Namespaces:
  Included:  demo-1
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:  auto

TTL:  720h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2021-06-12 11:26:51 +0800 CST
Completed:  <n/a>

Expiration:  2021-07-12 11:26:51 +0800 CST

Estimated total items to be backed up:  49
Items backed up so far:                 0

Velero-Native Snapshots: <none included>
root@node1:~/velero#
  • Check the velero bucket in the MinIO console:


Delete Test Namespaces
root@node1:~/velero# kubectl delete ns demo-1
namespace "demo-1" deleted
Restore NS Resource
root@node1:~/velero# velero restore create --from-backup demo-backup --wait
Restore request "demo-backup-20210612112941" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
......
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe demo-backup-20210612112941` and `velero restore logs demo-backup-20210612112941`.
root@node1:~/velero# kubectl get ns
NAME                              STATUS   AGE
default                           Active   12h
demo-1                            Active   10s
Test the restored application:
root@node1:~/velero# curl http://192.168.10.95:30080/
<!DOCTYPE html><html><head><meta charset=utf-8><meta name=viewport content="width=device-width,initial-scale=1"><title>Fenix's BookStore</title><link rel="shortcut icon" href=/favicon.ico><link href=/static/css/app.13440f960e43a3574b009b7352447f18.css rel=stylesheet></head><body><div id=app></div><script type=text/javascript src=/static/js/manifest.0437a7f02d3154ee1abb.js></script><script type=text/javascript src=/static/js/vendor.c2f13a2146485051ae24.js></script><script type=text/javascript src=/static/js/app.ea66dc0be78c3ed2ae63.js></script></body></html>
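With backup and restore verified, periodic backups can be driven by a cron expression; a sketch (the schedule name and retention below are illustrative, not from the original setup):

```shell
# Back up the demo-1 namespace daily at 01:00, keeping backups for 7 days
velero schedule create demo-daily \
    --schedule="0 1 * * *" \
    --include-namespaces demo-1 \
    --ttl 168h0m0s

# List configured schedules
velero schedule get
```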


Helm Chart Installation (not yet working)

This section attempts the Helm chart installation (still unresolved).

References: https://vmware-tanzu.github.i... and https://github.com/vmware-tan...
root@node1:~/velero# helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
"vmware-tanzu" has been added to your repositories
root@node1:~/velero# helm repo ls
NAME            URL
cilium          https://helm.cilium.io/
vmware-tanzu    https://vmware-tanzu.github.io/helm-charts
  • Create the namespace
kubectl create ns velero
  • Install template
helm install velero vmware-tanzu/velero \
--namespace velero \
--create-namespace \
--set-file credentials.secretContents.cloud=<FULL PATH TO FILE> \
--set configuration.provider=<PROVIDER NAME> \
--set configuration.backupStorageLocation.name=<BACKUP STORAGE LOCATION NAME> \
--set configuration.backupStorageLocation.bucket=<BUCKET NAME> \
--set configuration.backupStorageLocation.config.region=<REGION> \
--set configuration.volumeSnapshotLocation.name=<VOLUME SNAPSHOT LOCATION NAME> \
--set configuration.volumeSnapshotLocation.config.region=<REGION> \
--set initContainers[0].name=velero-plugin-for-<PROVIDER NAME> \
--set initContainers[0].image=velero/velero-plugin-for-<PROVIDER NAME>:<PROVIDER PLUGIN TAG> \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins
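For reference, a sketch of the same template filled in for this MinIO backend, assuming the chart passes configuration.backupStorageLocation.config.* keys through to the BackupStorageLocation (untested here, since the Helm route was not completed):

```shell
helm install velero vmware-tanzu/velero \
--namespace velero \
--create-namespace \
--set-file credentials.secretContents.cloud=./credentials-velero \
--set configuration.provider=aws \
--set configuration.backupStorageLocation.name=default \
--set configuration.backupStorageLocation.bucket=velero \
--set configuration.backupStorageLocation.config.region=minio \
--set configuration.backupStorageLocation.config.s3ForcePathStyle=true \
--set configuration.backupStorageLocation.config.s3Url=http://10.51.104.10:9000/ \
--set initContainers[0].name=velero-plugin-for-aws \
--set initContainers[0].image=velero/velero-plugin-for-aws:v1.2.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins
```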