About the Author
Wan Shaoyuan is a CNCF-certified Kubernetes CKA & CKS engineer and cloud native solution architect. He has researched Ceph, OpenStack, Kubernetes, Prometheus, and other cloud native technologies in depth, and has participated in the design and implementation of IaaS and PaaS platforms and guided application cloud native transformation in industries such as finance, insurance, and manufacturing.

Foreword

NeuVector is the industry's first end-to-end open source container security platform and the only solution that provides enterprise-grade zero-trust security for containerized workloads. This article explains in detail how to deploy NeuVector, covering the following five aspects:

  1. NeuVector Overview
  2. NeuVector Installation
  3. High Availability Architecture Design
  4. Multi-cloud Security Management
  5. Other Configuration

1. NeuVector Overview

NeuVector is committed to securing enterprise-grade container platforms. It provides real-time, in-depth container network visualization, east-west container network monitoring, active isolation and protection, container host security, and security inside containers. It integrates seamlessly with container management platforms to deliver application-level container security automatically, and it suits container production environments of all kinds: various clouds, cross-cloud, or on-premises deployments.

In 2021, NeuVector was acquired by SUSE, and in January 2022 it was fully open sourced, becoming the industry's first end-to-end open source container security platform and the only solution that provides enterprise-grade zero-trust security for containerized workloads.

Project address: https://github.com/neuvector/neuvector

This article is based mainly on NeuVector's first open source release, 5.0.0-preview.1.

1.1. Architecture Analysis

NeuVector consists of the Controller, Enforcer, Manager, Scanner, and Updater modules.

  • Controller: the control plane of NeuVector and its API entry point, responsible for configuration delivery. High availability mainly concerns the Controller, and it is usually recommended to deploy 3 Controller replicas to form a cluster.
  • Enforcer: deploys and enforces security policies; it runs as a DaemonSet, one Pod per node.
  • Manager: provides the web UI (HTTPS only) and CLI console for managing NeuVector.
  • Scanner: scans nodes, containers, Kubernetes, and images for CVE vulnerabilities.
  • Updater: a CronJob that periodically updates the CVE vulnerability database.
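
Once installed (section 2), these modules appear as ordinary Kubernetes workloads. A quick way to see them all at once (a sketch; resource names follow the official manifests):

kubectl get deployment,daemonset,cronjob -n neuvector
# Expect controller, manager, and scanner Deployments,
# an enforcer DaemonSet, and an updater CronJob.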

1.2. Overview of main functions

  • Security vulnerability scanning
  • Container network traffic visualization
  • Network security policy definition
  • L7 firewall
  • CI/CD security scanning
  • Compliance analysis

This article focuses on installation and deployment, and specific functions will be introduced in depth in subsequent articles.

2. NeuVector Installation

Installation environment

OS: Ubuntu 18.04
Kubernetes: 1.20.14
Rancher: 2.5.12
Docker: 19.03.15
NeuVector: 5.0.0-preview.1

2.1. Rapid Deployment

Create the namespace

kubectl create namespace neuvector

Deploy the CRDs (Kubernetes 1.19+)

kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/crd-k8s-1.19.yaml

Deploy the CRDs (Kubernetes 1.18 or lower)

kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/crd-k8s-1.16.yaml
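
After applying, a quick check that the CRDs were registered:

kubectl get crd | grep neuvector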

Configure RBAC

kubectl create clusterrole neuvector-binding-app --verb=get,list,watch,update --resource=nodes,pods,services,namespaces
kubectl create clusterrole neuvector-binding-rbac --verb=get,list,watch --resource=rolebindings.rbac.authorization.k8s.io,roles.rbac.authorization.k8s.io,clusterrolebindings.rbac.authorization.k8s.io,clusterroles.rbac.authorization.k8s.io
kubectl create clusterrolebinding neuvector-binding-app --clusterrole=neuvector-binding-app --serviceaccount=neuvector:default
kubectl create clusterrolebinding neuvector-binding-rbac --clusterrole=neuvector-binding-rbac --serviceaccount=neuvector:default
kubectl create clusterrole neuvector-binding-admission --verb=get,list,watch,create,update,delete --resource=validatingwebhookconfigurations,mutatingwebhookconfigurations
kubectl create clusterrolebinding neuvector-binding-admission --clusterrole=neuvector-binding-admission --serviceaccount=neuvector:default
kubectl create clusterrole neuvector-binding-customresourcedefinition --verb=watch,create,get --resource=customresourcedefinitions
kubectl create clusterrolebinding  neuvector-binding-customresourcedefinition --clusterrole=neuvector-binding-customresourcedefinition --serviceaccount=neuvector:default
kubectl create clusterrole neuvector-binding-nvsecurityrules --verb=list,delete --resource=nvsecurityrules,nvclustersecurityrules
kubectl create clusterrolebinding neuvector-binding-nvsecurityrules --clusterrole=neuvector-binding-nvsecurityrules --serviceaccount=neuvector:default
kubectl create clusterrolebinding neuvector-binding-view --clusterrole=view --serviceaccount=neuvector:default
kubectl create rolebinding neuvector-admin --clusterrole=admin --serviceaccount=neuvector:default -n neuvector

Check that the following RBAC objects exist

kubectl get clusterrolebinding  | grep neuvector
kubectl get rolebinding -n neuvector | grep neuvector

kubectl get clusterrolebinding  | grep neuvector

neuvector-binding-admission                            ClusterRole/neuvector-binding-admission                            44h
neuvector-binding-app                                  ClusterRole/neuvector-binding-app                                  44h
neuvector-binding-customresourcedefinition             ClusterRole/neuvector-binding-customresourcedefinition             44h
neuvector-binding-nvadmissioncontrolsecurityrules      ClusterRole/neuvector-binding-nvadmissioncontrolsecurityrules      44h
neuvector-binding-nvsecurityrules                      ClusterRole/neuvector-binding-nvsecurityrules                      44h
neuvector-binding-nvwafsecurityrules                   ClusterRole/neuvector-binding-nvwafsecurityrules                   44h
neuvector-binding-rbac                                 ClusterRole/neuvector-binding-rbac                                 44h
neuvector-binding-view                                 ClusterRole/view                                                   44h
kubectl get rolebinding -n neuvector | grep neuvector
neuvector-admin         ClusterRole/admin            44h
neuvector-binding-psp   Role/neuvector-binding-psp   44h

Deploy NeuVector

If the underlying runtime is Docker:

kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/neuvector-docker-k8s.yaml

If the underlying runtime is containerd (this yaml also works for k3s and RKE2):

kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/neuvector-containerd-k8s.yaml
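
After applying the manifest, watch the components come up:

kubectl get pods -n neuvector -w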

On Kubernetes versions below 1.21, the following error appears; download the yaml file and change the CronJob apiVersion from batch/v1 to batch/v1beta1:

error: unable to recognize "https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/neuvector-docker-k8s.yaml": no matches for kind "CronJob" in version "batch/v1"

In 1.20.x, CronJob is still in beta; it became GA in Kubernetes 1.21.
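
A minimal sketch of that fix (assuming curl and sed are available), rewriting the apiVersion on the fly instead of editing the downloaded file by hand:

curl -sfL https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/neuvector-docker-k8s.yaml \
  | sed 's#apiVersion: batch/v1$#apiVersion: batch/v1beta1#' \
  | kubectl apply -f -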

By default the web UI is exposed through a LoadBalancer-type Service. For easier access, it can be changed to NodePort; it can also be exposed externally through an Ingress.

kubectl patch  svc neuvector-service-webui  -n neuvector --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"},{"op":"add","path":"/spec/ports/0/nodePort","value":30888}]'

Visit https://node_ip:30888
The default account is admin/admin.

Click My profile next to the avatar to open the settings page, where the password and language can be set.
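
As mentioned above, the web UI can also be exposed through an Ingress. A minimal sketch, assuming the NGINX Ingress Controller and a hypothetical hostname (the backend-protocol annotation is needed because the manager serves HTTPS only):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: neuvector-webui
  namespace: neuvector
  annotations:
    # the manager serves HTTPS only, so proxy to the backend over HTTPS
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: neuvector.example.com    # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: neuvector-service-webui
            port:
              number: 8443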

2.2. Helm Deployment

Add the repo

helm repo add neuvector https://neuvector.github.io/neuvector-helm/
helm search repo neuvector/core

Create the namespace

kubectl create namespace neuvector

Create the ServiceAccount

kubectl create serviceaccount neuvector -n neuvector

Run helm install

helm install neuvector --namespace neuvector neuvector/core \
  --set registry=docker.io \
  --set tag=5.0.0-preview.1 \
  --set controller.image.repository=neuvector/controller.preview \
  --set enforcer.image.repository=neuvector/enforcer.preview \
  --set manager.image.repository=neuvector/manager.preview \
  --set cve.scanner.image.repository=neuvector/scanner.preview \
  --set cve.updater.image.repository=neuvector/updater.preview
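
Equivalently, the same settings can be kept in a values file (a sketch derived from the --set flags above):

# values.yaml
registry: docker.io
tag: 5.0.0-preview.1
controller:
  image:
    repository: neuvector/controller.preview
enforcer:
  image:
    repository: neuvector/enforcer.preview
manager:
  image:
    repository: neuvector/manager.preview
cve:
  scanner:
    image:
      repository: neuvector/scanner.preview
  updater:
    image:
      repository: neuvector/updater.preview

helm install neuvector neuvector/core --namespace neuvector -f values.yaml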

The Helm chart parameters are documented at:
https://github.com/neuvector/neuvector-helm/tree/master/charts/core

3. High Availability Architecture Design

NeuVector HA mainly needs to consider the HA of the Controller module: as long as one Controller remains up, all data stays synchronized across the three replicas.

Controller data is mainly stored in the /var/neuvector/ directory; when a Pod is rebuilt or the cluster is redeployed, backup files are automatically loaded from this directory to restore the cluster.

3.1. Deployment strategy

NeuVector officially provides four HA deployment modes:

Method 1: without any scheduling restrictions; Kubernetes schedules the components freely.

Method 2: the NeuVector control components (manager, controller) as well as the enforcer and scanner components are given scheduling label restrictions and taint tolerations, and are deployed on the Kubernetes master nodes.

Method 3: create dedicated NeuVector nodes in the Kubernetes cluster via taints, allowing only the NeuVector control components to be scheduled there.

Method 4: the NeuVector control components (manager, controller) are given scheduling label restrictions and taint tolerations and are deployed on the Kubernetes master nodes; the enforcer and scanner components are not deployed there, which means the master nodes receive no scanning and no policy enforcement.

The following uses method 2 as the deployment example.

Label the master node with a dedicated label

kubectl label nodes nodename nvcontroller=true

Get the node's taints

kubectl get node nodename -o yaml|grep -A 5 taint

Taking a master node deployed by Rancher as an example, the taints are:

taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/controlplane
    value: "true"
  - effect: NoExecute
    key: node-role.kubernetes.io/etcd

Edit the deployment yaml: add nodeSelector and tolerations to the NeuVector control components (manager, controller), and add only tolerations to the enforcer and scanner components.

Take the manager component as an example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: neuvector-manager-pod
  namespace: neuvector
spec:
  selector:
    matchLabels:
      app: neuvector-manager-pod
  replicas: 1
  template:
    metadata:
      labels:
        app: neuvector-manager-pod
    spec:
      nodeSelector:
        nvcontroller: "true"
      containers:
        - name: neuvector-manager-pod
          image: neuvector/manager.preview:5.0.0-preview.1
          env:
            - name: CTRL_SERVER_IP
              value: neuvector-svc-controller.neuvector
      restartPolicy: Always
      tolerations:
      - effect: NoSchedule
        key: "node-role.kubernetes.io/controlplane"
        operator: Equal
        value: "true"
      - effect: NoExecute
        operator: "Equal"
        key: "node-role.kubernetes.io/etcd"
        value: "true"

3.2. Data persistence

Configure an environment variable to enable persistence of configuration data

- env:
  - name: CTRL_PERSIST_CONFIG

Once this environment variable is set, the NeuVector Controller stores its data in the /var/neuvector directory, which by default is a hostPath volume mapped to /var/neuvector on the host where the Pod runs.
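
A sketch of adding that variable to an already-deployed controller (assuming the container defines an env list, as the stock manifests do):

kubectl -n neuvector patch deployment neuvector-controller-pod --type='json' \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"CTRL_PERSIST_CONFIG"}}]'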

If a higher level of data reliability is required, the directory can also be backed through a PV by NFS or other highly reliable storage.

That way, even if all three NeuVector Controller Pod replicas are destroyed at the same time and the hosts are completely unrecoverable, no configuration data is lost.

The following takes NFS as an example.

Deploy NFS.

Create the PV and PVC

cat <<EOF | kubectl apply -f -

apiVersion: v1
kind: PersistentVolume
metadata:
  name: neuvector-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany 
  nfs:
    path: /nfsdata
    server: 172.16.0.195 

EOF
cat <<EOF | kubectl apply -f -

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: neuvector-data
  namespace: neuvector
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
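
A quick check that the claim bound:

kubectl get pv neuvector-data
kubectl get pvc neuvector-data -n neuvector
# STATUS should show Bound for both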

Modify the NeuVector Controller deployment yaml to reference the PVC, mapping the /var/neuvector directory to NFS (by default it is a hostPath mapped to the local host):

spec:
  template:
    spec:
      volumes:
        - name: nv-share
#         hostPath:               # replaced by the persistentVolumeClaim below
#           path: /var/neuvector
          persistentVolumeClaim:
            claimName: neuvector-data

Or mount the NFS directory directly in the NeuVector deployment yaml:

      volumes:
        - name: nv-share
          nfs:
            path: /opt/nfs-deployment
            server: 172.26.204.144

4. Multi-cloud Security Management

In real production environments there are often multiple clusters to manage for security, and NeuVector supports cluster federation for this.

A Federation Master service must be exposed on one cluster, and a Federation Worker service must be deployed on each remote cluster. For greater flexibility, both the Federation Master and Federation Worker services can be enabled on every cluster.

Deploy this yaml on each cluster:

apiVersion: v1
kind: Service
metadata:
  name: neuvector-service-controller-fed-master
  namespace: neuvector
spec:
  ports:
  - port: 11443
    name: fed
    nodePort: 30627
    protocol: TCP
  type: NodePort
  selector:
    app: neuvector-controller-pod

---

apiVersion: v1
kind: Service
metadata:
  name: neuvector-service-controller-fed-worker
  namespace: neuvector
spec:
  ports:
  - port: 10443
    name: fed
    nodePort: 31783
    protocol: TCP
  type: NodePort
  selector:
    app: neuvector-controller-pod
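
After applying, confirm both federation Services exist with their NodePorts:

kubectl get svc -n neuvector | grep fed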

Promote one of the clusters to be the primary cluster, configuring the externally exposed IP and a port reachable from the remote clusters.

In the primary cluster, generate a token for the remote clusters to use when connecting.

In each remote cluster, configure joining the primary cluster with the token and the connection endpoint.

Multiple NeuVector clusters can then be managed from a single interface.

5. Other Configuration

5.1. Upgrade

If NeuVector was deployed with yaml files, the upgrade can be completed by updating the image tags of the corresponding components directly. For example:

kubectl set image deployment/neuvector-controller-pod neuvector-controller-pod=neuvector/controller:2.4.1 -n neuvector
kubectl set image -n neuvector ds/neuvector-enforcer-pod neuvector-enforcer-pod=neuvector/enforcer:2.4.1

If NeuVector was deployed with Helm, you can run helm upgrade directly with the corresponding parameters.
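
For example, a sketch of a tag-only upgrade (the target tag here is hypothetical; --reuse-values keeps the settings from the original install):

helm upgrade neuvector neuvector/core -n neuvector --reuse-values --set tag=5.0.0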

5.2. Uninstall

Remove deployed components

kubectl delete -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/neuvector-docker-k8s.yaml

Delete the configured RBAC objects

kubectl get clusterrolebinding  | grep neuvector|awk '{print $1}'|xargs kubectl delete clusterrolebinding
kubectl get rolebinding -n neuvector | grep neuvector|awk '{print $1}'|xargs kubectl delete rolebinding -n neuvector

Delete the corresponding CRDs

kubectl delete -f  https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/crd-k8s-1.19.yaml

kubectl delete -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/waf-crd-k8s-1.19.yaml

kubectl delete -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/admission-crd-k8s-1.19.yaml

Summary

SUSE's open source NeuVector is a mature and stable container security management platform. In the future, NeuVector will be better integrated with Rancher products.

