
In the previous article, we learned about the basic concepts of Kubernetes, its hardware structure, and its main software components (such as Pods, Deployments, StatefulSets, Services, Ingress, and Persistent Volumes), and we saw how services communicate with each other and with the outside world.

In this article, we will learn how to:

  1. Create a NodeJS backend with a MongoDB database
  2. Write a Dockerfile to containerize our application
  3. Create a Kubernetes Deployment script to spin up the Pods
  4. Create a Kubernetes Service script to define the communication interface between the containers and the outside world
  5. Deploy an Ingress Controller for request routing
  6. Write a Kubernetes Ingress script to define the communication with the outside world


Because our code can be rescheduled from one node to another (for example, when a node no longer has enough memory, the work is moved to a different node that does), any data saved on a node is ephemeral, meaning the MongoDB data would be unstable. In the next article, we will discuss the problem of data persistence and how to use Kubernetes Persistent Volumes to safely store our persistent data.

In this article, we will use NGINX as the Ingress Controller and Azure Container Registry to store our custom Docker images. All the scripts written in this article can be found in the Stupid Simple Kubernetes git repo. If needed, you can get them here:

https://github.com/CzakoZoltan08/StupidSimpleKubernetes-AKS

Please note: these scripts are not tied to any particular platform, so you can follow this tutorial using another cloud provider or a local cluster with K3s. I recommend K3s because it is very lightweight, with all dependencies packaged into a single binary smaller than 100 MB. What's more, it is a highly available, CNCF-certified Kubernetes distribution designed for production workloads in resource-constrained environments. For more information, you can visit the official documentation:

https://docs.rancher.cn/k3s/
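
For reference, K3s can typically be installed on a Linux machine with a single command; this is the quick-start script from the official documentation (review any script before piping it into a shell):

curl -sfL https://get.k3s.io | sh -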

Prerequisites

Before starting this tutorial, make sure that Docker and kubectl are installed.

Kubectl installation link:

https://kubernetes.io/docs/tasks/tools/#install-kubectl-on-windows

The Kubectl commands used in this tutorial can be found in the Kubectl cheat sheet ( https://kubernetes.io/docs/reference/kubectl/cheatsheet/ ).

In this tutorial, we will use Visual Studio Code, but this is not mandatory; any other editor works as well.
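
Before moving on, you can quickly verify that both tools are available from your terminal:

docker --version  
kubectl version --client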

Create a production-ready microservice architecture

Containerize the application

The first step is to create a Docker image of the NodeJS backend. After the image is built, we will push it to a container registry, where it can be accessed and pulled by the Kubernetes service (in this case, Azure Kubernetes Service, or AKS).

The Dockerfile for the NodeJS backend:  
FROM node:13.10.1

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

# Bundle app source
COPY . .

EXPOSE 3000

CMD [ "node", "index.js" ]

In the first line, we define the base image from which our backend service image will be built. In this case, we use the official Node image, version 13.10.1, from Docker Hub.

In line 3, we create the directory that will hold the application code inside the image. This will be the working directory of the application.

The image already comes with Node.js and NPM installed, so in the next step we use the npm command to install the application's dependencies.

Note that to install the necessary dependencies, we don't need to copy the entire directory, but only the package.json, which allows us to take advantage of the cached Docker layer.

For more information about writing efficient Dockerfiles, please visit the following link:

http://bitjudo.com/blog/2014/03/13/building-efficient-dockerfiles-node-dot-js/
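
One related, optional tip: a .dockerignore file placed next to the Dockerfile keeps the locally installed node_modules folder (and other noise) out of the COPY . . step. A minimal example could contain:

node_modules  
npm-debug.log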

In line 9, we copy the source code into the working directory, and in line 11, we expose it on port 3000 (you can choose another port if you want, but make sure to update the Kubernetes Service script accordingly).

Finally, in line 13, we define the command to run the application (inside the Docker container). Note that there should be only one CMD instruction in each Dockerfile. If more than one is included, only the last one will take effect.

Now that we have defined the Dockerfile, we will use the following Docker commands to build an image from the Dockerfile (using Visual Studio Code's Terminal or using CMD on Windows):

docker build -t node-user-service:dev .

Note the small dot at the end of the Docker command, which means that we are building the image from the current directory, so make sure you are in the same folder as the Dockerfile (in this case, the root folder of the repo).

To run the image locally, we can use the following command:

docker run -p 3000:3000 node-user-service:dev  
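
To verify that the container is actually up, you can list the running containers and send a test request (this assumes the backend answers on its root path):

docker ps  
curl http://localhost:3000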

To push this image to our Azure Container Registry, we have to tag it using the format <container-registry-login-server>/<image-name>:<tag>. In this example, it looks like this:

docker tag node-user-service:dev stupidsimplekubernetescontainerregistry.azurecr.io/node-user-service:dev

The last step is to push it to our container registry using the following Docker command:

docker push stupidsimplekubernetescontainerregistry.azurecr.io/node-user-service:dev
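
If the push is rejected with an authentication error, you first have to log in to the registry. With the Azure CLI installed, this is typically done with:

az acr login --name stupidsimplekubernetescontainerregistry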

Use the deployment script to create a Pod

NodeJS backend

Next, we define the Kubernetes Deployment script, which will automatically manage the Pods for us.

apiVersion: apps/v1  
kind: Deployment  
metadata:  
  name: node-user-service-deployment  
spec:  
  selector:  
    matchLabels:  
      app: node-user-service-pod  
  replicas: 3  
  template:  
    metadata:  
      labels:  
        app: node-user-service-pod  
    spec:  
      containers:  
        - name: node-user-service-container  
          image: stupidsimplekubernetescontainerregistry.azurecr.io/node-user-service:dev  
          resources:  
            limits:  
              memory: "256Mi"  
              cpu: "500m"  
          imagePullPolicy: Always  
          ports:  
            - containerPort: 3000

Through the Kubernetes API, we can query and manipulate the state of the objects in a Kubernetes cluster (such as Pods, namespaces, ConfigMaps, etc.). As specified in the first line, the current stable version of the API for Deployments is apps/v1.

In every Kubernetes .yml script, we must define the Kubernetes resource type (Pod, Deployment, Service, etc.) using the kind keyword. As you can see, in line 2 we declared that we want to use the Deployment resource.

Kubernetes allows you to add some metadata to resources. This way, you can more easily identify, filter, and reference resources.

In line 5, we define the specification of the resource. In line 8, we specify that this Deployment should apply only to the Pods labeled app: node-user-service-pod, and in line 9 we can see that we want to create 3 replicas of the same Pod.

The template (starting at line 10) defines the Pods. Here, we add the label app: node-user-service-pod to each Pod, so that the Deployment can identify them. In lines 16 and 17, we define which Docker container should run inside the Pod. As you can see in line 17, we use the Docker image from our Azure Container Registry, which we built and pushed in the previous section.

We can also define resource limits for the Pods, to avoid resource starvation (when one Pod uses up all the resources and other Pods get none). Furthermore, when you specify a resource request for the containers in a Pod, the scheduler uses this information to decide which node to place the Pod on. When you specify a resource limit for a container, the kubelet enforces that limit, so the running container is not allowed to use more of that resource than you set. The kubelet also reserves at least the requested amount of that system resource for the container. Note that if the nodes do not have enough hardware resources (such as CPU or memory), the Pod will never be scheduled.
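
Our script above only sets limits. For illustration, a container spec that defines both requests (used by the scheduler) and limits (enforced by the kubelet) could look like this; the values here are just examples:

resources:  
  requests:  
    memory: "128Mi"  
    cpu: "250m"  
  limits:  
    memory: "256Mi"  
    cpu: "500m"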

The last step is to define the port used for communication. In this case, we use port 3000; this port number must be the same as the port exposed in the Dockerfile.
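
Once the Deployment has been applied (see the section on applying the .yml scripts below), you can check that the three replicas came up by filtering on the Pod label:

kubectl get deployments  
kubectl get pods -l app=node-user-service-pod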

MongoDB

The Deployment script for the MongoDB database is very similar. The only difference is that we have to specify a volume mount (a folder on the node where the data will be saved).

apiVersion: apps/v1  
kind: Deployment  
metadata:  
  name: user-db-deployment  
spec:  
  selector:  
    matchLabels:  
      app: user-db-app  
  replicas: 1  
  template:  
    metadata:  
      labels:  
        app: user-db-app  
    spec:  
      containers:  
        - name: mongo  
          image: mongo:3.6.4  
          command:  
            - mongod  
            - "--bind_ip_all"  
            - "--directoryperdb"  
          ports:  
            - containerPort: 27017  
          volumeMounts:  
            - name: data  
              mountPath: /data/db  
          resources:  
            limits:  
              memory: "256Mi"  
              cpu: "500m"  
      volumes:  
        - name: data  
          persistentVolumeClaim:  
            claimName: static-persistence-volume-claim-mongo

In this case, we use the official MongoDB image (line 17) directly from Docker Hub. The volume mount is defined in line 24. The last four lines will be explained in the next article, when we discuss Kubernetes Persistent Volumes.

Create a service for network access

Now that the Pods are up and running, we have to define the communication between the containers and with the outside world. For this, we need to define a Service. The relation between a Service and a Deployment is one-to-one, so for each Deployment we should have one Service. The Deployment also manages the life cycle of the Pods and is responsible for monitoring them, while the Service is responsible for enabling network access to a set of Pods.

apiVersion: v1  
kind: Service  
metadata:  
  name: node-user-service  
spec:  
  type: ClusterIP  
  selector:  
    app: node-user-service-pod  
  ports:  
    - port: 3000  
      targetPort: 3000

The important part of this .yml script is the selector, which defines how to identify the Pods (created by the Deployment) that this Service references. As you can see in line 8, the selector is app: node-user-service-pod, because that is how the Pods in the previously defined Deployment are labeled. Another important thing is the port mapping: incoming requests hit the Service on port 3000, defined in line 10, and are routed to the container's target port, defined in line 11.

The Kubernetes Service script for the MongoDB Pod is very similar; we only have to update the selector and the ports. Note that clusterIP is set to None, which makes this a headless Service: DNS resolves the Service name directly to the Pod's IP instead of going through a load-balanced virtual IP.

apiVersion: v1  
kind: Service  
metadata:  
  name: user-db-service  
spec:  
  clusterIP: None  
  selector:  
    app: user-db-app  
  ports:  
    - port: 27017  
      targetPort: 27017
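
After applying both Service scripts, you can list them and confirm that node-user-service received a cluster IP while user-db-service shows None (headless):

kubectl get services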

Configure external traffic

To communicate with the outside world, we need to deploy an Ingress Controller and specify the routing rules using an Ingress Kubernetes resource.

To configure the NGINX Ingress Controller, we will use the script that can be found at the following link:

https://github.com/CzakoZoltan08/StupidSimpleKubernetes-AKS/blob/master/manifest/ingress-controller/nginx-ingress-controller-deployment.yml

This is a generic script that can be applied without modifications (a detailed explanation of the NGINX Ingress Controller is beyond the scope of this article).

The next step is to define a load balancer, which will be used to route external traffic using a public IP address (the load balancer is provisioned by the cloud provider).

kind: Service  
apiVersion: v1  
metadata:  
  name: ingress-nginx  
  namespace: ingress-nginx  
  labels:  
    app.kubernetes.io/name: ingress-nginx  
    app.kubernetes.io/part-of: ingress-nginx  
spec:  
  externalTrafficPolicy: Local  
  type: LoadBalancer  
  selector:  
    app.kubernetes.io/name: ingress-nginx  
    app.kubernetes.io/part-of: ingress-nginx  
  ports:  
    - name: http  
      port: 80  
      targetPort: http  
    - name: https  
      port: 443  
      targetPort: https
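
After applying this script, the cloud provider needs a short while to provision the public IP. You can watch for the EXTERNAL-IP column to be filled in with:

kubectl get service ingress-nginx --namespace ingress-nginx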

Now that we have the Ingress Controller and the load balancer up and running, we can define an Ingress Kubernetes resource to specify the routing rules.

apiVersion: extensions/v1beta1  
kind: Ingress  
metadata:  
  name: node-user-service-ingress  
  annotations:  
    kubernetes.io/ingress.class: "nginx"  
    nginx.ingress.kubernetes.io/rewrite-target: /$2  
spec:  
  rules:  
    - host: stupid-simple-kubernetes.eastus2.cloudapp.azure.com  
      http:  
        paths:  
          - backend:  
              serviceName: node-user-service  
              servicePort: 3000  
            path: /user-api(/|$)(.*)  
          # - backend:  
          #     serviceName: nestjs-i-consultant-service  
          #     servicePort: 3001  
          #   path: /i-consultant-api(/|$)(.*)

In line 6, we define the Ingress Controller type (this is a predefined value; Kubernetes currently supports and maintains the GCE and nginx controllers).

In line 7, we define the rewrite target rule, and in line 10, we define the host name.

For each service that should be reachable from the outside world, we add an entry to the paths list (starting at line 13). In this example, we added only one entry, for the NodeJS user service backend, which will be accessible through port 3000. The /user-api path uniquely identifies our service, so any request starting with stupid-simple-kubernetes.eastus2.cloudapp.azure.com/user-api will be routed to this NodeJS backend. If you want to add other services, you have to update this script (see the commented-out code).
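
As a quick smoke test (assuming the backend serves a route at its root, which the rewrite rule maps /user-api/ to), you could test the routing with:

curl http://stupid-simple-kubernetes.eastus2.cloudapp.azure.com/user-api/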

Apply the .yml scripts

To apply these scripts, we will use kubectl. The kubectl command to apply a file is:

kubectl apply -f <file-name>

In this example, if you are in the root folder of the Stupid Simple Kubernetes repo, you need to execute the following command:

kubectl apply -f .\manifest\kubernetes\deployment.yml  
kubectl apply -f .\manifest\kubernetes\service.yml  
kubectl apply -f .\manifest\kubernetes\ingress.yml  
kubectl apply -f .\manifest\ingress-controller\nginx-ingress-controller-deployment.yml  
kubectl apply -f .\manifest\ingress-controller\ngnix-load-balancer-setup.yml  

After applying these scripts, everything is in place, and we can call the backend from the outside world (for example, using Postman).
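
Before reaching for Postman, you can confirm from the terminal that the Pods are running and that the Ingress was admitted with an address:

kubectl get pods  
kubectl get ingress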

Conclusion

In this tutorial, we learned how to create different kinds of Kubernetes resources, such as Pods, Deployments, Services, an Ingress, and an Ingress Controller. We created a NodeJS backend with a MongoDB database, containerized both services, and deployed the NodeJS backend with 3 Pod replicas alongside a single MongoDB Pod.

In the next article, we will look at the problem of persisting data and introduce Persistent Volumes in Kubernetes.

About the Author

Czako Zoltan is an experienced full-stack developer with broad expertise across front-end, back-end, DevOps, IoT, and artificial intelligence.

