
Let's start by looking at how builds and deployments were done in the old days — a process everyone is probably familiar with:

  • The developer compiles, minifies, and bundles the source code into a package file
  • The packaged file is uploaded to the server manually

Obviously, this process is not only cumbersome but also inefficient: every release takes a long time to build and deploy.

CI/CD emerged later to solve this problem.

So next, let's talk about what CI/CD actually is.

CI/CD stands for Continuous Integration / Continuous Deployment. The CD part is also often interpreted as Continuous Delivery.

More specifically:

  • Continuous integration: when code in the repository changes, it is automatically tested and built, and the results are reported back.
  • Continuous delivery: building on continuous integration, the integrated code is deployed in turn to the test, pre-release, and production environments.

Having read this far, many of you are probably thinking:

  • Isn't this usually handled by ops?
  • It has nothing to do with business code — what's the point of understanding it?
  • It's all server-side stuff — docker , nginx , cloud servers, and so on. How am I supposed to learn it?

I used to think the same way: it didn't seem to have much to do with my business code, so there wasn't much need to understand it.

But recently, while working on a full-stack project (a project meant to break through my own bottleneck), I ran into exactly these problems and found myself in a knowledge blind spot.

There was no way around it but to fill in the gaps.

But once I had learned this material and practiced these processes in the project, my knowledge expanded considerably. I gained a new understanding of operating systems, real-world build and deployment, and even front-end engineering as a whole.

Here is the architecture diagram of the full-stack project mentioned above:

This large project takes low-code as its core and includes nine systems: the editor front end, the editor back end, the C-end H5, the component library, the component platform, the admin system front end, the admin system back end, the statistics service, and a self-developed CLI.

The editor front end has already been covered in detail in the article on how to design and implement an H5 marketing page building system.

About 70% of the whole project is done so far. Many problems came up along the way, and solving them brought big improvements. A series of articles on the finer points of the project will follow, all packed with practical content.

Back to the topic of this article: using Docker , docker-compose , nginx , SSH , and GitHub Actions to automatically deploy a pure front-end project — the admin system front end — to a test machine. There are two reasons for choosing this project to demonstrate automated deployment to a test machine:

  • The admin system's business logic is relatively simple, so we can focus on the automated deployment process
  • A pure front-end project matches the situation of most front-end developers and can be applied right away

The overall idea

The front-end code is packaged into static files, which can be served by nginx . The idea:

  • Build a Docker container (with nginx inside)
  • Copy the dist/ directory into the Docker container
  • Start the nginx service
  • Map a host port to the container port so the site can be accessed

Core code changes:

  • nginx.conf (used by nginx inside the Docker container)
  • Dockerfile
  • docker-compose.yml
⚠️ This article combines theory with practice: each knowledge point is introduced first, followed by the project code or configuration files that relate to it.

The following sections cover Docker , docker-compose , ssh , and github actions .

Docker

Docker was described in detail a long time ago in the article "Who says the front end doesn't need to learn docker?". Here is a brief recap.

docker can be regarded as a high-performance virtual machine, mainly used for virtualization on linux . Developers can package their applications and dependencies into a portable container and publish it to any popular linux machine. Containers are fully sandboxed and have no interfaces to one another.

Inside a container you can do anything a server can do — for example, run npm run build to package a project in a container with a node environment, or deploy the project in a container with an nginx environment, and so on.

Installing docker on centos

Since my cloud server runs centos , here is how to install docker on centos :


$ sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine

$ sudo yum install -y yum-utils device-mapper-persistent-data lvm2

$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

$ sudo yum install docker-ce docker-ce-cli containerd.io

$ sudo systemctl start docker

$ sudo docker run hello-world

Dockerfile

docker uses a Dockerfile as the configuration file for building an image. Let's take a quick look at a Dockerfile that builds a node application:

FROM node:12.10.0

WORKDIR /usr/app

COPY package*.json ./

RUN npm ci -qy

COPY . .

EXPOSE 3000

CMD ["npm", "start"]

Explain the meaning of each keyword.

FROM

Start from this base image

WORKDIR

Set the working directory

COPY

Copy files into the image

RUN

Execute a command in a new layer

EXPOSE

Declare the port the container listens on

CMD

The default command to execute when the container starts
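Putting these keywords together, here is a sketch of how the image above might be built and run. The image name my-node-app is made up for illustration, and this assumes docker is installed and the daemon is running:

```shell
# Build an image named my-node-app from the Dockerfile in the current directory
docker build -t my-node-app .

# Start a container from it in the background, mapping host port 3000
# to the container port 3000 declared by EXPOSE
docker run -d -p 3000:3000 --name my-node-app my-node-app
```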

Here is the Dockerfile used in the project:

# Dockerfile
FROM nginx

# Copy the contents of dist into /usr/share/nginx/html/
# so npm run build must be executed beforehand to produce the dist directory — important!!!
COPY dist/ /usr/share/nginx/html/

# Copy the nginx configuration file
COPY nginx.conf /etc/nginx/nginx.conf

# Set the time zone
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone

# Create /admin-fe-access.log , which corresponds to nginx.conf
CMD touch /admin-fe-access.log && nginx && tail -f /admin-fe-access.log

In this file, we did the following things:

1. Use the nginx Docker image as the base image .

2. Copy everything in the packaged dist/ folder into the default HTML folder of the nginx image, /usr/share/nginx/html/ .

3. Copy the custom nginx configuration file nginx.conf to /etc/nginx/nginx.conf inside the image.

4. Set the time zone.

5. Create /admin-fe-access.log , start nginx , and use tail -f to simulate a blocking foreground process, similar to what pm2 does.
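Before wiring this into CI, the image can be built and smoke-tested by hand. A sketch, assuming dist/ has been produced and docker is available (the host port 8085 matches the compose file shown later, but is otherwise arbitrary):

```shell
npm run build                  # produce dist/ first — the COPY step in the Dockerfile needs it
docker build -t admin-fe .     # build the image from the Dockerfile above
docker run -d -p 8085:80 --name admin-fe admin-fe
curl -I http://127.0.0.1:8085  # an HTTP 200 here means nginx is serving the dist files
```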

Here is the nginx.conf file just mentioned:

# number of nginx worker processes, usually set equal to the number of CPU cores
worker_processes auto;

# global error log definitions
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

# pid file of the master process
#pid        logs/nginx.pid;

# event model settings
events {
    # max connections per worker process (total max clients = worker_connections * worker_processes)
    worker_connections  1024;
}

# http server settings
http {
    # map of file extensions to MIME types
    include       mime.types;
    # default MIME type
    default_type  application/octet-stream;

    # log format
    # $remote_addr and $http_x_forwarded_for record the client IP address
    # $remote_user records the client user name
    # $time_local records the access time and time zone
    # $request records the request URL and HTTP protocol
    # $status records the request status; 200 means success
    # $body_bytes_sent records the size of the response body sent to the client
    # $http_referer records the page the request came from
    # $http_user_agent records information about the client browser
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for"';

    # access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    # keep-alive timeout, in seconds
    keepalive_timeout  65;

    # max size of files uploaded through nginx
    client_max_body_size   20m;

    #gzip  on;

    # virtual host configuration
    server {
        # listening port
        listen       80;
        # there can be multiple server names, separated by spaces
        server_name  admin-fe;

        #charset koi8-r;

        # access log for this virtual host
        access_log  /admin-fe-access.log  main; # note: /admin-fe-access.log is created in the Dockerfile

        # entry file settings
        location / {
            root   /usr/share/nginx/html;   # directory containing the entry file
            index  index.html index.htm;    # default entry file names
            try_files $uri $uri/ /index.html;
        }
        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

The key points: listen on port 80 , define the log file as admin-fe-access.log , and set the root directory of the entry file to /usr/share/nginx/html . These correspond one-to-one with the Dockerfile .

Having covered the Dockerfile and its related configuration files, let's look at a few core concepts in docker .

Docker core concept

There are three very important concepts in docker :

  • Image
  • Container
  • Repository

A picture to show the relationship:

If a container is a lightweight server, then an image is the template for creating containers. One Docker image can create multiple containers; their relationship is like that between classes and instances in JavaScript .

Common image commands:

  • Pull an image: docker pull <image-name>:<tag>
  • List all images: docker images
  • Delete an image: docker rmi <image-id>
  • Push an image: docker push <username>/<repository>:<tag>
If docker images shows a repository as <none> , you can run docker image prune to delete it

Common container commands

  • Start the container: docker run -p xxx:xxx -v=hostPath:containerPath -d --name <container-name> <image-name>

    • -p port mapping
    • -v data volume, file mapping
    • -d run in the background
    • --name defines the container name
  • View all containers: docker ps (add -a to also show stopped containers)
  • Stop container: docker stop <container-id>
  • Delete container: docker rm <container-id> (add -f force deletion)
  • View container information (such as IP address, etc.): docker inspect <container-id>
  • View the container log: docker logs <container-id>
  • Enter the container console: docker exec -it <container-id> /bin/sh
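Putting the commands above together, a typical session with the official nginx image might look like this (a sketch; the port and container name are arbitrary):

```shell
docker pull nginx:latest                           # download the image
docker run -d -p 8080:80 --name web nginx:latest   # start a container in the background
docker ps                                          # the container should be listed as "web"
docker logs web                                    # view nginx output
docker exec -it web /bin/sh                        # open a shell inside the container
docker stop web && docker rm web                   # tear it down
```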

After an image is built, it can easily run on the current host. But to use the image on other servers, we need a centralized service for storing and distributing images — Docker Registry is exactly that service.

A Docker Registry can contain multiple repositories ( Repository ); each repository can contain multiple tags ( Tag ); and each tag corresponds to one image. So: an image registry is where Docker centrally stores image files, similar to the code repositories we use for source code.
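For example, publishing a local image to a registry such as Docker Hub is just a matter of tagging it with a registry-qualified name and pushing (a sketch; <username> and the tag are placeholders):

```shell
docker login                                   # authenticate against the registry
docker tag admin-fe <username>/admin-fe:1.0.0  # give the local image a registry-qualified name
docker push <username>/admin-fe:1.0.0          # upload it; others can now docker pull it
```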

docker-compose

The docker-compose project is an official open-source Docker project for quickly orchestrating clusters of Docker containers. It lets users define a group of associated application containers as a project ( project ) through a single docker-compose.yml template file (YAML format).

The biggest advantage of using compose is that you only need to define the application stack (that is, all the services the application uses) in one file, placed in the project root and version-controlled together with the source code. Others can start the services quickly after cloning the repo.

It is typically suited to projects that need many runtime environments (i.e., multiple docker containers), for example depending on nodejs , mysql , mongodb , redis , etc. at the same time.
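To illustrate that multi-container scenario, a compose file might declare a node service alongside mysql and redis like this (a sketch, not from this project; the service names, images, and password are made up):

```yaml
version: '3'
services:
  api:
    build: .            # a node app built from the project's own Dockerfile
    ports:
      - 3000:3000
    depends_on:         # start db and cache before the api service
      - db
      - cache
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example  # placeholder password
  cache:
    image: redis:6
```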

Here is the project's docker-compose.yml file:

version: '3'
services:
  admin-fe:
    build:
      context: .
      dockerfile: Dockerfile
    image: admin-fe # name of the image built from the Dockerfile above (which is based on the official nginx image)
    container_name: admin-fe
    ports:
      - 8085:80 # the host can reach nginx in the container at 127.0.0.1:8085

This builds an image based on the Dockerfile above; the port mapping is 8085:80 , where 8085 is the host port and 80 corresponds to port 80 exposed by nginx .

Common commands

  • Build a container: docker-compose build <service-name>
  • Start all services: docker-compose up -d (run in the background)
  • Stop all services: docker-compose down
  • View services: docker-compose ps
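For this project, the day-to-day loop on the server therefore boils down to the following sketch, run from the directory containing docker-compose.yml:

```shell
docker-compose build admin-fe  # rebuild the image after the code (and dist/) changes
docker-compose up -d           # start or recreate the container in the background
docker-compose ps              # admin-fe should be listed with state "Up"
```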

ssh and cloud server

First, a word about the cloud server. Since we want one-click deployment to a test machine, there has to be a test machine — a cloud server. Mine runs a CentOS 8.4 64-bit operating system.

With a server, how do I log in?

There are generally two ways to log in to a cloud server: password login and ssh key login. With password login you have to type the password every time, which is tedious, so here we use ssh login. For how to log in to a remote server without a password, refer to the SSH password-free login configuration guide.

After that, every time you log in you can simply run ssh <username>@<IP> and get in without a password.
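For reference, password-free login boils down to generating a key pair locally and copying the public key into the server's ~/.ssh/authorized_keys . A sketch (the user and IP are placeholders, matching the ones used later in the workflow file):

```shell
# Generate an RSA key pair locally, if ~/.ssh/id_rsa does not exist yet
ssh-keygen -t rsa -b 4096

# Append the public key to ~/.ssh/authorized_keys on the server
ssh-copy-id work@106.xx.xx.xx

# From now on this should log in without prompting for a password
ssh work@106.xx.xx.xx
```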

Installing the required packages on the cloud server

Next, install the basic packages on the cloud server. On CentOS , packages are installed with yum , which works differently from npm .

docker

# Step 1: remove old versions
sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
# Step 2: install the required system tools
sudo yum install -y yum-utils
# Step 3: add the repository info, using the Aliyun mirror
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 4: install docker-ce
sudo yum install docker-ce docker-ce-cli containerd.io
# Step 5: start the docker service
sudo systemctl start docker
# Step 6: run the hello-world image
sudo docker run hello-world

If, like me, you see Hello from Docker! , then Docker has been installed successfully!

docker-compose

Find the latest docker-compose release (for example 1.27.4 ) at https://github.com/docker/compose/releases/latest , then run the following commands to install docker-compose :

# download the specified version of docker-compose into /usr/bin
curl -L https://github.com/docker/compose/releases/download/1.27.4/docker-compose-`uname -s`-`uname -m` -o /usr/bin/docker-compose

# make docker-compose executable
chmod +x /usr/bin/docker-compose

After installation, run docker-compose version on the command line to verify that the installation succeeded:

node

First make sure you can access the EPEL repository by running the following command:

sudo yum install epel-release

Now you can use yum to install Node.js :

sudo yum install nodejs

Verify it with node --version and npm --version :

nginx

Installing nginx with yum is very simple — just one command:

$ sudo yum -y install nginx   # install nginx

git

Also installed with yum :

yum install git

Finally, let's take a look at github actions , which ties together all the points mentioned above.

github actions

As you may know, continuous integration consists of many operations, such as pulling code, running test cases, logging in to remote servers, publishing to third-party services, and so on. GitHub calls these operations actions .

Let's first understand some terms:

  • workflow : one run of the continuous-integration process is a workflow.
  • job : a workflow is composed of one or more jobs, meaning one CI run can complete multiple tasks.
  • step : each job is made up of multiple steps, completed one after another.
  • action : each step can execute one or more commands (actions) in turn.

workflow file

GitHub Actions configuration files are called workflow files and are stored in the .github/workflows directory of the code repository.

Workflow files use the YAML format. The file name is arbitrary, but the extension must be .yml , for example deploy.yml . A repository can have multiple workflow files. Whenever GitHub finds a .yml file in the .github/workflows directory, it runs that file automatically.

A workflow file has many configuration fields. Here are some of the basic ones.

name

The name field is the name of the workflow .

If this field is omitted, it defaults to the workflow file name.
name: deploy for feature_dev

on

The on field specifies the conditions that trigger the workflow , such as push or pull_request .

When specifying a trigger event, you can limit branches or tags.

on:
  push:
    branches:
      - master

The above code specifies that only a push event on the master branch triggers the workflow .

jobs

The jobs field indicates one or more tasks to be performed. The runs-on field specifies the virtual machine environment required to run them.

runs-on: ubuntu-latest

steps

The steps field specifies the steps of a Job , which can contain one or more steps. Each step can specify the following three fields.

  • jobs.<job_id>.steps.name : Step name.
  • jobs.<job_id>.steps.run : The command or action to run in this step.
  • jobs.<job_id>.steps.env : The environment variables required for this step.
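Putting name , on , jobs , and steps together, a minimal complete workflow that just prints a message on every push might look like this (a toy example, not from the project):

```yaml
name: hello

on: push

jobs:
  hello-job:
    runs-on: ubuntu-latest
    steps:
      - name: say hello
        run: echo "hello, $GREETING_TARGET"
        env:
          GREETING_TARGET: world  # example of a step-level environment variable
```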

Below is the project's .github/workflows/deploy-dev.yml file:

name: deploy for feature_dev

on:
  push:
    branches:
      - 'feature_dev'
    paths:
      - '.github/workflows/*'
      - '__test__/**'
      - 'src/**'
      - 'config/*'
      - 'Dockerfile'
      - 'docker-compose.yml'
      - 'nginx.conf'

jobs:
  deploy-dev:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js
        uses: actions/setup-node@v1
        with:
          node-version: 14
      - name: lint and test # run lint and tests
        run: |
          npm i
          npm run lint
          npm run test:local
      - name: set ssh key # temporarily set up the ssh key
        run: |
          mkdir -p ~/.ssh/
          echo "${{secrets.COSEN_ID_RSA}}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan "106.xx.xx.xx" >> ~/.ssh/known_hosts
      - name: deploy
        run: |
          ssh work@106.xx.xx.xx "
            cd /home/work/choba-lego/admin-fe;
            git remote add origin https://Cosen95:${{secrets.COSEN_TOKEN}}@github.com/Choba-lego/admin-fe.git;
            git checkout feature_dev;
            git config pull.rebase false;
            git pull origin feature_dev;
            git remote remove origin;

            # build prd-dev
            # npm i;
            # npm run build-dev;

            # start docker
            docker-compose build admin-fe; # must match the service name in docker-compose.yml
            docker-compose up -d;
          "
      - name: delete ssh key
        run: rm -rf ~/.ssh/id_rsa

Here is an overview:

1️⃣ The whole process is triggered by a push to the feature_dev branch.

2️⃣ There is only one job , running in the virtual machine environment ubuntu-latest .

3️⃣ The first step uses the most basic action , actions/checkout@v2 . Its job is to let our workflow access our repo .

4️⃣ The second step installs node on the machine executing the workflow ; the action used here is actions/setup-node@v1 .

5️⃣ The third step is to execute lint and test .

6️⃣ The fourth step temporarily sets up the ssh key , in preparation for logging in to the server in the next step.

7️⃣ The fifth step is deployment: ssh into the server, pull the latest branch code, install dependencies, build, and finally start docker to build the image and bring up the container. At this point the service is up on the test machine.

8️⃣ The last step is to delete ssh key .

Finally, head over to github to see the complete process:

Among them, the deploy stage is the core:

Summary

I've written a lot in one go — I hope it all made sense 😂

If you have any questions, leave a message in the comments and I'll answer as soon as I see it 😊

There will be many more articles about this project — stay tuned~


前端森林