
This is part of a free knowledge series, available in both a graphic (text) version and a video version; what you are reading now is the graphic version.

The NGINX series is divided into three chapters: the basic chapter, the advanced chapter, and the enterprise practice chapter. What you are reading now is the basic chapter.

The video version is published in my community. Friends who prefer video can log in to the community by scanning the QR code or via WeChat authorization and watch there.

Preface

Learning goal of the basic chapter: understand NGINX and get hands-on with it; be able to independently complete a load balancing configuration, bind a domain name, and access back-end services through that domain name.

Learning goal of the whole series: understand NGINX, independently complete load balancing configurations, build a highly available enterprise-grade production environment, and monitor NGINX.

Friends who attended the video class have already completed load balancing configurations on their own and submitted their homework.


Over the coming six months we will publish many public classes, in both graphic and video versions. The important thing is that they are all free! The course list is as follows.


In the list, green labels mark content that has already been published, and red labels mark content still in preparation.

NGINX Basics Graphic Version

Okay, let's get started.

If you rarely deal with back-end servers, you may well ask: what is NGINX?

For a definition we can refer to the official website and Baidu Baike: NGINX is a high-performance HTTP server and also a reverse proxy server. Besides the HTTP protocol, it also supports mail protocols, TCP/UDP, and more.

What can it do?

In my opinion, it is essentially a gateway. Its main functions are: request forwarding, rate limiting, authentication, and load balancing. The reverse proxying mentioned above can be classified under request forwarding.

Forward proxy? Reverse proxy?

We won't dwell on the theory here; you can read this explanation of the topic: https://zhuanlan.zhihu.com/p/25707362

To summarize briefly: a forward proxy acts on behalf of the client, while a reverse proxy acts on behalf of the server.

If you write crawlers, the IP proxies you normally use are forward proxies: the crawler forwards its requests to the target through the proxy. The NGINX reverse proxy we mentioned forwards the client's requests to the back end. The article linked above includes helpful diagrams of both.


Are there many companies using NGINX?

Most companies use NGINX, from giants such as Google, Meta (Facebook), Amazon, Alibaba, Tencent, and Huawei on down; I'd estimate 70%+ of Internet companies worldwide use it (probably even more). Our community site also runs on NGINX.

Install NGINX

The installation below is based on Ubuntu 20.04 on a cloud server. In the basic chapter we use the quick package installation so we can get hands-on with the basics; the advanced chapter will cover compiling and installing from source.

Open a terminal, execute sudo apt install nginx -y, and wait for the command to finish. After installation, NGINX starts on its own, and you can visit your server's address. For example, my server IP is 101.42.137.185, so I visit http://101.42.137.185


If the page displays Welcome to nginx, the service is working. If not, check the error messages the terminal printed during installation, or check your firewall, security group policy, and so on. (If you are unsure how, you can learn from the Linux Cloud Server Open Course.)

A brief look at NGINX's basic working principle and module relationships

NGINX has one master process and multiple worker processes. The master process maintains the whole service: reading and parsing the configuration, managing the worker processes, reloading the configuration, and so on; the worker processes are what actually respond to requests.

The number of working processes can be adjusted in the configuration file.

NGINX is composed of modules, which are controlled by directives in the configuration file. In other words, the configuration file determines how NGINX works.

I'll again point to another article rather than rewrite it all myself: see https://zhuanlan.zhihu.com/p/133257100 for NGINX's principles and architecture. At this early stage only one area needs our attention, the module part; skim it for a general understanding, no need to go deep.

NGINX signals

Signal here means a control signal: signals control NGINX's working state. The signal syntax is

nginx -s signal

Commonly used signals are

stop — fast shutdown
quit — graceful shutdown
reload — reload the configuration
reopen — reopen the log files

The graceful way to stop NGINX is nginx -s quit, which lets NGINX finish work already in progress before exiting.

NGINX configuration directives

Building on the earlier community public class, let's look at how the system manages the NGINX application before formally discussing NGINX configuration. Find NGINX's systemd service configuration via the status command:

> systemctl status nginx

The systemd service configuration of NGINX:

[Unit]
Description=A high performance web server and a reverse proxy server
Documentation=man:nginx(8)
After=network.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'
ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'
ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
ExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid
TimeoutStopSec=5
KillMode=mixed

[Install]
WantedBy=multi-user.target

From the ExecStart entry you can confirm the binary path /usr/sbin/nginx. This service file was covered in the Linux cloud server open class, so it is only mentioned here.

Find the default main configuration file

Now the configuration file part officially begins.

NGINX has a main configuration file and auxiliary configuration files. The main configuration file is named nginx.conf by default and stored at /etc/nginx/nginx.conf. The paths of the auxiliary configuration files are controlled by the main configuration file, so their file names and paths can be changed; the names usually end in .conf.

After installation, if you don't know where the main configuration file is, check the default path or use the find command.

> sudo find / -name nginx.conf
/etc/nginx/nginx.conf

Now for the basic structure and purpose of the main configuration file. Use cat /etc/nginx/nginx.conf to print its contents. (If you are not familiar with Linux file-viewing commands, see the Linux cloud server open class.)

user www-data;  # user the worker processes run as
worker_processes auto;  # number of worker processes
pid /run/nginx.pid;  # PID file
include /etc/nginx/modules-enabled/*.conf;  # module/plugin configuration

events {
        worker_connections 768;  # maximum simultaneous connections per worker
        # multi_accept on;
}

http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
        include /etc/nginx/conf.d/*.conf;  # auxiliary configuration file path
        include /etc/nginx/sites-enabled/*;
}


# example
#mail {
#  ...
#}

I trimmed the file listed above: the commented-out content was deleted and only the effective content kept. The meanings of the important items are noted in the comments.

Looking at this configuration you may be a bit confused: what is all this? Next, let's learn the basic syntax of NGINX configuration files.

Basic syntax of NGINX configuration file

A configuration item in the file is called a directive. Directives are divided into simple directives and block directives. A simple directive consists of a name and parameters, separated by spaces and terminated by a semicolon, for example

worker_processes auto;

Here worker_processes is the directive; it sets the number of worker processes. The parameter can be a number or auto (NGINX derives a value from the number of CPU cores, typically one worker per core).

A block directive has a similar syntax, but instead of ending with a semicolon it wraps further directives in curly braces, for example

http {
  server {
    ...
  }
}

Context

If a block directive contains other directives, the block directive is called a context. Common contexts include:

events
http
server
location

There is one implicit context, main. It is never written explicitly: everything at the outermost level of the file is in the main scope, and main serves as the reference point for the other contexts. For example, events and http must be inside main; server must be inside http; location must be inside server. These nesting rules are fixed and cannot be rearranged; violate them and NGINX will not run, and you will find the error message in the log.
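As a sketch, the fixed nesting described above looks like this in a configuration file (all concrete directives omitted):

```nginx
# main context: the implicit outermost scope
events {                # must sit directly inside main
}

http {                  # must sit directly inside main
    server {            # must sit inside http
        location / {    # must sit inside server
        }
    }
}
```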

After all that theory you must be tired, so let's get hands-on!

Use NGINX as a proxy for a back-end program

Start with a simple web service, such as the Flask application below.

from flask import Flask
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)


class HelloWorld(Resource):
    def get(self):
        app.logger.info("receive a request, and respond with '穿甲兵技术社区'")
        return {'message': '穿甲兵技术社区', "address": "https://chuanjiabing.com"}


api.add_resource(HelloWorld, '/')

if __name__ == '__main__':
    app.run(debug=True, host="127.0.0.1", port=6789)

Write the content to a file on the server, such as /home/ubuntu/ke.py .

Remember to install the required Python library: pip3 install flask-restful

Ubuntu 20.04 ships with a recent Python 3 by default, so there is no need to worry about the environment. Run the web back-end service: python3 /home/ubuntu/ke.py

With the back end running, let's configure NGINX.

From the main configuration file we saw earlier that the auxiliary configuration directory is /etc/nginx/conf.d, so add a new configuration file there:

> sudo vim /etc/nginx/conf.d/ke.conf

server {
    listen 8000;
    server_name localhost;

    location / {
        proxy_pass http://localhost:6789;
    }
}

Check that the configuration is valid:

> sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Reload configuration

> sudo nginx -s reload

Visit http://ip:port in a browser, e.g. my server's http://101.42.137.185:8000/

You should see the back end's output.

NGINX log files

By default the logs are split into an access log and an error log; their paths can be set in the main configuration file:

/var/log/nginx/access.log
/var/log/nginx/error.log

View the access log

> cat /var/log/nginx/access.log
117.183.211.177 - - [19/Nov/2021:20:18:46 +0800] "GET / HTTP/1.1" 200 107 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36"
117.183.211.177 - - [19/Nov/2021:20:18:48 +0800] "GET /favicon.ico HTTP/1.1" 404 209 "http://101.42.137.185:8000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36"

Official document-log format http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log

Default log format

log_format compression '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $bytes_sent '
                       '"$http_referer" "$http_user_agent" "$gzip_ratio"';
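As a side note (this parser is my own sketch, not part of NGINX), an access-log line in the default layout shown earlier can be pulled apart with a regular expression; the field names below are arbitrary choices:

```python
import re

# Regex matching the default NGINX access-log layout shown above:
# addr - user [time] "request" status bytes "referer" "agent"
LOG_RE = re.compile(
    r'(?P<addr>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('117.183.211.177 - - [19/Nov/2021:20:18:46 +0800] '
        '"GET / HTTP/1.1" 200 107 "-" "Mozilla/5.0"')

m = LOG_RE.match(line)
print(m.group("addr"), m.group("status"), m.group("request"))
# → 117.183.211.177 200 GET / HTTP/1.1
```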

You can configure it yourself in the main configuration file, and refer to the official documentation for specific configuration items.

Use NGINX as a proxy for a front-end program

A simple HTML document

> vim /home/ubuntu/index.html
<html><head><meta charset="utf-8"/><title>穿甲兵技术社区</title></head><body><div><p>穿甲兵技术社区</p><a href="https://chuanjiabing.com">https://chuanjiabing.com</a></div></body></html>

Whether a front-end project is large or small, it is generally compiled into HTML documents, and then an application like NGINX serves them.

Note: some Vue/React services are deployed with server-side rendering, but most are still compiled to HTML. Configuration-wise there is no difference between this simple example and those engineered front-end projects, so don't worry; learning NGINX is what matters.

> sudo vim /etc/nginx/conf.d/page.conf

server {
    listen 1235;
    server_name localhost;
    charset utf-8;
    
    location / {
        root /home/ubuntu/;
        index index.html index.htm;
    }
}

Load balancing based on NGINX

Imagine a scenario: the back-end service on your server formats timestamps, many crawlers need to call it, and you must keep the service stable and available.

Scenario extension: suppose you reverse engineered a JS algorithm, and every crawler must call it to generate a sign value before each request. If you embed the JS in Python/Golang code for local execution, then every crawler program must be modified and redeployed whenever the algorithm changes; but if you wrap it in a web service, you only need to modify and restart that one service.
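Purely as an illustration (the algorithm, parameter names, and secret below are hypothetical, not from any real site), here is the shape such a sign generator might take before being wrapped in a web endpoint like the Flask app earlier in this article:

```python
import hashlib

def make_sign(params: dict, secret: str) -> str:
    """Hypothetical sign algorithm: sort the parameters, join them,
    append a secret, and hash. A real reverse-engineered algorithm
    would differ, but the deployment idea is the same."""
    payload = "&".join(f"{k}={params[k]}" for k in sorted(params))
    return hashlib.md5((payload + secret).encode("utf-8")).hexdigest()

# Served behind a web endpoint, changing this function only requires
# redeploying the sign service, not every crawler that calls it.
sign = make_sign({"q": "nginx", "page": 1}, secret="demo-secret")
print(len(sign))  # → 32
```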

With only one back-end service there are two obvious shortcomings:

1. Insufficient performance: too many requests make the program freeze and respond slowly, hurting overall efficiency;

2. Instability: once the process exits or the server crashes, the service becomes inaccessible.

Benefits of using load balancing:

1. Start multiple back-end services and configure load balancing so requests are distributed among them as needed (for example, in turn); you can then take on more traffic;

2. With one NGINX fronting multiple back-end services, when one or several of them die, the others keep working.

Load balancing only requires the proxy_pass directive together with a matching upstream context. A simple load balancing configuration example:

⚠️ Before experimenting, start multiple back-end programs. You can copy the Flask code above to another file (such as /home/ubuntu/main.py), but remember to change the port number; following the tutorial, change it to 6799. To see the load balancing effect in the browser, include 6789/6799 in the response content so you can tell the back ends apart.

# change /etc/nginx/conf.d/ke.conf to
upstream backend {
    server localhost:6789;
    server localhost:6799;
}

server {
    listen 8000;
    server_name localhost;

    location / {
        proxy_pass http://backend;
    }
}

Reload the configuration after saving

> sudo nginx -s reload

After visiting http://101.42.137.185:8000/ several times, you can see the two back-end services, 6789 and 6799, alternately returning content, which shows the load balancing configuration succeeded.
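NGINX's default upstream strategy is round-robin, which explains the alternating responses (other strategies are covered in the advanced chapter). A toy sketch of the idea in Python, not NGINX's actual implementation:

```python
from itertools import cycle

# The two upstream servers from ke.conf
backends = ["localhost:6789", "localhost:6799"]

# Round-robin: hand out back ends in turn, wrapping around at the end
picker = cycle(backends)

choices = [next(picker) for _ in range(4)]
print(choices)
# → ['localhost:6789', 'localhost:6799', 'localhost:6789', 'localhost:6799']
```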


Domain name resolution and configuration practice

Open your cloud provider's console (Tencent Cloud is used in the examples below, since a Tencent Cloud lightweight server was used when the tutorial was recorded). Other providers' interfaces differ somewhat; adapt accordingly.

Search for domain name resolution in the console search box (Tencent's DNSPod).


Find the domain name to be resolved (this assumes you have already bought the domain and completed the ICP filing; if not, just follow along with my operations), then click Resolve.

image.png

Click Add Record.

Enter the subdomain in the host record field (for example, ke), enter the server's IP address as the record value, leave the other options at their defaults, and save.


After completing the console settings, you still cannot access the applications on the server through the domain name.

Go to the server, edit the NGINX auxiliary configuration file, change the port, and bind the domain name:

> sudo vim /etc/nginx/conf.d/ke.conf
# change listen and server_name in the server context
listen 80;
server_name ke.chuanjiabing.com;

Remember to reload the configuration

> sudo nginx -s reload

Then you can access the service through the domain name http://ke.chuanjiabing.com/


Homework: post 3 screenshots of your NGINX load balancing setup under the community course thread: one of the configuration, and two of the browser showing the load balancing taking effect.

The syllabus for the follow-up advanced and enterprise practice chapters is below. Their learning goal: apply NGINX well at work, and complete enterprise-grade production deployment, monitoring, and alerting.

Advanced

NGINX load balancing strategy theory

Compile and install NGINX

Implement authorization verification based on NGINX

Implement rate limiting based on NGINX

Simple anti-crawler measures based on NGINX

Zero-downtime updates based on NGINX

Enterprise-level Practice

NGINX's HTTPS configuration practice

NGINX plugin installation

NGINX data monitoring combat

NGINX production environment high availability deployment practice


今日长剑在握