

Nginx

Nginx is a high-performance HTTP server and reverse proxy, and also provides IMAP/POP3/SMTP proxy services

The main functions are

  • reverse proxy
  • clustering and load balancing through configuration files
  • serving static resources (virtual hosting)

What is a reverse proxy

forward proxy

Forward proxy → proxies the client : when a client cannot send a request directly to the target server, it forwards the request through a proxy server that can reach the target server, and the proxy returns the content obtained from the target server to the client. This is a forward proxy.

A typical use of a forward proxy is to give LAN clients behind a firewall a way to access the Internet. For example, when we work from home and access the intranet through a VPN, the VPN acts as a forward proxy.

Forward proxies can use caching features to reduce network usage.

reverse proxy

Reverse proxy → proxies the server : the client does not know the actual address of the server; it accesses the proxy server, which forwards the request to the corresponding backend server.

The role of reverse proxy

  1. Protect and hide origin servers

    A reverse proxy prevents the client from directly accessing the origin server

  2. load balancing

    The reverse proxy server can distribute requests from many clients to different servers through a load-balancing algorithm, reducing the pressure on any single server. There can also be multiple reverse proxy servers, forming a proxy server cluster.

  3. cache

    The reverse proxy server can have the function of caching like the forward proxy server. It can cache the static resources and other data returned by the original resource server to improve the request efficiency, which is also the core of CDN technology.

  4. routing

    Requests can be distributed to different servers based on the domain name, path, and other routing information. This is similar to load balancing, but the main purpose of load balancing is to even out the pressure on each server, while routing distributes requests with different needs to different servers.
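
The routing idea above can be sketched with the following nginx configuration (the domain and backend addresses are hypothetical): requests are forwarded to different servers depending on the request path.

```nginx
server {
    listen      80;
    server_name example.com;   # hypothetical domain

    # API requests go to one backend...
    location /api/ {
        proxy_pass http://10.0.0.2:8080;
    }

    # ...while all other requests go to another
    location / {
        proxy_pass http://10.0.0.3:8080;
    }
}
```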

Nginx process model

Nginx runs as a master process and one or more worker processes. The master process mainly manages the workers; there can be multiple worker processes, but the default is 1.

 [root@VM-24-13-centos nginx-1.22.0]# ps aux|grep nginx
root      2868  0.0  0.0  22292  1456 ?        Ss   Jul25   0:00 nginx: master process ./nginx
nobody   18991  0.0  0.0  24376  1768 ?        S    20:24   0:00 nginx: worker process

The number of worker processes can be specified via the worker_processes directive

 #user  nobody;
worker_processes  2;

[root@VM-24-13-centos nginx-1.22.0]# ps aux|grep nginx
root      2868  0.0  0.0  22292  1456 ?        Ss   Jul25   0:00 nginx: master process ./nginx
nobody   23812  0.0  0.0  24376  1492 ?        S    20:49   0:00 nginx: worker process
nobody   23813  0.0  0.0  24376  1492 ?        S    20:49   0:00 nginx: worker process

The master process mainly sends signals to the worker processes, telling them to stop, restart, and so on.


Nginx is multi-process and single-threaded, which both improves concurrency and isolates the processes from each other: even if one worker process crashes, the others are unaffected.

Worker preemption mechanism

The listening socket created by the master process is shared by the worker processes forked from it. To avoid multiple workers competing for it at the same time, nginx adopts a strategy: among the workers, only one can acquire the socket at any given moment, so there is no contention for the shared resource.

The worker that successfully acquires the lock gets to accept the connection; the workers that fail to acquire the lock must wait for it to be released, after which all workers compete for the lock again.
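
The strategy described above is nginx's accept mutex, which can be controlled explicitly in the events block. The values below are illustrative; note that accept_mutex has defaulted to off since nginx 1.11.3:

```nginx
events {
    # only the worker holding the mutex accepts new connections
    accept_mutex       on;
    # how long a worker waits before trying to grab the lock again
    accept_mutex_delay 500ms;
}
```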

Nginx event handling

The master process manages the worker processes, and each worker handles connections using Linux's epoll model for I/O multiplexing

 events {
    # epoll is the default even if not specified
    use  epoll;
    # maximum number of connections per worker process
    worker_connections  1024;
}

configuration file

Configuration structure

Main configuration

📌Global configuration
  • user the execution user of the worker process

     [root@VM-24-13-centos ~]# ps aux|grep nginx
    root      2868  0.0  0.0  22292  1456 ?        Ss   Jul25   0:00 nginx: master process ./nginx
    nobody   23812  0.0  0.0  24376  1492 ?        S    20:49   0:00 nginx: worker process
    nobody   23813  0.0  0.0  24376  1676 ?        S    20:49   0:00 nginx: worker process
  • worker_processes number of worker processes
  • error_log Error log and log level

    Log levels: debug → info → notice → warn → error → crit

    The default log level is error

     error_log  logs/error.log;
    error_log  logs/error.log  notice;
    error_log  logs/error.log  info;
  • pid file storing the process ID of the master process

     #configured to this path by default - opening the corresponding nginx.pid file shows the master pid, here 2868
    pid        logs/nginx.pid;
http network module
  • include Import external configuration file → Generally used to simplify configuration
  • default_type default MIME type
  • log_format Log format
  • access_log request log
  • sendfile on enables efficient file transfer
  • tcp_nopush on packets are accumulated to a certain size before being sent; requires sendfile to be enabled
  • keepalive_timeout The survival time of http connection, the unit is second, the default value is 65 seconds
  • gzip on whether responses are compressed in transit; compression improves transfer efficiency but puts some pressure on the CPU
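
Putting the directives above together, a minimal http block might look like the following sketch (values mirror the stock nginx.conf):

```nginx
http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile           on;
    tcp_nopush         on;    # only takes effect with sendfile on
    keepalive_timeout  65;    # seconds
    gzip               on;
}
```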
Virtual host configuration

Virtual hosts can be configured into multiple blocks

  • listen represents the listening port
  • server_name represents the domain name
  • location routing

    • root is the file path
    • index represents the default access file
 server {
    listen       80;
    server_name  eacape.top;

    location / {
        root   html;
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}
server {
    listen  80;
    server_name error.eacape.top;

    location / {
        root    html;
        index 50x.html;
    }
}

As configured above, we set up multiple virtual hosts listening on the same port and route by matching the domain name: accessing the service via eacape.top serves index.html as the default page, while accessing it via error.eacape.top serves 50x.html. Then configure the domain-name-to-IP mappings in the hosts file on our own machine

 82.156.2.77 error.eacape.top
82.156.2.77 eacape.top

Then restart nginx; the results of accessing the two domain names are as follows

Common commands

 -v  show the nginx version.
-V  show detailed nginx information, including compile-time parameters.
-t  test the configuration file for syntax errors.
-T  test the configuration file for syntax errors, and also dump the configuration (useful for backing it up via redirection).
-q  quiet mode: prints nothing if the configuration has no errors, prints the errors otherwise; used together with -t.
-s  send a signal to the master process:
  stop    stop nginx immediately, whether or not requests have finished
  quit    exit gracefully, finishing the current requests before exiting
  reopen  reopen the log files (rename/back up the old log files beforehand)
  reload  reload the configuration file
-p  set the nginx prefix (home) path; the default is the path set at compile time
-c  set the nginx configuration file; the default is the one under the prefix directory
-g  set global directives for nginx, overriding those in the configuration file.

log splitting

timed task

Linux crontab is a command used to execute programs periodically.

When the operating system is installed, this task scheduling command is started by default.

The crond daemon checks every minute whether there is work to be done, and executes it automatically if so.

Note: The newly created cron task will not be executed immediately. It will take at least 2 minutes. Of course, you can restart cron to execute it immediately.

The Linux task scheduling work is mainly divided into the following two categories:

  • 1. Work performed by the system: Work performed by the system periodically, such as backing up system data and clearing caches
  • 2. Work performed by individuals: work that a user does on a regular basis, such as checking the mail server for new letters every 10 minutes, these tasks can be set by each user

Syntax

 crontab [ -u user ] { -l | -r | -e }

Description:

Crontab lets users execute programs at fixed times or at regular intervals; in other words, it works like the user's schedule.

-u user sets the schedule for the specified user; this requires sufficient privileges (for example, root) to manage other users' schedules. Without -u user, it sets your own schedule.

Parameter description :

  • -e : open a text editor to edit the schedule. The default editor is vi; to use another editor, set the VISUAL environment variable (for example, setenv VISUAL joe)
  • -r : delete the current schedule
  • -l : list the current schedule
f1 f2 f3 f4 f5 program
  • Where f1 is the minute, f2 is the hour, f3 is the day of the month, f4 is the month, and f5 is the day of the week. program represents the program to be executed.
  • When f1 is *, it means that the program should be executed every minute, and when f2 is *, it means that the program should be executed every hour, and so on.
  • When f1 is a-b, the program runs from minute a to minute b; when f2 is a-b, from hour a to hour b, and so on.
  • When f1 is */n, it means that it will be executed once every n minutes, and if f2 is */n, it means that it will be executed every n hours, and so on.
  • When f1 is a,b,c,..., the program runs at minutes a, b, c, ...; when f2 is a,b,c,..., at hours a, b, c, ..., and so on
 *    *    *    *    *
-    -    -    -    -
|    |    |    |    |
|    |    |    |    +----- day of week (0 - 6) (Sunday = 0)
|    |    |    +---------- month (1 - 12) 
|    |    +--------------- day of month (1 - 31)
|    +-------------------- hour (0 - 23)
+------------------------- minute (0 - 59)

Users can also store all settings in a file first, and use crontab file to set the execution time.

example

Execute /bin/ls every minute

 * * * * * /bin/ls

During December, run /usr/bin/backup at minute 0 of every 3rd hour from 6 am to 12 noon

 0 6-12/3 * 12 * /usr/bin/backup

Send a letter to alex@domain.name at 5:00pm every day from Monday to Friday:

 0 17 * * 1-5 mail -s "hi" alex@domain.name < /tmp/maildata

Every day, at 20 minutes past every even hour (00:20, 02:20, 04:20, ...), do echo "haha":

 20 0-23/2 * * * echo "haha"
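
The field rules above can be illustrated with a small Python sketch (a simplified parser for one field, not a full cron implementation) that expands a crontab field into the set of matching values:

```python
def expand_field(field, lo, hi):
    """Expand one crontab field (e.g. minutes: lo=0, hi=59) into the
    sorted list of matching values. Supports *, a-b, */n, a-b/n and a,b,c."""
    values = set()
    for part in field.split(','):
        if '/' in part:                      # step syntax: rng/n
            rng, step = part.split('/')
            step = int(step)
        else:
            rng, step = part, 1
        if rng == '*':                       # whole range
            start, end = lo, hi
        elif '-' in rng:                     # explicit range a-b
            start, end = map(int, rng.split('-'))
        else:                                # single value
            start = end = int(rng)
        values.update(range(start, end + 1, step))
    return sorted(values)

# the "0 6-12/3 * 12 *" example: hours field 6-12/3 matches 6, 9 and 12
print(expand_field('6-12/3', 0, 23))  # [6, 9, 12]
```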

split log

Below is a bash script that splits the log files by day

 #!/bin/bash
# log directory
LOG_PATH="/usr/local/nginx-1.22.0/logs"
# date format
RECORD_TIME=$(date -d "today" '+%Y-%m-%d')
# pid file
PID="/usr/local/nginx-1.22.0/logs/nginx.pid"
mv $LOG_PATH/access.log $LOG_PATH/access-$RECORD_TIME.log
mv $LOG_PATH/error.log $LOG_PATH/error-$RECORD_TIME.log
# USR1 tells the nginx master process to reopen its log files
kill -USR1 `cat $PID`

Then you can use the Linux scheduled-task tool crontab to rotate the log files at a fixed time every day

Use crontab -e to edit the scheduled tasks, then add the following line, which rotates the log files at 23:59 every day

 59 23 * * * /usr/local/nginx-1.22.0/sbin/cut_nginx_log.sh

Then restart the scheduled task and log segmentation will take effect

 systemctl restart crond
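
To see what the script's rename step does without touching a real nginx install, here is a Python simulation in a temporary directory (the paths and file names are illustrative; the real script must also send USR1 so that nginx reopens its log files):

```python
import datetime
import os
import tempfile

# simulate the daily rename in a scratch directory
log_path = tempfile.mkdtemp()
record_time = datetime.date.today().strftime('%Y-%m-%d')

# stand-in for the live access log
open(os.path.join(log_path, 'access.log'), 'w').close()

# the mv step: access.log -> access-YYYY-MM-DD.log
os.rename(os.path.join(log_path, 'access.log'),
          os.path.join(log_path, 'access-%s.log' % record_time))

print(os.listdir(log_path))
```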

Configure a static file

  1. Add an include file to the main configuration file and configure the relevant server in the subfile
  2. Configure the simplest route, routing the default home page on port 90 to /app/static/html/eacape.html

     server {
            listen          90;
            server_name     localhost;
    
            location / {
                    root    /app/static/html;
                    index   eacape.html;
            }
    }
  3. Use the alias directive to map the /pic route to the image directory, i.e. make /pic an alias for /app/static/img

     location /pic {
            alias   /app/static/img;
    }
  4. Effect

Compress with GZIP

  • Turn on gzip compression. Purpose: improve transfer efficiency and save bandwidth, at the cost of extra CPU/IO pressure on the server

     gzip on;
  • Set the minimum length for compression; responses smaller than this (here 1 byte) are not compressed

     gzip_min_length 1;
  • Define the compression level (the higher the level, the smaller the result and the more CPU used)

     gzip_comp_level 3;
  • MIME types to compress, here image types

     gzip_types image/jpg image/png image/jpeg;
  • Set the HTTP protocol version for gzip compression

     gzip_http_version 1.1;
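
The trade-off behind gzip_comp_level can be seen with Python's zlib (zlib's levels are analogous to, not identical to, nginx's gzip levels): higher levels shrink the payload more but cost more CPU time.

```python
import zlib

# a repetitive payload, similar to typical HTML
data = b'<html>' + b'hello nginx ' * 1000 + b'</html>'

for level in (1, 3, 9):
    compressed = zlib.compress(data, level)
    print('level %d: %d -> %d bytes' % (level, len(data), len(compressed)))
```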

Location matching rules

  • ~ performs a regular-expression match, case sensitive
  • ~* performs a regular-expression match, case insensitive
  • ^~ performs prefix matching; if this option matches, no other options are checked. Generally used to match directories
  • = performs an exact match on a literal string
  • @ defines a named location for internal redirects, e.g. with error_page or try_files
 location  = / {
  # matches only "/"
  [ configuration A ] 
}
location  / {
  # matches any request, since every request starts with "/"
  # but longer prefix matches and regex matches take priority
  [ configuration B ] 
}
location ^~ /images/ {
  # matches any request starting with /images/ and stops searching other locations
  [ configuration C ] 
}
location ~* \.(gif|jpg|jpeg)$ {
  # matches requests ending in gif, jpg, or jpeg,
  # but all requests under /images/ are still handled by [configuration C]
  [ configuration D ] 
}

error_page 404 = @fetch;
 
location @fetch {
    proxy_pass http://fetch;
}

cross domain

Same Origin Policy

The Same Origin Policy is an important security policy that restricts how a document from one origin, or a script it loads, can interact with resources from another origin. It reduces the damage malicious documents can do and limits possible attack vectors. Two URLs have the same origin if their protocol, domain name, and port number are all the same; only then can the two parties share resources.

The purpose of the browser's same-origin policy is to protect the user's information security. In order to prevent malicious websites from stealing the user's data on the browser, if it is not a same-origin site, the following operations cannot be performed:

  • Cookies, LocalStorage and IndexDB cannot be read or written
  • DOM and Js objects cannot be obtained
  • AJAX request cannot be sent

For example, if our front end is deployed at localhost:80 but the back end listens on localhost:8080, the ports of the two origins differ, so the front end and back end are cross-origin: data requested from the back end by front-end scripts is rejected by the browser due to the same-origin policy.
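
The origin comparison can be sketched in Python: two URLs are same-origin only if scheme, host, and port all match, with default ports filled in when none is written.

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {'http': 80, 'https': 443}

def origin(url):
    """Return the (scheme, host, port) triple that defines a URL's origin."""
    parts = urlsplit(url)
    return (parts.scheme, parts.hostname,
            parts.port or DEFAULT_PORTS.get(parts.scheme))

def same_origin(a, b):
    """Two URLs are same-origin iff scheme, host and port all match."""
    return origin(a) == origin(b)

# the example from the text: ports 80 and 8080 make the origins differ
print(same_origin('http://localhost:80/a', 'http://localhost:8080/b'))  # False
print(same_origin('http://localhost/a', 'http://localhost:80/b'))       # True
```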

Cross-origin resource sharing

CORS stands for Cross-Origin Resource Sharing. It allows browsers to issue XMLHttpRequest requests to cross-origin servers, overcoming the restriction that AJAX can only be used within the same origin.

Once the browser detects that an AJAX request is cross-origin, it automatically adds some header fields to the request.

It adds the Origin field to indicate which origin (protocol + domain name + port) the request comes from. Based on this value, the server decides whether to allow the request.

Taking SpringBoot for example, we often add the following code to the interceptor on the server side, indicating that we agree to share the returned resources with these origins.

 response.setHeader("Access-Control-Allow-Origin","http://120.28.12.33:80");
response.setHeader("Access-Control-Allow-Credentials", "true");
response.setHeader("Access-Control-Allow-Headers", "X-Requested-With,Content-Type,Authorization");
response.setHeader("Access-Control-Allow-Methods", "*");
  • Access-Control-Allow-Origin (represents a list of origin addresses allowed by the server ) This field is required. Its value is either the value of the field at the time of request Origin , or a value of * , indicating that requests for any domain name are accepted. In the above configuration, only requests with an origin of http://120.28.12.33:80 are accepted
  • Access-Control-Allow-Credentials (indicates whether the server allows the browser to send cookies ) This field is optional. Its value is a boolean indicating whether cookies may be sent. By default (without this field), cookies are not included in CORS requests . Setting it to true means the server explicitly allows cookies to be included in the request and sent to the server. This value can only be true ; if the server does not want the browser to send cookies, simply omit the field .

     response.setHeader("Access-Control-Allow-Origin", request.getHeader("Origin"));

    If you want to send the cookie to the server, on the one hand, you need the server's consent, and specify the Access-Control-Allow-Credentials field.

     Access-Control-Allow-Credentials: true

    On the other hand, the developer must turn on the withCredentials attribute in the AJAX request.

     var xhr = new XMLHttpRequest();
    xhr.withCredentials = true;

    Otherwise, the browser will not send the cookie even if the server agrees to send it. Alternatively, the server asks for a cookie to be set, and the browser doesn't process it.

    It should be noted that if you want to send a cookie, Access-Control-Allow-Origin cannot be set as an asterisk, and you must specify a clear domain name that is consistent with the requested webpage. At the same time, cookies still follow the same-origin policy, only cookies set with the server domain name will be uploaded, cookies of other domain names will not be uploaded, and the (cross-origin) original webpage code document.cookie cannot be read either. Cookies under the server domain name. So setting this property in springboot is generally as follows

     response.setHeader("Access-Control-Allow-Origin",request.getHeader("Origin"));
  • Access-Control-Expose-Headers This field is optional. In a CORS request, the getResponseHeader() method of the XMLHttpRequest object can only access six basic response headers: Cache-Control , Content-Language , Content-Type , Expires , Last-Modified , Pragma . To read other headers, the server must list them in Access-Control-Expose-Headers . For example, specifying FooBar in that header lets getResponseHeader('FooBar') return the value of the FooBar field.
  • Access-Control-Allow-Methods indicates the methods supported for cross-origin requests, such as GET , POST , OPTIONS , etc.

Summarize

  1. When the browser finds that a request is cross-origin, it automatically adds the Origin field to the request header for the back-end server to verify.
  2. The server returns the response normally; the status code alone cannot tell whether the cross-origin request succeeded. The browser checks whether the response headers returned by the server contain the Access-Control-Allow-* fields.
  3. If you need to use cookies in CORS requests, you need to set Credentials in both the front-end and the back-end, and the front-end will normally set cookies and send cookies, and the back-end will normally set and receive cookies for the front-end.
  4. If you want to send a cookie, Access-Control-Allow-Origin can't be set as an asterisk, you must specify a clear domain name that is consistent with the requested page .
  5. Cookies still follow the same-origin policy , and only cookies set with the domain name of this server will be sent by the browser.

reverse proxy

In addition to using CORS, you can also use Nginx as a reverse proxy. The specific steps are as follows:

  1. Change the domain name port of the AJAX access backend to the domain name port of the front-end server

    For example, the address of the front end is a.domain.com , and the address of the back end is b.domain.com

    The corresponding back-end interface address in the original js is http://b.domain.com/api/addUser

    Now change to http://a.domain.com/api/addUser

  2. In the nginx service that publishes the front end, it is used as a reverse proxy according to route matching

     server {
      listen    80;
      server_name  localhost;
        # default front-end page location
        location / {
            root   /app/static/;
        }
        # reverse proxy according to the matching rule
        location ^~ /api {
            proxy_pass http://b.domain.com;
        }
    }

Compare the two

| Item | CORS | Nginx reverse proxy |
| --- | --- | --- |
| Code configuration (front end) | credentials=true | none |
| Code configuration (back end) | setHeader: ACA-Origin, ACA-Method, ACA-Credentials, etc. | none |
| Server configuration | none | Nginx configuration |
| Migration flexibility | High, no additional configuration required | Low, each environment's configuration may differ |
| Security | Origins controllable and directly traceable | X-Forwarded-For traces multi-level sources |
| New project extension | Black/white list control | Configuration must be updated; the cross-domain model changes |

Configure anti-leech

There is a Referer header in HTTP requests that describes which page the request came from

It is often used for anti-leeching (hotlink protection). For example, before Gitee's free image hosting stopped working, it used the Referer header for hotlink protection, i.e. requests for image resources coming from third-party websites were blocked.

For example, click from Baidu to enter a website

 Referer:https://www.baidu.com/link?url=-aT3sJswSZKZvdnJOqj7o8Egpgn1o5AdYAKrP1hvZuWHkV1bFV_SRdWB3VVDBItyqtFukyMlS8bI6ifhUlP6aa&wd=&eqid=dd059cef000a3ef20000000462d55fcc

This shows the page was reached from a Baidu page. I used to visit Chen Hao's blog: if you entered his site through a Baidu search, you would be reminded that programmers should not use Baidu. I believe this feature was implemented with the Referer header.

Hotlink protection can be implemented in the back end or in nginx. Next we implement it with nginx: only requests whose Referer is www.yuanyong.com may go through the reverse proxy; anything else returns 404. First configure two hosts entries on our own machine

 82.156.2.77 www.xiaoming.com
82.156.2.77 www.yuanyong.com

Then add the following configuration to nginx, which forwards to the proxy only requests coming from www.yuanyong.com

 server {
  listen    8081;
  server_name  localhost;

  location / {
    root   /app/static/jk;
  }

    location ^~ /api {
        # validate the source site
        valid_referers www.yuanyong.com;
        # an illegal referer sets $invalid_referer; if so, return 404
        if ($invalid_referer){
            return 404;
        }
        proxy_pass http://120.48.87.20:9000;
    }
}

valid_referers

Format: valid_referers none|blocked|server_names|string

  • nginx checks the request's Referer header against the values listed after valid_referers;
  • if there is a match, the $invalid_referer variable is set to an empty string; otherwise it is set to 1;
  • matching is case-insensitive;

Introduction of other parameters:

  • none: access is allowed if the Referer header is empty;
  • blocked: the Referer header is present, but its value has been stripped by a firewall or proxy, e.g. values without a protocol prefix such as "http://" or "https://"; such requests are allowed;
  • server_names: specifies a specific domain name or IP;
  • string: a string, which may contain * wildcards or a regular expression; a regular expression must start with ~

The final result is as follows: the request from www.xiaoming.com is intercepted, while the one from www.yuanyong.com is not
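
A fuller sketch combining the parameters above (the location path, wildcard domain, and regex are illustrative):

```nginx
location /img/ {
    # allow empty referers, stripped referers, this server's own names,
    # a wildcard domain, and a regular expression
    valid_referers none blocked server_names
                   *.yuanyong.com ~\.yuanyong\.;

    if ($invalid_referer) {
        return 403;
    }
}
```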

load balancing

load balancing algorithm

Load balancing is implemented with the reverse proxy introduced in the previous chapter: nginx distributes (reverse-proxies) client requests across a group of servers. This group is called an upstream (server pool), and each server in the pool is a unit. By default the pool rotates requests among the units to achieve load balancing

 Directive: upstream
Syntax:    upstream name {...}
Context:   http
Meaning:   defines a group of HTTP servers; they can listen on different ports
           as well as on TCP and UNIX sockets, and different ports, TCP and
           UNIX sockets can be mixed within the same upstream.

Directive: server
Syntax:    server address [parameters];
Context:   upstream
Meaning:   configures a backend server; the parameters can be different IP
           addresses, port numbers, or even domain names.

But round robin is not the only way to distribute requests; the following methods are available

  • Round robin (rr) is the default, static scheduling algorithm. Client requests are allocated in order to the backend node servers one by one; a server that goes down is automatically removed from the pool, so client access is unaffected and new requests are allocated to the live servers.

     upstream  group_name {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082; 
     }
  • Weighted round robin (weight) is a static scheduling algorithm that adds weights on top of the rr algorithm. The weight is proportional to the share of traffic: the larger the weight value, the more requests are forwarded. Weights can be set according to each server's configuration and performance, which effectively solves the uneven request allocation caused by mixing old and new servers.

     upstream  group_name {
        server 127.0.0.1:8081 weight=1;
        server 127.0.0.1:8082 weight=2; 
     }
  • IP hash (ip_hash) is a static scheduling algorithm. Each request is allocated according to the hash of the client IP: as long as the hash of the client IP is the same, the request goes to the same server. This solves session sharing for dynamic pages, but it can lead to uneven request distribution (it cannot guarantee balanced load): most companies in China use NAT, so many clients share one external IP and are all assigned to the same backend node.

     upstream  group_name {
        ip_hash;
        server 127.0.0.1:8081;
        server 127.0.0.1:8082; 
     }
  • Least connections (least_conn) is a dynamic scheduling algorithm that allocates requests according to the number of active connections on each backend node: whichever machine has the fewest connections gets the request.

     upstream  group_name {
        least_conn;
        server 127.0.0.1:8081;
        server 127.0.0.1:8082; 
     }
  • Shortest response time (fair) is a dynamic scheduling algorithm that allocates requests according to the response time of the backend node servers, preferring those with shorter response times. It is a smarter algorithm that balances load by page size and loading time. Nginx itself does not support fair; to use it you must download the third-party module upstream_fair.

     upstream  group_name {
        fair;
        server 127.0.0.1:8081;
        server 127.0.0.1:8082; 
     }
  • url_hash is a dynamic scheduling algorithm that allocates requests according to the hash of the accessed URL, directing each URL to the same backend server, which further improves the hit rate of backend cache servers. (Mostly used when the backends are caches.) Nginx itself does not support url_hash; to use it you must install the nginx hash module package.

     upstream  group_name {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082; 
        hash $request_uri;
        hash_method crc32;
     }
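
As a toy illustration of weighted round robin (a naive version, not nginx's smooth weighted algorithm), a server with weight 2 receives twice as many requests as one with weight 1:

```python
import itertools

# (address, weight) pairs mirroring the weighted upstream example above
servers = [('127.0.0.1:8081', 1), ('127.0.0.1:8082', 2)]

# naive weighted cycle: repeat each server according to its weight
cycle = itertools.cycle([addr for addr, weight in servers
                         for _ in range(weight)])

picks = [next(cycle) for _ in range(6)]
print(picks)  # 8082 appears twice as often as 8081
```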

load balancing instance

The following example shows a load-balancing configuration and the role of its parameters

 upstream baidu_cluster {
    # can be a domain name or ip + port
    server x1.baidu.com;
    server x2.baidu.com;
    server x3.baidu.com;
    
    # the parameters below are placed after a server address
    # e.g.: server x3.baidu.com down;
    
    # down      exclude the server from load balancing
    # weight=5; weight - the higher, the more requests (for weighted round robin)
    # backup;   reserved backup server
    
    # the next two parameters are used together
    # max_fails=number: allowed number of failed requests to a proxied server, default 1
    # fail_timeout=time: how long the server is paused after max_fails failures, default 10 seconds

    
    # max_conns sets the maximum number of simultaneous active connections to the
    # proxied server; the default 0 means no limit. Set it according to the
    # backend's concurrency to keep it from being overwhelmed.
    
    
    # configure suitable parameters for each server's performance
    # server 106.xx.xx.xxx;      can be an ip
    # server 106.xx.xx.xxx:8080; can include a port
    # server unix:/tmp/xxx;      unix sockets are supported
}
server {
  listen    80;
  server_name  localhost;

  location / {
        proxy_pass http://baidu_cluster;
  }
}
server {
  listen    80;
  server_name  localhost;

  location / {
        proxy_pass http://baidu_cluster;
  }
}

Long connection optimization

By default, nginx uses short connections when accessing backends defined with upstream, which increases the consumption of network resources. You can configure persistent (long) connections to reduce the overhead of establishing connections and improve performance. An example configuration for persistent connections is as follows:

 keepalive_requests 1024;
keepalive_timeout 60;

upstream baidu_cluster {
    server x1.baidu.com;
    server x2.baidu.com;
    server x3.baidu.com;
    keepalive 100;
}
server {
  listen    80;
  server_name  localhost;

  location / {
        proxy_pass http://baidu_cluster;
  }
}
| Directive | Effect | Context |
| --- | --- | --- |
| keepalive_requests | Maximum number of requests per connection; the connection is closed once the count is exceeded. | http, server, location |
| keepalive_timeout | Timeout for keep-alive connections | http, server, location |
| keepalive | Maximum number of idle connections between each worker process and the backend servers; idle connections beyond this value are closed. | upstream |
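
Note that, per the nginx documentation, the upstream keepalive pool is only used when the proxied request speaks HTTP/1.1 with an empty Connection header, so the proxying location also needs:

```nginx
location / {
    proxy_pass http://baidu_cluster;
    # required for the upstream keepalive pool to take effect
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
```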

nginx cache

Control browser cache

 location / {
  # expires 10s;     expire 10 seconds from now
  # expires @22h30m; expire at 22:30 every day
  # expires -1h;     expire one hour earlier than the current time
  # expires epoch;   disable caching
  # expires off;     use the browser's default caching; nothing added to the header
  
  expires max;  # use the maximum expiry time (expires in 2037)

}

reverse proxy cache

 proxy_cache_path /usr/local/nginx-1.22.0/upstream_cache 
keys_zone=eacape_cache:32m 
max_size=1g 
inactive=1m 
use_temp_path=off;

server {
  listen    8081;
  server_name  localhost;

  location / {
    root   /app/static/jk;
  }

    location ^~ /api {
        proxy_pass http://120.48.87.20:9000;
        proxy_cache     eacape_cache;
        proxy_cache_valid   200 304 8h;
        proxy_cache_valid   any 10m;
        proxy_cache_key $host$uri$is_args$args;
    }
}
  • proxy_cache_path specifies the cache path; configured in the http module

    Parameters

    • levels cache directory hierarchy, at most 3 levels, each level named with 1-2 characters; e.g. levels=1:1:2 gives three levels
    • keys_zone cache zone name and size; keys_zone=eacape_cache:32m creates a 32m zone, and when it fills up, the oldest entries are evicted
    • max_size maximum disk space used by the cache; entries beyond the limit are purged
    • inactive entries not accessed within this time are removed
    • use_temp_path whether to use a temporary path
  • proxy_cache specifies the cache zone (configured in a server or location block); its value is the name set in keys_zone
  • proxy_cache_valid cache duration for different status codes

Configure SSL certificate

Certificate Installation (Tencent Cloud)

  1. Please select the certificate you need to install in the SSL Certificate Management Console and click Download .
  2. In the pop-up "Certificate Download" window, select Nginx for the server type, click Download and unzip the cloud.tencent.com certificate file package to a local directory.

    After unzipping, the certificate files for Nginx are in the cloud.tencent.com_nginx folder:

    • Folder name : cloud.tencent.com_nginx
    • Folder contents :

      • cloud.tencent.com_bundle.crt certificate file
      • cloud.tencent.com_bundle.pem certificate file (this file can be ignored)
      • cloud.tencent.com.key Private key file
      • cloud.tencent.com.csr CSR file is uploaded by you or generated online by the system when applying for a certificate, and provided to the CA. This file can be ignored during installation.
  3. Log in to the Nginx server using "WinSCP" (that is, a tool for copying files between local and remote computers).
  4. Copy the obtained cloud.tencent.com_bundle.crt certificate file and cloud.tencent.com.key private key file from the local directory to the /usr/local/nginx/conf directory of the Nginx server (here is the Nginx default installation directory, Please operate according to the actual situation).
  5. Log in to the Nginx server remotely. For example, log in using the "PuTTY" tool .
  6. Edit the conf/nginx.conf file in the Nginx root directory. The modifications are as follows:

    Note: If you cannot find the following content, you can add it manually.

    This step edits the file by executing vim /usr/local/nginx/conf/nginx.conf. The configuration syntax differs between versions: for Nginx 1.15.0 and above, use listen 443 ssl instead of listen 443 together with ssl on.

     server {
    
      # SSL defaults to port 443
      listen 443 ssl; 
    
      # Domain name the certificate is bound to
      server_name cloud.tencent.com; 
    
      # Relative or absolute path to the certificate file
      ssl_certificate cloud.tencent.com_bundle.crt; 
    
      # Relative or absolute path to the private key file
      ssl_certificate_key cloud.tencent.com.key; 
      ssl_session_timeout 5m;
    
      # Protocols to enable
      ssl_protocols TLSv1.2 TLSv1.3; 
    
      # Cipher suites; the syntax follows the OpenSSL standard
      ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE; 
      ssl_prefer_server_ciphers on;
    
      location / {
        # Site root. This path is for reference only; use your actual directory.
        root html; 
        index  index.html index.htm;
      }
    }
  7. In the Nginx root directory, check the configuration file for errors by executing the following command.

    ./sbin/nginx -t

    • If a problem is reported, fix the configuration as prompted.
    • If not, go to step 8.
  8. In the Nginx root directory, restart Nginx by executing the following command.

    ./sbin/nginx -s reload

  9. After the restart succeeds, you can access the site via https://cloud.tencent.com.
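As an optional tweak beyond the Tencent Cloud guide (an assumption, not part of their steps), the `ssl_session_timeout 5m` in the configuration above is usually paired with a shared TLS session cache so that session reuse works across all worker processes:

```nginx
# Shared across all worker processes; 1 MB holds roughly 4000 sessions
ssl_session_cache   shared:SSL:10m;
ssl_session_timeout 5m;
```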

HTTP automatic jump to HTTPS security configuration (optional)

If you need HTTP requests to be automatically redirected to HTTPS, you can set this up in any of the following ways:

  1. According to actual needs, choose the following configuration methods:

    • Add JS script to the page.
    • Add redirection in backend program.
    • Jump through the web server.
    • Nginx supports the rewrite feature. If PCRE support was not removed at compile time, you can add return 301 https://$host$request_uri; to the HTTP server block to redirect requests on the default port 80 to HTTPS. Modify the configuration as follows:

      Note: Uncommented configuration statements can be used as shown below. The configuration syntax differs between versions: for Nginx 1.15.0 and above, use listen 443 ssl instead of listen 443 together with ssl on.

       server {
      
        # SSL defaults to port 443
        listen 443 ssl;
      
        # Domain name the certificate is bound to
        server_name cloud.tencent.com; 
      
        # Relative or absolute path to the certificate file
        ssl_certificate  cloud.tencent.com_bundle.crt; 
      
        # Relative or absolute path to the private key file
        ssl_certificate_key cloud.tencent.com.key; 
        ssl_session_timeout 5m;
      
        # Cipher suites; the syntax follows the OpenSSL standard
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
      
        # Protocols to enable
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_prefer_server_ciphers on;
      
        location / {
          # Site root. This path is for reference only; use your actual directory.
          root html;
          index index.html index.htm;
        }
      }
      
      server {
      
        listen 80;
      
        # Domain name the certificate is bound to
        server_name cloud.tencent.com; 
      
        # Redirect HTTP requests for this domain to HTTPS
        return 301 https://$host$request_uri; 
      
      }
  2. In the Nginx root directory, check the configuration file for errors by executing the following command.

    ./sbin/nginx -t

    • If a problem is reported, fix the configuration as prompted.
    • If not, go to step 3.
  3. In the Nginx root directory, restart Nginx by executing the following command.

    ./sbin/nginx -s reload

  4. After the restart succeeds, requests to http://cloud.tencent.com are automatically redirected to HTTPS.

References

Tencent Cloud: Nginx server SSL certificate installation and deployment

MOOC practical course


eacape

JAVA engineer