Nginx
1. Basic Concepts
Overview
- Nginx is a high-performance HTTP and reverse proxy server
- It is characterized by a small memory footprint and strong concurrency handling
- It was developed with performance optimization as the primary design goal
Reverse proxy
- Before talking about the reverse proxy, let's take a look at the forward proxy. For example, suppose we want to access www.google.com, which cannot be reached directly from mainland China; we can access it by configuring a proxy server in the browser (as shown below)
- Next, the reverse proxy. The client is unaware of the proxy and needs no configuration; it simply sends the request to the reverse proxy server, which selects a target server, obtains the data, and returns it to the client. From the outside, the reverse proxy server and the target server appear as one server. In other words, a reverse proxy exposes the proxy server while hiding the real servers
- For example, in the figure below, when the client sends a request intended for the target server, the reverse proxy server selects port 8001 of the tomcat server, obtains the data, and then returns the data to the client
Load balancing
- The client sends multiple requests to the server; the server interacts with the database to process the requests and returns the results to the client.
- This architectural pattern suits early systems that are simple and have little concurrency. As the amount of information, the number of visits, and the number of requests keep growing, server performance reaches a bottleneck. How should we solve this?
- The first idea is to upgrade the server's hardware, but with Moore's Law increasingly breaking down, hardware improvements can no longer keep up with growing performance demands. The alternative is to increase the number of servers and distribute requests across them, which is what we call load balancing
- As shown in the figure below, 15 requests are distributed across three servers; in the ideal case each server handles 5 requests
Dynamic and static separation
- To speed up website parsing, dynamic pages and static pages can be served by different servers, which speeds up parsing and reduces the pressure on what was a single server
2. Nginx installation, commands and configuration files
Install Nginx in Linux
Install PCRE
The role of PCRE is to enable Nginx to support the Rewrite function.
cd /usr/src/
wget http://downloads.sourceforge.net/project/pcre/pcre/8.37/pcre-8.37.tar.gz
tar zxvf pcre-8.37.tar.gz
cd pcre-8.37/
./configure
make && make install
pcre-config --version
Install compilation tools and library files
yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel
Install Nginx
cd /usr/src/
wget http://nginx.org/download/nginx-1.12.2.tar.gz
tar zxvf nginx-1.12.2.tar.gz
cd nginx-1.12.2/
./configure
make && make install
/usr/local/nginx/sbin/nginx -v
- After the installation succeeds, a new folder appears at /usr/local/nginx, and the startup script is /usr/local/nginx/sbin/nginx
Test
cd /usr/local/nginx/sbin
./nginx
ps -ef | grep nginx
You can see the following page by accessing the server's IP address directly
Firewall related operations
- View open port numbers:
firewall-cmd --list-all
- Set the open port number:
firewall-cmd --add-service=http --permanent
eg: sudo firewall-cmd --add-port=80/tcp --permanent
- Restart the firewall:
firewall-cmd --reload
- Turn off firewall:
systemctl stop firewalld
- Permanently turn off the firewall:
systemctl disable firewalld
Nginx common commands
Before using the nginx commands, enter the nginx directory: cd /usr/local/nginx/sbin
- View nginx version number:
./nginx -v
- Start nginx:
./nginx
- Turn off nginx:
./nginx -s stop
- Reload nginx:
./nginx -s reload
Configure systemctl
How to manage nginx after configuring systemctl
- Status:
systemctl status nginx
- Start:
systemctl start nginx
- Close:
systemctl stop nginx
- Restart:
systemctl restart nginx
Configuration method
nginx.service
cd /usr/lib/systemd/system
vim nginx.service
Copy the following content into it <font color="red">(do not copy the comments!!!)</font>
[Unit]
Description=nginx - high performance web server              # description of the service
After=network.target remote-fs.target nss-lookup.target      # start only after these targets are up

[Service]
Type=forking                                                 # nginx runs in the background (daemonizes)
PIDFile=/usr/local/nginx/logs/nginx.pid                      # path of the PID file
ExecStartPre=/usr/local/nginx/sbin/nginx -t -c /usr/local/nginx/conf/nginx.conf    # config test before start
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf          # start command
ExecReload=/usr/local/nginx/sbin/nginx -s reload             # reload command
ExecStop=/usr/local/nginx/sbin/nginx -s stop                 # stop command
PrivateTmp=true                                              # give the service a private temp directory

[Install]
WantedBy=multi-user.target                                   # run in multi-user mode
- Grant permission:
chmod +x /usr/lib/systemd/system/nginx.service
Start the service
# reload systemd's unit files before starting the service
systemctl daemon-reload
systemctl start nginx.service
If starting reports an error, the leftover nginx process can be killed with kill -9 and the service pulled up again, as sketched below
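A minimal recovery sequence, assuming the failure is caused by an nginx started earlier that still holds the port (the pid comes from the ps output):
ps -ef | grep nginx        # find the pid of the running nginx master process
kill -9 <pid>              # replace <pid> with the number shown above
systemctl start nginx.service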
Nginx configuration file
Nginx configuration file location: /usr/local/nginx/conf/nginx.conf
The configuration file consists of three parts
Part 1: Global block (from the start of the configuration file to the events block)
- Mainly sets configuration directives that affect the overall operation of the Nginx server, including the user (group) that runs the server, the number of worker processes allowed, the PID file path, the log path and type, the import of other configuration files, etc.
worker_processes 1;: the key directive for concurrent processing. The larger the worker_processes value, the more concurrent requests Nginx can handle
Part 2: The events block
events {
    worker_connections 1024;
}
- The directives in the events block mainly affect the network connection between the Nginx server and its users. For example, worker_connections 1024; indicates that the maximum number of connections supported per worker process is 1024
The third part: the http block (which in turn contains an http global block and server blocks)
- http global block : configuration directives here include file imports, MIME types, log customization, connection timeouts, the per-connection request limit, etc.
- server block : this block is closely related to virtual hosts. An http block can contain multiple server blocks, and each server block is equivalent to one virtual host. Each server block in turn consists of a global server block and one or more location blocks, as sketched after this list
- global server block : the most common configuration here is the virtual host's listening port and its name or IP
- location block : its main function is to take the request string received by the Nginx server (eg: server_name/uri-string) and match the part other than the virtual host name (which can also be an IP alias) (eg: /uri-string) in order to process specific requests. Address redirection, data caching, and response control, as well as the configuration of many third-party modules, are also done here
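To make the three-part structure concrete, here is a minimal sketch of nginx.conf; the values shown are the stock defaults shipped with nginx, not settings specific to this tutorial:
worker_processes  1;              # Part 1: global block

events {                          # Part 2: events block
    worker_connections  1024;
}

http {                            # Part 3: http block
    include  mime.types;          # http global block

    server {                      # server block: one virtual host
        listen       80;          # global server block: listening port
        server_name  localhost;   # global server block: host name

        location / {              # location block: matches request URIs
            root   html;
            index  index.html index.htm;
        }
    }
}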
3. Nginx configuration example
<0> Preparations
Install tomcat, using the default port 8080
cd /usr/src
wget https://mirrors.cnnic.cn/apache/tomcat/tomcat-8/v8.5.73/bin/apache-tomcat-8.5.73.tar.gz
tar zxvf apache-tomcat-8.5.73.tar.gz
- Install jdk: You can refer to this tutorial Linux installation and configuration of JDK13
Start tomcat
cd /usr/src/apache-tomcat-8.5.73/bin/
./startup.sh
Access tomcat (the server should open port 8080)
<1> Reverse proxy • Example 1
- Achieved effect: open a browser, enter the address www.123.com in the address bar, and be taken to the main page of tomcat on the Linux system
- Process analysis: the Windows browser cannot reach tomcat directly, so the request is reverse-proxied to tomcat through nginx
Specific configuration
- Configure the mapping between the domain name and the IP in the hosts file of the Windows system, as shown below
C:\Windows\System32\drivers\etc\hosts
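One line like the following is enough (the IP here is the tutorial server's address used later in this article; substitute your own):
118.195.179.192 www.123.com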
Configure request forwarding in nginx (reverse proxy)
cd /usr/local/nginx/conf/
vim nginx.conf
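The edited server block is not reproduced in the original; a minimal sketch, assuming nginx listens on the default port 80 and tomcat runs locally on 8080:
server {
    listen       80;
    server_name  118.195.179.192;            # the Linux server's IP (an assumption based on this tutorial's environment)

    location / {
        proxy_pass http://127.0.0.1:8080;    # forward all requests to the local tomcat
    }
}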
Test
# start nginx
cd /usr/local/nginx/sbin
./nginx
<2> Reverse proxy • Example 2
Implementation effect: use the nginx reverse proxy to jump to services on different ports according to the access path; nginx listens on port 9001
- Visiting http://118.195.179.192:9001/edu/ jumps directly to 127.0.0.1:8080
- Visiting http://118.195.179.192:9001/vod/ jumps directly to 127.0.0.1:8081
Ready to work
Prepare two tomcat servers, one on port 8080 and one on port 8081
cd /usr/src
mkdir tomcat8080
mkdir tomcat8081
cp apache-tomcat-8.5.73.tar.gz tomcat8080
cp apache-tomcat-8.5.73.tar.gz tomcat8081
ps -ef | grep tomcat
kill -9 <pid>        # stop the tomcat started in the preparation step
cd tomcat8080
tar zxvf apache-tomcat-8.5.73.tar.gz
cd ../tomcat8081
tar zxvf apache-tomcat-8.5.73.tar.gz
cd ../tomcat8080/apache-tomcat-8.5.73/bin/
./startup.sh
cd /usr/src/tomcat8081/apache-tomcat-8.5.73/conf/
vim server.xml
Change the shutdown port first; I changed it to 8015 here
Then change the HTTP connector port to 8081
Start tomcat8081:
./startup.sh
You can see that we have started two tomcats, one on 8080 and the other on 8081, and we can visit both
Create folders and test pages
- Create an edu folder under /usr/src/tomcat8080/apache-tomcat-8.5.73/webapps, and create an a.html file in it with the content <h1>8080!!!</h1>
- Similarly, create a vod folder under /usr/src/tomcat8081/apache-tomcat-8.5.73/webapps, and create an a.html file in it with the content <h1>8081!!!</h1> (see the commands after this list)
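A sketch of the equivalent shell commands, assuming the paths above:
mkdir -p /usr/src/tomcat8080/apache-tomcat-8.5.73/webapps/edu
echo '<h1>8080!!!</h1>' > /usr/src/tomcat8080/apache-tomcat-8.5.73/webapps/edu/a.html
mkdir -p /usr/src/tomcat8081/apache-tomcat-8.5.73/webapps/vod
echo '<h1>8081!!!</h1>' > /usr/src/tomcat8081/apache-tomcat-8.5.73/webapps/vod/a.html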
Specific configuration
Find the nginx configuration file for reverse proxy configuration
cd /usr/local/nginx/conf
vim nginx.conf
add a server
server {
    listen       9001;
    server_name  118.195.179.192;

    location ~ /edu/ {
        proxy_pass http://127.0.0.1:8080;
    }

    location ~ /vod/ {
        proxy_pass http://127.0.0.1:8081;
    }
}
Open port numbers 9001, 8080, 8081
Restart nginx
cd /usr/local/nginx/sbin
./nginx -s stop
./nginx
- final test
location directive description
This directive is used to match URLs, the syntax is as follows:
location [ = | ~ | ~* | ^~] uri { }
- = : for a URI without regular expressions; the request string must match the URI exactly. On a successful match, the search stops and the request is processed immediately
- ~ : indicates that the URI contains a regular expression, matched case-sensitively
- ~* : indicates that the URI contains a regular expression, matched case-insensitively
- ^~ : for a URI without regular expressions; nginx uses the location whose URI has the longest prefix match with the request string and processes the request there immediately, without going on to try the regular-expression locations (see the sketch after this list)
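An illustrative set of locations (the paths are invented for this example, and the blocks belong inside a server block); each returns a plain-text marker so the matching behavior can be checked with curl:
location = /ping          { return 200 "exact match\n"; }
location ^~ /static/      { return 200 "prefix match, regexes skipped\n"; }
location ~  \.php$        { return 200 "case-sensitive regex\n"; }
location ~* \.(jpg|png)$  { return 200 "case-insensitive regex\n"; }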
<3> Load balancing
- Achieved effect: enter the address http://118.195.179.192/edu/a.html in the browser address bar; the load is balanced, on average, between ports 8080 and 8081
Ready to work
- Prepare two tomcat servers, one 8080 and one 8081 (the previous instance has been prepared)
- In the webapps directory of each of the two tomcats, create a folder named edu, and create an a.html page in the edu folder for testing
Load balancing configuration in the nginx configuration file
cd /usr/local/nginx/conf
vim nginx.conf
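The snippet added to nginx.conf is not reproduced in the original; a minimal sketch of what the http block needs, with the upstream name myserver taken from the strategy examples below:
upstream myserver {
    server 118.195.179.192:8080;
    server 118.195.179.192:8081;
}

server {
    listen       80;
    server_name  118.195.179.192;

    location / {
        proxy_pass http://myserver;      # requests are balanced across the two tomcats
    }
}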
- test
Load balancing strategies
- Round robin (default): each request is assigned to the backend servers one by one in order; if a backend server goes down, it is removed automatically
weight
- weight represents the weight; the default is 1. The higher the weight, the more client requests the server is allocated
- It specifies the distribution ratio: the weight is proportional to the share of requests, which is useful when backend server performance is uneven, eg
upstream myserver {
    server 118.195.179.192:8080 weight=5;
    server 118.195.179.192:8081 weight=10;
}
ip_hash: each request is assigned according to the hash of the client IP, so a given visitor always reaches the same backend server, which can solve the session problem, eg
upstream myserver {
    ip_hash;
    server 118.195.179.192:8080;
    server 118.195.179.192:8081;
}
fair (third party): requests are assigned according to the response time of the backend servers, and those with shorter response times are preferred, eg
upstream myserver {
    server 118.195.179.192:8080;
    server 118.195.179.192:8081;
    fair;
}
<4> Dynamic and static separation
In terms of current implementations, dynamic-static separation roughly comes in two kinds
- One is to simply separate static files into separate domain names and place them on separate servers
- Another method is to mix and publish dynamic and static files together and separate them through nginx
Different request forwarding can be achieved by specifying different suffixes in location blocks. By setting the expires parameter, browser caching can expire on its own, reducing requests and traffic to the server. expires sets an expiry time on a resource: the browser can confirm freshness by itself without going back to the server for verification, so no extra traffic is generated. This approach is ideal for resources that change infrequently (for frequently updated files, expires caching is not recommended). For example, setting 3d means that when the URL is visited within 3 days, a request is sent to compare the file's last-modified time with the server's: if unchanged, the file is not fetched from the server again and status code 304 is returned; if it has been modified, it is re-downloaded from the server with status code 200. A sketch follows
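A hedged example of expires on a static location (the path reuses the /static/image folder created in the next step):
location /image/ {
    root     /static;
    expires  3d;        # the browser may reuse the cached image for 3 days
}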
Ready to work
Prepare static resources in the Linux system for access
cd /
mkdir static
cd static
mkdir www
mkdir image
Create a file a.html in the www folder with the content <h1>test html</h1>, and put any picture in the image folder; I used 01.jpg here
- The effect: the browser accesses a.html under www and 01.jpg under image (served not through tomcat but through nginx's static resource configuration)
Specific configuration
cd /usr/local/nginx/conf
vim nginx.conf
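The configuration itself is not reproduced in the original; a minimal sketch that serves the two folders created above (autoindex is an optional extra that lists directory contents in the browser):
server {
    listen       80;
    server_name  118.195.179.192;

    location /www/ {
        root   /static;
        index  index.html index.htm;
    }

    location /image/ {
        root      /static;
        autoindex on;       # show a listing of the image directory
    }
}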
Start/restart nginx
cd /usr/local/nginx/sbin
./nginx        # use ./nginx -s reload if nginx is already running
Test
- Browser input address
http://118.195.179.192/image/01.jpg
- Browser input address
http://118.195.179.192/www/a.html
<5> High availability cluster
What is high availability?
- There is a main server and a backup server. Normally, requests go through the main server. When nginx on the main server goes down, the system automatically switches to the backup server, which takes over the main server's role and handles requests. This is high availability
Ready to work
- Requires two Nginx servers
- Install nginx on two servers (there are tutorials earlier)
- Install keepalived on both servers
yum install keepalived -y
The configuration directory is /etc/keepalived
Check whether it is installed: rpm -q -a keepalived
Complete the high-availability configuration (active-standby configuration)
cd /etc/keepalived vim keepalived.conf
main server
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_script chk_http_port {
    script "/usr/local/src/nginx_check.sh"
    interval 2                  # interval between detection-script runs
    weight 2
}

vrrp_instance VI_1 {
    state MASTER                # change MASTER to BACKUP on the backup server
    interface eth0              # network interface
    virtual_router_id 51        # must be the same on master and backup
    priority 100                # master and backup use different priorities: higher on the master, lower on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16          # VRRP virtual address
    }
}
backup server
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_script chk_http_port {
    script "/usr/local/src/nginx_check.sh"
    interval 2                  # interval between detection-script runs
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP                # BACKUP on the backup server
    interface eth0              # network interface
    virtual_router_id 51        # must be the same on master and backup
    priority 90                 # lower than the master's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16          # VRRP virtual address
    }
}
Write the file nginx_check.sh under /usr/local/src on both the primary and the backup server
#!/bin/bash
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ];then
    /usr/local/nginx/sbin/nginx        # location of the nginx startup script
    sleep 2
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
        killall keepalived             # nginx could not be restarted, so stop keepalived and let the backup take over
    fi
fi
Start nginx and keepalived for both servers
cd /usr/local/nginx/sbin
./nginx
systemctl start keepalived
- Test: the two servers should be on the same LAN; here I only demonstrate the process
Detailed explanation of keepalived configuration file
- global_defs : global configuration (mainly router_id, the server name; add the host name to /etc/hosts)
- vrrp_script : detection-script configuration
  - script "xxx.sh"
  - interval 2 : interval between detection-script runs (2 s)
  - weight -20 : weight (when the condition in the script is true, the priority of the current host is reduced by 20)
- vrrp_instance : virtual IP configuration
  - state BACKUP : whether the server is the master (MASTER) or the backup (BACKUP)
  - interface eth0 : the bound network interface (check with ifconfig)
  - virtual_router_id 51 : the id of the master and backup machines; it must be identical on both and acts as a unique identifier
  - priority 90 : priority; the larger the value, the higher the priority (the master is usually set to 100 and the backup to a value below 100, such as 90)
  - advert_int 1 : how often a heartbeat is sent to check whether the peer is still alive; by default every 1 s
  - authentication { auth_type PASS auth_pass 1111 } : authentication method (password: 1111)
  - virtual_ipaddress : the virtual IPs to bind (multiple can be bound)
4. Nginx principle
- After nginx starts, there are two kinds of processes: a master and workers
- The master is like a leader: it does no concrete work itself but assigns tasks to the workers, and the workers do the actual work
How do workers work?
- When a client sends a request to nginx, it first goes to the master. After the master receives the request, the workers under it obtain the task not by even distribution or round robin but by a scramble mechanism: each worker competes for the connection, and the winner handles it
Client sends request -> master -> worker (scramble mechanism) -> specific operation
What are the benefits of one master and multiple workers?
- You can use nginx -s reload for hot deployment: workers are replaced one by one, so nginx keeps serving requests while the configuration is reloaded
- Each worker is an independent process, so no locking is needed. If one worker exits, the other workers keep working normally, which reduces risk
How many workers should be set up?
- Similar to redis, nginx uses an IO multiplexing mechanism. Each worker is an independent process with a single main thread that handles requests asynchronously and without blocking, so each worker can extract the full performance of one CPU. Therefore, setting the number of workers equal to the number of the server's CPUs is the most suitable: setting fewer wastes CPU, while setting more incurs the cost of the CPU's frequent context switches
Number of connections worker_connections (the maximum number of connections each worker process can establish)
- How many worker connections does one request occupy? Answer: 2 when the client accesses a static resource (the worker receives the request and returns the response itself) or 4 when the content is dynamic (the worker cannot run Java itself, so it forwards the request to tomcat and tomcat returns the result, which adds a connection pair between the worker and tomcat)
nginx has one master and 4 workers, and each worker supports a maximum of 1024 connections. What is the maximum concurrency supported?
Maximum number of connections supported by the workers: 4 × 1024
Maximum concurrency supported: 4 × 1024 / 2 = 2048 (static) or 4 × 1024 / 4 = 1024 (reverse proxy)
The maximum concurrent number of ordinary static access is: worker_connections * worker_processes / 2
As a reverse proxy, the maximum concurrent number is: worker_connections * worker_processes / 4
[Note] worker_connections: the maximum number of connections supported by each worker (ie 1024 mentioned above)
worker_processes: the number of workers
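A sketch of the nginx.conf settings matching the worked example above (4 workers × 1024 connections; the numbers are the example's, not a recommendation):
worker_processes  4;                # eg one worker per CPU core

events {
    worker_connections  1024;       # per-worker connection limit
}

# static max concurrency:        4 * 1024 / 2 = 2048
# reverse-proxy max concurrency: 4 * 1024 / 4 = 1024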