
This is an interview question brought back by a reader

How does Nginx achieve concurrency? Why doesn't Nginx use multithreading? What are the common optimization methods of Nginx? What are the possible causes of the 502 error?

Analysis of the interviewer's mindset

It mainly comes down to whether the candidate is familiar with the basic principles of Nginx. Most people know a little about Nginx, but very few really understand how it works. Only if you understand the principles can you optimize it; otherwise you can only copy configurations by rote, and when a problem appears you have no idea where to start.

Interview question: How does Nginx achieve high concurrency? What are the common optimization methods?

Those with only a superficial understanding can generally use Nginx as a web server and host a site; junior ops engineers can set up HTTPS and configure a reverse proxy; intermediate ops engineers can define an upstream and write regex-based matching rules; veterans do performance tuning and write ACLs, and some can even modify the source code (the editor admits he is not able to modify the source code).

Analysis of Interview Questions

How does Nginx achieve high concurrency?

Asynchronous, non-blocking I/O, epoll, and a great deal of low-level code optimization.

If a server dedicates one process to each request, then the number of processes equals the level of concurrency, and under normal circumstances many of those processes are simply sitting around waiting.

Nginx, by contrast, uses one master process and multiple worker processes.

  • The master process is mainly responsible for managing the workers: it reads the configuration, binds the listening sockets, and spawns the worker processes; incoming requests are accepted and handled by the workers themselves, not by the master.
  • At the same time, the master process monitors the status of the workers and respawns any that die, to ensure high reliability.
  • The number of worker processes is generally set to match the number of CPU cores. Each Nginx worker can handle many requests at the same time; the limit is essentially memory.
  • Nginx's asynchronous, non-blocking way of working exploits the waiting time: whenever a request has to wait on something, the worker switches to other work instead of sitting idle, so a handful of processes can serve a very large number of concurrent connections.

When a request comes in, a worker process handles it, but not from start to finish in one go. How far does it get? Up to the point where blocking could occur, for example forwarding the request to an upstream (back-end) server and waiting for its reply. Rather than wait, the worker registers an event after sending the request, in effect saying "tell me when the upstream responds and I will carry on", and then sets the request aside.

If another request comes in at that moment, the worker can quickly handle it in the same way. As soon as the upstream server responds, the registered event fires, the worker picks the original request back up, and processing continues.
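
One way to see this one-master/many-workers model on a running server (assuming Nginx is installed and already started) is to list its processes:

$ ps -ef | grep "nginx:"

This should show a single "nginx: master process" entry plus one "nginx: worker process" entry per configured worker.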

Why doesn't Nginx use multithreading?

Apache: creates a process or thread per connection, and each process or thread is allocated its own CPU time and memory (threads are much lighter than processes, so the worker MPM supports higher concurrency than the prefork MPM). High concurrency therefore consumes server resources quickly.

Nginx: a single thread per worker handles requests asynchronously and non-blockingly through epoll (the administrator can configure how many worker processes the Nginx master spawns). Nginx does not allocate CPU and memory for every request, which saves a great deal of resources and also greatly reduces CPU context switching. That is why Nginx supports higher concurrency.
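
On Linux, Nginx picks the most efficient event mechanism (epoll) automatically, but it can also be selected explicitly in the events block; a minimal sketch:

events {
    use epoll;        # explicitly select epoll on Linux (Nginx normally auto-detects this)
    multi_accept on;  # allow a worker to accept several new connections at once
}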

What are the common optimized configurations of Nginx?

1) Adjust worker_processes

Refers to the number of worker processes Nginx spawns. The accepted best practice is to run one worker process per CPU core.

To find out how many CPU cores the system has, run:

$ grep processor /proc/cpuinfo | wc -l
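
For reference, a minimal sketch of the directive in nginx.conf; the value is an example for a 4-core machine, and newer Nginx versions also accept worker_processes auto; to detect the core count automatically:

worker_processes 4;
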
2) Maximize worker_connections

The number of clients each worker process can serve at the same time. Combined with worker_processes, this gives the maximum number of clients Nginx can serve concurrently:

Maximum number of clients = worker_processes * worker_connections

To unlock Nginx's full potential, worker_connections should be set to the maximum number of connections the system allows a single process to handle at one time; 1024 is a common starting point.
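
A minimal sketch tying the two directives together (the numbers are purely illustrative; worker_rlimit_nofile raises the per-process open-file limit so the workers can actually use that many connections):

worker_rlimit_nofile 8192;

events {
    worker_connections 1024;   # with 4 workers: 4 * 1024 = 4096 concurrent clients
}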

3) Enable Gzip compression

Compressing responses reduces the size of the files sent to clients and the HTTP transfer bandwidth, and therefore speeds up page loading.

An example of a recommended gzip configuration is as follows (it goes in the http section):
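
The values below are illustrative rather than canonical; in particular, gzip_types should list the MIME types your site actually serves:

gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_comp_level 5;
gzip_proxied any;
gzip_types text/plain text/css text/xml application/json application/javascript application/xml+rss;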

4) Enable caching for static files

Having clients cache static files reduces bandwidth usage and improves performance. You can add the following block so that browsers cache the static files of your web pages:

location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
}

5) Timeouts

Keepalive connections reduce the CPU and network overhead of repeatedly opening and closing connections. The variables that can be adjusted for the best performance include the following:
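
A minimal sketch with illustrative values (tune them to your own traffic; all of these directives are valid in the http section):

keepalive_timeout 65;
keepalive_requests 100;
client_body_timeout 12;
client_header_timeout 12;
send_timeout 10;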

6) Disable access_logs

Access logging records every Nginx request and therefore consumes CPU and disk I/O, which reduces Nginx performance.

To disable access logging completely:

access_log off;

If you do need access logs, enable access-log buffering:

access_log /var/log/nginx/access.log main buffer=16k;

(here main is the name of the log format and buffer=16k buffers log writes in 16 KB of memory)

What are the possible reasons for the 502 error?

  • 1) The FastCGI process has not been started
  • 2) The number of FastCGI worker processes is insufficient
  • 3) The FastCGI execution time is too long
  • 4) The FastCGI buffer is too small

Like Apache, Nginx limits the size of its front-end buffers; the buffer parameters can be adjusted:

fastcgi_buffer_size 32k;
fastcgi_buffers 8 32k;
  • 5) The proxy buffer is too small

If you use proxying, adjust:

proxy_buffer_size 16k;
proxy_buffers 4 16k;
  • 6) PHP script execution time is too long

Change the 0s in <value name="request_terminate_timeout">0s</value> in php-fpm.conf to an actual time limit.
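
For example, with a purely illustrative value of 30s (the XML-style php-fpm.conf shown here is the older format; newer php-fpm releases use the INI syntax request_terminate_timeout = 30s):

<value name="request_terminate_timeout">30s</value>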

Source: toutiao.com/i6698255904053133827/

