
Preface

Nginx is so famous it hardly needs an introduction; it claims million-level connection counts. 👍👍👍

Gunicorn is a WSGI server for Python and a key part of the Python web ecosystem, roughly what Tomcat is to Java. 💪💪💪

Today we'll take a quantitative look at the actual RPS numbers these two can deliver.

Of course, the point isn't to crown the stronger of the two; pitting Gunicorn against a heavyweight just makes the numbers more representative.

Test platform

Different platforms will obviously produce very different results, so here is the hardware used:

Both test machines are heavyweights, so the hardware itself should not become the bottleneck.


Client:

  • Linux
  • AMD 5700G, 8 cores / 16 threads 👊👊👊👊👊👊
  • 32 GB RAM


Server:

  • Linux
  • AMD 4800U, 8 cores / 16 threads 👏👏👏👏👏
  • 4 GB RAM

Load-testing tool

We use a load-testing tool called wrk. It has 31k GitHub stars, which says plenty about its authority and professionalism.


Reference:

Installing the wrk load-testing tool on Ubuntu 20.04 and basic usage

Pure Nginx

Server-side setup

We need to install both Nginx and Gunicorn on the server. Let's start with Nginx.

Step 1: install Nginx

sudo apt install nginx

Step 2: replace the default welcome page

location / {
    # set the content type
    default_type text/html;
    # HTTP status code and response body
    return 200 "hello world! ";
}
The barrel principle: a system's ceiling is set by its shortest stave.
Why replace the welcome page? Because the default Nginx welcome page is long and bloated; if we kept it, the bottleneck would be the network rather than the server. Think about it: at roughly 300 KB per HTML page, a gigabit NIC is saturated after about 400 requests per second. Swap in "hello world! ", call it 1 KB at the very most, and the NIC can carry up to 125 × 1000 = 125k requests per second. 125k looks a lot better than 400.
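Spelled out, the back-of-the-envelope math above looks like this (1 Gbit/s ≈ 125 MB/s; the page sizes are rough assumptions and HTTP header overhead is ignored):

```python
# Gigabit NIC throughput ceiling, ignoring protocol overhead
nic_bytes_per_s = 125 * 1000 * 1000  # 1 Gbit/s is about 125 MB/s

default_page = 300 * 1000            # default welcome page, assumed ~300 KB
hello_world = 1 * 1000               # "hello world! ", generously 1 KB

print(nic_bytes_per_s // default_page)  # ~416 requests/s before the NIC saturates
print(nic_bytes_per_s // hello_world)   # 125000 requests/s of headroom
```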

Reference:
[Nginx] Returning HelloWorld directly

Step 3: restart Nginx

sudo service nginx restart

Step 4: check the Nginx status:

Nginx uses a master-worker multi-process model, where the number of worker processes equals the number of logical CPU cores. The 4800U has 8 cores / 16 threads, so there are 16 worker processes:

─$ sudo service nginx status 
● nginx.service - A high performance web server and a reverse proxy server
     Loaded: loaded (/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
     Active: active (running) since Tue 2022-01-04 22:44:13 CST; 1s ago
       Docs: man:nginx(8)
    Process: 11215 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0>
    Process: 11216 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
   Main PID: 11217 (nginx)
      Tasks: 17 (limit: 3391)
     Memory: 17.6M
        CPU: 95ms
     CGroup: /system.slice/nginx.service
             ├─11217 "nginx: master process /usr/sbin/nginx -g daemon on; master_process on;"
             ├─11218 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">
             ├─11219 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">
             ├─11220 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">
             ├─11221 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">
             ├─11222 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">
             ├─11223 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">
             ├─11224 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">
             ├─11226 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">
             ├─11227 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">
             ├─11228 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">
             ├─11229 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">
             ├─11230 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">
             ├─11231 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">
             ├─11232 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">
             ├─11233 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">
             └─11234 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ">

1月 04 22:44:13 kali systemd[1]: Starting A high performance web server and a reverse proxy server...
1月 04 22:44:13 kali systemd[1]: Started A high performance web server and a reverse proxy server.
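As a quick sanity check on that worker count: with the `worker_processes auto;` setting that the Debian/Ubuntu package ships by default, Nginx spawns one worker per logical CPU, which is the same number Python can report:

```python
import os

# Logical CPU count the OS reports; `worker_processes auto;` gives
# Nginx one worker per logical CPU (16 on the 4800U used here).
print(os.cpu_count())
```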

Running the tests

Test 1

Client, charge! 🏄🏼‍♂️🏄🏼‍♂️🏄🏼‍♂️

wrk http://192.168.31.95 -t 16 -c 64 -d 10
  • -t means thread, the number of threads
  • -c means connect, the number of connections
  • -d means duration, in seconds
Load testing is all about hammering the server with highly concurrent requests, crashing into it like a flood. 🤯🤯🤯

Test results

─➤  ./wrk http://192.168.31.95  -t16 -c64  -d 10                                                                                 
Running 10s test @ http://192.168.31.95
  16 threads and 64 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.85ms    3.72ms  48.92ms   82.33%
    Req/Sec   708.88    348.74     1.48k    67.47%
  113734 requests in 10.09s, 17.35MB read
Requests/sec:  11271.96
Transfer/sec:      1.72MB

Look at Requests/sec: 11271.96. What does that mean? wrk completed about 11k requests per second on average, so the RPS is about 11k.
Meanwhile Transfer/sec: 1.72MB tells us only 1.72 MB/s of bandwidth was used, nowhere near what the network can carry!

(To be honest, I'm not sure which layer that 1.72 MB figure is measured at, whether application, transport, network, or link; my guess is the application layer.)
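The RPS figure is just total requests divided by the wall-clock duration, so we can double-check wrk's own arithmetic:

```python
# wrk reported: 113734 requests in 10.09s, Requests/sec: 11271.96
requests = 113734
duration_s = 10.09

rps = requests / duration_s
print(round(rps, 2))  # ~11271.95, matching wrk's Requests/sec line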

Test 2

Now let's adjust wrk's parameters and see how the results differ.

wrk http://192.168.31.95  -t32 -c160  -d 10

We raise the thread and connection counts for an even fiercer barrage of concurrent requests. 🤯🤯🤯🤯


Test results

─➤  ./wrk http://192.168.31.95  -t32 -c160  -d 10
Running 10s test @ http://192.168.31.95
  32 threads and 160 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.00ms    6.31ms  69.27ms   84.35%
    Req/Sec   701.97    395.33     1.29k    47.11%
  225052 requests in 10.07s, 34.33MB read
Requests/sec:  22349.79
Transfer/sec:      3.41MB

This time, RPS climbed to about 22k.

Test 3

wrk http://192.168.31.95  -t64 -c320  -d 10

Double it!!!!!!! Keep raising the thread and connection counts for an even fiercer barrage. 🤯🤯🤯🤯


Test results

─➤  ./wrk http://192.168.31.95  -t64 -c320  -d 10
Running 10s test @ http://192.168.31.95
  64 threads and 320 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    14.08ms    9.75ms  84.81ms   87.53%
    Req/Sec   384.02    157.25     1.03k    66.21%
  246408 requests in 10.07s, 37.59MB read
Requests/sec:  24458.27
Transfer/sec:      3.73MB

Well, this time RPS did not double, which means we've basically found Nginx's ceiling: about 25k RPS.

Nginx summary

In this single-machine-versus-single-machine load test, Nginx tops out at roughly 25k RPS.

Notably, the server's CPU utilization was still quite low at that point.

Now let's watch Gunicorn put on its show.

Pure Gunicorn

Below is the test code, written in Python. Any Python 3.6 or newer should run it (with a Flask recent enough, 1.1+, to return a dict directly as JSON).

from flask import Flask

app = Flask(__name__)


@app.route('/', methods=['GET'])
def home():
    # wrk hits this endpoint; Flask serializes the dict to JSON
    success: bool = False
    return {
        'status': success
    }


@app.route('/upload/', methods=['POST'])
def hello():
    success: bool = False
    return {
        'status': success
    }

Gunicorn supports several running modes:

  • Mode 1: like Nginx, Gunicorn supports a master-worker architecture, i.e. multiple processes 👍
  • Mode 2: a single process running multiple threads, i.e. pure threading 👍👍
  • Mode 3: processes and threads combined, i.e. modes 1 and 2 together: still master-worker with multiple processes, but each process runs multiple threads 👍👍👍

Let's test them one by one 🤪
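All three modes map onto just two settings in gunicorn.conf.py, which is itself an ordinary Python file. As a sketch (the worker/thread counts are the values this post experiments with, not recommendations):

```python
# gunicorn.conf.py -- Gunicorn config files are plain Python
import multiprocessing

bind = "0.0.0.0:63000"

# Mode 1: pure multi-process (the default sync worker)
workers = 32

# Mode 2: pure multi-thread (one process; any `threads` setting
# switches Gunicorn to the gthread worker)
# workers = 1
# threads = 32

# Mode 3: processes x threads (also the gthread worker)
# workers = 8
# threads = 8

# The Gunicorn docs' usual starting point for the worker count:
# workers = multiprocessing.cpu_count() * 2 + 1
```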

Pure multi-process mode

gunicorn.conf.py:

import multiprocessing

bind = "0.0.0.0:63000"
workers = 32

Reference:
Configuration File


Run Gunicorn

─$ gunicorn fapi:app -c gunicorn.conf.py
[2022-01-04 22:53:08 +0800] [11519] [INFO] Starting gunicorn 20.1.0
[2022-01-04 22:53:09 +0800] [11519] [INFO] Listening at: http://0.0.0.0:63000 (11519)
[2022-01-04 22:53:09 +0800] [11519] [INFO] Using worker: sync
[2022-01-04 22:53:09 +0800] [11520] [INFO] Booting worker with pid: 11520
[2022-01-04 22:53:09 +0800] [11521] [INFO] Booting worker with pid: 11521
[2022-01-04 22:53:09 +0800] [11522] [INFO] Booting worker with pid: 11522
[2022-01-04 22:53:09 +0800] [11523] [INFO] Booting worker with pid: 11523
[2022-01-04 22:53:09 +0800] [11524] [INFO] Booting worker with pid: 11524
[2022-01-04 22:53:09 +0800] [11525] [INFO] Booting worker with pid: 11525
[2022-01-04 22:53:09 +0800] [11526] [INFO] Booting worker with pid: 11526
[2022-01-04 22:53:09 +0800] [11527] [INFO] Booting worker with pid: 11527
[2022-01-04 22:53:09 +0800] [11528] [INFO] Booting worker with pid: 11528
[2022-01-04 22:53:09 +0800] [11529] [INFO] Booting worker with pid: 11529
[2022-01-04 22:53:09 +0800] [11530] [INFO] Booting worker with pid: 11530
[2022-01-04 22:53:09 +0800] [11531] [INFO] Booting worker with pid: 11531
[2022-01-04 22:53:09 +0800] [11532] [INFO] Booting worker with pid: 11532
[2022-01-04 22:53:09 +0800] [11533] [INFO] Booting worker with pid: 11533
[2022-01-04 22:53:09 +0800] [11534] [INFO] Booting worker with pid: 11534
[2022-01-04 22:53:09 +0800] [11535] [INFO] Booting worker with pid: 11535
[2022-01-04 22:53:09 +0800] [11536] [INFO] Booting worker with pid: 11536
[2022-01-04 22:53:09 +0800] [11537] [INFO] Booting worker with pid: 11537
[2022-01-04 22:53:10 +0800] [11538] [INFO] Booting worker with pid: 11538
[2022-01-04 22:53:10 +0800] [11539] [INFO] Booting worker with pid: 11539
[2022-01-04 22:53:10 +0800] [11540] [INFO] Booting worker with pid: 11540
[2022-01-04 22:53:10 +0800] [11541] [INFO] Booting worker with pid: 11541
[2022-01-04 22:53:10 +0800] [11542] [INFO] Booting worker with pid: 11542
[2022-01-04 22:53:10 +0800] [11543] [INFO] Booting worker with pid: 11543
[2022-01-04 22:53:10 +0800] [11544] [INFO] Booting worker with pid: 11544
[2022-01-04 22:53:10 +0800] [11545] [INFO] Booting worker with pid: 11545
[2022-01-04 22:53:10 +0800] [11546] [INFO] Booting worker with pid: 11546
[2022-01-04 22:53:10 +0800] [11547] [INFO] Booting worker with pid: 11547
[2022-01-04 22:53:10 +0800] [11548] [INFO] Booting worker with pid: 11548
[2022-01-04 22:53:10 +0800] [11549] [INFO] Booting worker with pid: 11549
[2022-01-04 22:53:10 +0800] [11550] [INFO] Booting worker with pid: 11550
[2022-01-04 22:53:10 +0800] [11551] [INFO] Booting worker with pid: 11551

Run the load test

wrk http://192.168.31.95:63000  -t64 -c320  -d 10

This time we won't tune wrk's parameters bit by bit; we go straight to the same maxed-out settings we used against Nginx.

Test results

─➤  ./wrk http://192.168.31.95:63000  -t64 -c320  -d 10                                                                                             1 ↵
Running 10s test @ http://192.168.31.95:63000
  64 threads and 320 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    76.09ms   37.75ms 664.24ms   71.68%
    Req/Sec    33.70     17.98   191.00     75.18%
  21342 requests in 10.09s, 3.30MB read
Requests/sec:   2115.13
Transfer/sec:    334.65KB

The RPS is a bit embarrassing: only about 2000.
🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡
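As a quick sanity check, the numbers wrk reports are internally consistent under Little's law (in-flight requests = throughput × average latency): on average about 161 requests were in flight, roughly half of the 320 open connections actually being served at any moment.

```python
# Little's law: L = lambda * W
# (in-flight requests = throughput * average latency)
rps = 2115.13        # Requests/sec reported by wrk
latency_s = 0.07609  # 76.09 ms average latency

in_flight = rps * latency_s
print(round(in_flight))  # ~161 of the 320 connections busy on average
```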

Pure multi-thread mode

gunicorn.conf.py:

import multiprocessing

bind = "0.0.0.0:63000"
threads = 32

Run Gunicorn

─$ gunicorn fapi:app -c gunicorn.conf.py
[2022-01-04 22:57:38 +0800] [11607] [INFO] Starting gunicorn 20.1.0
[2022-01-04 22:57:38 +0800] [11607] [INFO] Listening at: http://0.0.0.0:63000 (11607)
[2022-01-04 22:57:38 +0800] [11607] [INFO] Using worker: gthread
[2022-01-04 22:57:38 +0800] [11608] [INFO] Booting worker with pid: 11608

Run the load test

wrk http://192.168.31.95:63000  -t64 -c320  -d 10

Test results

─➤  ./wrk http://192.168.31.95:63000  -t64 -c320  -d 10
Running 10s test @ http://192.168.31.95:63000
  64 threads and 320 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   122.28ms  109.01ms   2.00s    98.58%
    Req/Sec    42.33     22.31   303.00     75.49%
  8888 requests in 10.08s, 1.42MB read
  Socket errors: connect 0, read 0, write 0, timeout 73
Requests/sec:    881.68
Transfer/sec:    143.80KB

Even more lame!!! Barely 900 RPS! 💩💩💩💩

Mixed multi-process and multi-thread mode

As the tests above show, neither pure processes nor pure threads performs impressively; even a bare hello world is this underwhelming.

So let's see how the mixed mode fares!

I prepared several runs, each with a different process count and threads-per-process. Let's go!

Test 1:

gunicorn.conf.py:

import multiprocessing

bind = "0.0.0.0:63000"

workers = 8
threads = 8

Round 1: 8 processes, 8 threads per process.

Run Gunicorn

─$ gunicorn fapi:app -c gunicorn.conf.py                                                               130 ⨯
[2022-01-04 22:59:45 +0800] [11668] [INFO] Starting gunicorn 20.1.0
[2022-01-04 22:59:45 +0800] [11668] [INFO] Listening at: http://0.0.0.0:63000 (11668)
[2022-01-04 22:59:45 +0800] [11668] [INFO] Using worker: gthread
[2022-01-04 22:59:45 +0800] [11669] [INFO] Booting worker with pid: 11669
[2022-01-04 22:59:45 +0800] [11670] [INFO] Booting worker with pid: 11670
[2022-01-04 22:59:45 +0800] [11671] [INFO] Booting worker with pid: 11671
[2022-01-04 22:59:45 +0800] [11672] [INFO] Booting worker with pid: 11672
[2022-01-04 22:59:46 +0800] [11673] [INFO] Booting worker with pid: 11673
[2022-01-04 22:59:46 +0800] [11674] [INFO] Booting worker with pid: 11674
[2022-01-04 22:59:46 +0800] [11675] [INFO] Booting worker with pid: 11675
[2022-01-04 22:59:46 +0800] [11676] [INFO] Booting worker with pid: 11676

Run the load test

wrk http://192.168.31.95:63000  -t64 -c320  -d 10

Test results

─➤  ./wrk http://192.168.31.95:63000  -t64 -c320  -d 10
Running 10s test @ http://192.168.31.95:63000
  64 threads and 320 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    74.25ms   24.12ms 304.79ms   72.75%
    Req/Sec    67.53     17.83   171.00     77.02%
  43368 requests in 10.10s, 6.91MB read
Requests/sec:   4293.70
Transfer/sec:    700.34KB

We get an RPS of about 4000, a big improvement over both previous modes! 👏

Test 2:

gunicorn.conf.py:

import multiprocessing

bind = "0.0.0.0:63000"

workers = 16
threads = 16

Round 2: 16 processes, 16 threads per process.

Run the load test

wrk http://192.168.31.95:63000  -t64 -c320  -d 10

Test results

─➤  ./wrk http://192.168.31.95:63000  -t64 -c320  -d 10
Running 10s test @ http://192.168.31.95:63000
  64 threads and 320 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    63.56ms   32.04ms 242.04ms   80.12%
    Req/Sec    82.27     29.26   180.00     69.46%
  51927 requests in 10.08s, 8.27MB read
Requests/sec:   5150.42
Transfer/sec:    840.00KB

We get an RPS of about 5000! 👏👏

Test 3:

gunicorn.conf.py:

import multiprocessing

bind = "0.0.0.0:63000"

workers = 8
threads = 32

Round 3: 8 processes, 32 threads per process.

Run the load test

wrk http://192.168.31.95:63000  -t64 -c320  -d 10

Test results

─➤  ./wrk http://192.168.31.95:63000  -t64 -c320  -d 10
Running 10s test @ http://192.168.31.95:63000
  64 threads and 320 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    80.44ms   31.75ms 515.60ms   90.45%
    Req/Sec    63.91     16.65   232.00     87.49%
  40686 requests in 10.07s, 6.48MB read
Requests/sec:   4039.57
Transfer/sec:    658.99KB

We get an RPS of about 4000! 👏👏

A slight drop. I didn't average over three runs, so random error like this is perfectly normal.

Test 4:

gunicorn.conf.py:

import multiprocessing

bind = "0.0.0.0:63000"

workers = 16
threads = 32

Round 4: 16 processes, 32 threads per process.

Run the load test

wrk http://192.168.31.95:63000  -t64 -c320  -d 10

Test results

─➤  ./wrk http://192.168.31.95:63000  -t64 -c320  -d 10
Running 10s test @ http://192.168.31.95:63000
  64 threads and 320 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    67.43ms   35.35ms 529.30ms   82.06%
    Req/Sec    76.93     27.15   160.00     74.54%
  49099 requests in 10.10s, 7.82MB read
Requests/sec:   4861.03
Transfer/sec:    792.80KB

We get an RPS of about 5000! 👏👏👏

Gunicorn summary

Compared with Nginx, the professional's professional HTTP server, Gunicorn clearly falls short.

Final score: 25k vs 5k.

Why can Nginx beat Gunicorn by such a margin?

  • Nginx is written in C, so it is bound to be faster than pure-Python Gunicorn: a language advantage. (Next time we could bring uWSGI into the mix.)
  • Nginx is optimization taken to the extreme; it's a power player that even makes use of CPU affinity!
  • Nginx is built on I/O multiplexing, which is far more efficient than Gunicorn's multi-process/multi-thread model. (Next time we could bring in uvicorn.)
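To make that last point concrete, here is a minimal sketch of the I/O-multiplexing pattern (one event loop watching many sockets, with no thread or process per connection), written with Python's standard selectors module. It is a toy illustration of the idea, not how Nginx is actually implemented:

```python
import selectors
import socket

# One event loop, many sockets: the readiness-based pattern behind
# epoll/kqueue that Nginx workers use, sketched with selectors.
RESPONSE = (b"HTTP/1.1 200 OK\r\nContent-Length: 12\r\n"
            b"Connection: close\r\n\r\nhello world!")

def make_server(host="127.0.0.1", port=0):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(128)
    srv.setblocking(False)
    return srv

def serve(srv, max_requests):
    sel = selectors.DefaultSelector()
    sel.register(srv, selectors.EVENT_READ)
    served = 0
    while served < max_requests:
        for key, _ in sel.select(timeout=1):
            if key.fileobj is srv:
                # new connection: just register it, spawn nothing
                conn, _ = srv.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:
                # connection is readable: read the request, reply, close
                conn = key.fileobj
                if conn.recv(4096):
                    conn.sendall(RESPONSE)
                sel.unregister(conn)
                conn.close()
                served += 1
    sel.close()
    srv.close()
```

This is why a single Nginx worker can juggle thousands of connections: each connection costs one file descriptor in the event loop rather than a whole thread or process.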

Nginx and Gunicorn combined

todo

