Nov. 7 update: we have fixed the empty-file issue in G-WAN v4.11.7, and G-WAN is now also twice as fast as Nginx at this game (with the www cache disabled).
The recently released G-WAN is faster than Nginx whatever the file size. G-WAN's caching is disabled by default to let people compare G-WAN with other servers like Nginx more easily.
Nginx has some caching features (an fd cache to skip stat() calls, and a memcached-based module), but both are necessarily much slower than G-WAN's local cache.
Disabling caching is also required for some applications, such as CDNs. Other applications (AJAX applications, for example) benefit enormously from G-WAN's caching, so feel free to re-enable it, even on a per-request basis.
Hopefully this clarifies the question.
"Reproducing the performance claims"
First, the title is misleading because the poorly documented test* above neither used the same tools nor fetched the HTTP resources used by the G-WAN tests.
[*] Where is your nginx.conf file? What are the HTTP response headers of both servers? What is your "bare-metal" 8-core CPU?
The G-WAN tests are based on ab.c, a wrapper written by the G-WAN team for weighttp (a load tool made by the Lighttpd server team), because the information disclosed by ab.c is more detailed.
Second, the tested file "null.html" is... an empty file.
We won't waste time discussing the irrelevance of such a test (how many empty HTML files does your web site serve?), but this is most likely the reason for the "poor performance" observed.
G-WAN was not made to serve empty files (we never tried, nor were we ever asked to do it). But we will surely add this capability to avoid the confusion created by this kind of test.
When you want to "check the claims", I encourage you to use weighttp (the fastest HTTP load tool of this test) and the 100.bin file (a 100-byte file with an incompressible MIME type: no gzip involved here).
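For reference, such a test file can be generated in one line (a sketch; the /tmp path and the document-root location are assumptions, adjust them for your setup):

```shell
# Create a 100-byte file of random (hence incompressible) bytes.
# The .bin extension maps to application/octet-stream in mime.types,
# which is not listed in gzip_types, so gzip stays out of the picture.
dd if=/dev/urandom of=/tmp/100.bin bs=100 count=1

# Then copy it into the server's document root, e.g.:
# cp /tmp/100.bin /usr/local/nginx/html/
```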
With non-empty files, Nginx is much slower than G-WAN, even in independent tests.
wrk was unknown to us until now, but it seems to be a tool made by the Nginx team:
"wrk was written specifically to try to push nginx to its limits, and was pushed up to 0.5Mr/s in the first round of tests."
Update (one day later)
Since you did not bother to publish more data, we did:
                    wrk                        weighttp
           -----------------------      -----------------------
Web Server  0.html RPS  100.html RPS    0.html RPS  100.html RPS
----------  ----------  ------------    ----------  ------------
G-WAN        80,783.03    649,367.11       175,515       717,813
Nginx       198,800.93    179,939.40       184,046       199,075
Just like in your test, we can see that wrk is slower than weighttp.
We can also see that G-WAN is faster than Nginx with both HTTP load tools.
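As a quick sanity check on the 100.html columns of the table above, the G-WAN/Nginx ratio comes out nearly identical with either tool (these figures are merely re-derived from the numbers already posted):

```shell
# G-WAN vs. Nginx requests/sec ratio on 100.html, per load tool
awk 'BEGIN { printf "wrk:      %.2fx\n", 649367.11 / 179939.40 }'  # → wrk:      3.61x
awk 'BEGIN { printf "weighttp: %.2fx\n", 717813   / 199075   }'   # → weighttp: 3.61x
```

Both tools put G-WAN at roughly 3.6x Nginx on the 100-byte file.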
Here are the detailed results:
G-WAN
./wrk -c300 -d3 -t6 "http://127.0.0.1:8080/0.html"
Running 3s test @ http://127.0.0.1:8080/0.html
6 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 3.87ms 5.30ms 80.97ms 99.53%
Req/Sec 14.73k 1.60k 16.33k 94.67%
248455 requests in 3.08s, 55.68MB read
Socket errors: connect 0, read 248448, write 0, timeout 0
Requests/sec: 80783.03
Transfer/sec: 18.10MB
./wrk -c300 -d3 -t6 "http://127.0.0.1:8080/100.html"
Running 3s test @ http://127.0.0.1:8080/100.html
6 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 263.15us 381.82us 16.50ms 99.60%
Req/Sec 115.55k 14.38k 154.55k 82.70%
1946700 requests in 3.00s, 655.35MB read
Requests/sec: 649367.11
Transfer/sec: 218.61MB
weighttp -kn300000 -c300 -t6 "http://127.0.0.1:8080/0.html"
progress: 100% done
finished in 1 sec, 709 millisec and 252 microsec, 175515 req/s, 20159 kbyte/s
requests: 300000 total, 300000 started, 300000 done, 150147 succeeded, 149853 failed, 0 errored
status codes: 150147 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 35284545 bytes total, 35284545 bytes http, 0 bytes data
weighttp -kn300000 -c300 -t6 "http://127.0.0.1:8080/100.html"
progress: 100% done
finished in 0 sec, 417 millisec and 935 microsec, 717813 req/s, 247449 kbyte/s
requests: 300000 total, 300000 started, 300000 done, 300000 succeeded, 0 failed, 0 errored
status codes: 300000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 105900000 bytes total, 75900000 bytes http, 30000000 bytes data
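The weighttp traffic line above can be verified by hand: 300,000 responses carrying a 100-byte body each account for exactly the 30,000,000 "data" bytes, the remainder being HTTP headers:

```shell
# 300,000 requests x 100-byte body = payload ("data") bytes
echo $((300000 * 100))           # prints 30000000
# total bytes - http (header) bytes must match the same figure
echo $((105900000 - 75900000))   # prints 30000000
```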
Nginx
./wrk -c300 -d3 -t6 "http://127.0.0.1:8080/100.html"
Running 3s test @ http://127.0.0.1:8080/100.html
6 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.54ms 1.16ms 11.67ms 72.91%
Req/Sec 34.47k 6.02k 56.31k 70.65%
539743 requests in 3.00s, 180.42MB read
Requests/sec: 179939.40
Transfer/sec: 60.15MB
./wrk -c300 -d3 -t6 "http://127.0.0.1:8080/0.html"
Running 3s test @ http://127.0.0.1:8080/0.html
6 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.44ms 1.15ms 9.37ms 75.93%
Req/Sec 38.16k 8.57k 62.20k 69.98%
596070 requests in 3.00s, 140.69MB read
Requests/sec: 198800.93
Transfer/sec: 46.92MB
weighttp -kn300000 -c300 -t6 "http://127.0.0.1:8080/0.html"
progress: 100% done
finished in 1 sec, 630 millisec and 19 microsec, 184046 req/s, 44484 kbyte/s
requests: 300000 total, 300000 started, 300000 done, 300000 succeeded, 0 failed, 0 errored
status codes: 300000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 74250375 bytes total, 74250375 bytes http, 0 bytes data
weighttp -kn300000 -c300 -t6 "http://127.0.0.1:8080/100.html"
progress: 100% done
finished in 1 sec, 506 millisec and 968 microsec, 199075 req/s, 68140 kbyte/s
requests: 300000 total, 300000 started, 300000 done, 300000 succeeded, 0 failed, 0 errored
status codes: 300000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 105150400 bytes total, 75150400 bytes http, 30000000 bytes data
The Nginx configuration file, trying to match G-WAN's behavior:
# ./configure --without-http_charset_module --without-http_ssi_module
#             --without-http_userid_module --without-http_rewrite_module
#             --without-http_limit_zone_module --without-http_limit_req_module

user                 www-data;
worker_processes     6;
worker_rlimit_nofile 500000;
pid                  /var/run/nginx.pid;

events {
    # tried other values up to 100000 without better results
    worker_connections 4096;
    # multi_accept on; seems to be slower
    multi_accept off;
    use epoll;
}

http {
    charset      utf-8; # HTTP "Content-Type:" header
    sendfile     on;
    tcp_nopush   on;
    tcp_nodelay  on;
    keepalive_timeout  10;
    keepalive_requests 10; # 1000+ slows down nginx enormously...
    types_hash_max_size 2048;
    include      /usr/local/nginx/conf/mime.types;
    default_type application/octet-stream;

    gzip off; # adjust for your tests
    gzip_min_length 500;
    gzip_vary on; # HTTP "Vary: Accept-Encoding" header
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    # cache metadata (file time, size, existence, etc.) to prevent syscalls;
    # this does not cache file contents. It should help in benchmarks where
    # a limited number of files is accessed more often than others (this is
    # our case as we serve one single file fetched repeatedly)
    # THIS IS ACTUALLY SLOWING DOWN THE TEST...
    #
    # open_file_cache max=1000 inactive=20s;
    # open_file_cache_errors on;
    # open_file_cache_min_uses 2;
    # open_file_cache_valid 300s;

    server {
        listen 127.0.0.1:8080;
        access_log off;
        # only log critical errors
        #error_log /usr/local/nginx/logs/error.log crit;
        error_log /dev/null crit;

        location / {
            root  /usr/local/nginx/html;
            index index.html;
        }

        location = /nop.gif {
            empty_gif;
        }

        location /imgs {
            autoindex on;
        }
    }
}
Comments, especially from Nginx experts, are welcome to discuss on the basis of this fully documented test.