
So, I have a simple Flask API application running on gunicorn with tornado workers. The gunicorn command line is:

gunicorn -w 64 --backlog 2048 --keep-alive 5 -k tornado -b 0.0.0.0:5005 --pid /tmp/gunicorn_api.pid api:APP

When I run Apache Benchmark from another server directly against gunicorn, here are the relevant results:

ab -n 1000 -c 1000 'http://****:5005/v1/location/info?location=448&ticket=55384&details=true&format=json&key=****&use_cached=true'
Requests per second:    2823.71 [#/sec] (mean)
Time per request:       354.144 [ms] (mean)
Time per request:       0.354 [ms] (mean, across all concurrent requests)
Transfer rate:          2669.29 [Kbytes/sec] received

So we're getting close to 3k reqs/sec.

Now I need SSL, so I'm running nginx in front as a reverse proxy. Here is the same benchmark run against nginx on the same server:

ab -n 1000 -c 1000 'https://****/v1/location/info?location=448&ticket=55384&details=true&format=json&key=****&use_cached=true'
Requests per second:    355.16 [#/sec] (mean)
Time per request:       2815.621 [ms] (mean)
Time per request:       2.816 [ms] (mean, across all concurrent requests)
Transfer rate:          352.73 [Kbytes/sec] received

That's an 87.4% drop in performance, and for the life of me I can't figure out what's wrong with my nginx setup. Here it is:

upstream sdn_api {
    server 127.0.0.1:5005;

    keepalive 100;
}

server {
    listen [::]:443;

    ssl on;
    ssl_certificate /etc/ssl/certs/api.sdninja.com.crt;
    ssl_certificate_key /etc/ssl/private/api.sdninja.com.key;
    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers ALL:!kEDH:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;
    ssl_session_cache shared:SSL:10m;

    server_name api.*****.com;
    access_log  /var/log/nginx/sdn_api.log;

    location / {
        proxy_pass http://sdn_api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        client_max_body_size 100M;
        client_body_buffer_size 1m;
        proxy_intercept_errors on;
        proxy_buffering on;
        proxy_buffer_size 128k;
        proxy_buffers 256 16k;
        proxy_busy_buffers_size 256k;
        proxy_temp_file_write_size 256k;
        proxy_max_temp_file_size 0;
        proxy_read_timeout 300;
    }

}
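
(Side note, in case it's relevant: as far as I understand from the nginx docs, the keepalive setting in the upstream block only takes effect when requests to the backend are proxied over HTTP/1.1 with the Connection header cleared, i.e. roughly something like this inside the location block:)

        # assumption on my part, not verified in this setup:
        # upstream keepalive needs HTTP/1.1 and an empty Connection header
        proxy_http_version 1.1;
        proxy_set_header Connection "";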

And here is my nginx.conf:

user www-data;
worker_processes 8;
pid /var/run/nginx.pid;

events {
    worker_connections 2048;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip off;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##

    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##

    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

So, does anyone have any idea why it's running so slowly with this configuration? Thanks!


1 Answer


A big part of the HTTPS overhead is in the handshake. Pass -k to ab to enable keep-alive connections; you'll see that the benchmark is significantly faster.
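For example, re-running the benchmark from the question with keep-alive enabled is just a matter of adding the flag:

ab -k -n 1000 -c 1000 'https://****/v1/location/info?location=448&ticket=55384&details=true&format=json&key=****&use_cached=true'

With -k, ab reuses each connection for many requests, so the TLS handshake cost is paid once per connection rather than once per request, which is much closer to how real clients behave.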

Answered on 2013-05-05T18:49:58.030