
So according to the haproxy author, who knows a thing or two about HTTP:

Keep-alive was invented to reduce CPU usage on servers when CPUs were 100 times slower. But what is not said is that persistent connections consume a lot of memory while not being usable by anybody except the client which opened them. Today in 2009, CPUs are very cheap and memory is still limited to a few gigabytes by the architecture or the price. If a site needs keep-alive, there is a real problem. Highly loaded sites often disable keep-alive to support the maximum number of simultaneous clients. The real downside of not having keep-alive is a slightly increased latency to fetch objects. Browsers double the number of concurrent connections on non-keepalive sites to compensate for this.

(from http://haproxy.1wt.eu/)

Does this match other people's experience? i.e. without keep-alive, are the results barely noticeable nowadays? (It's probably worth noting that with websockets and the like a connection is kept "open" regardless of keep-alive status anyway, for very responsive applications.) Is the effect greater for people who are far from the server, or when there are many artifacts to load from the same host while loading a page? (I would think things like CSS, images and JS are increasingly coming from cache-friendly CDNs.)

Thoughts?

(Not sure whether this belongs on serverfault.com, but I won't cross-post until someone tells me to move it there.)


4 Answers


Hey, since I'm the author of this citation, I'll respond :-)

There are two big issues on large sites: concurrent connections and latency. Concurrent connections are caused by slow clients which take ages to download contents, and by idle connection states. Those idle connection states are caused by connection reuse to fetch multiple objects, known as keep-alive, which is further increased by latency. When the client is very close to the server, it can make intensive use of the connection and ensure it is almost never idle. However, when the sequence ends, nobody cares to quickly close the channel and the connection remains open and unused for a long time. That's the reason why many people suggest using a very low keep-alive timeout. On some servers like Apache, the lowest timeout you can set is one second, and that is often far too much to sustain high loads: if you have 20000 clients in front of you and they fetch on average one object every second, you'll have those 20000 connections permanently established. 20000 concurrent connections on a general purpose server like Apache is huge, will require between 32 and 64 GB of RAM depending on what modules are loaded, and you probably cannot hope to go much higher even by adding RAM. In practice, for 20000 clients you may even see 40000 to 60000 concurrent connections on the server, because browsers will try to set up 2 to 3 connections if they have many objects to fetch.

If you close the connection after each object, the number of concurrent connections will dramatically drop. Indeed, it will drop to a fraction corresponding to the average time to download an object divided by the time between objects. If you need 50 ms to download an object (a thumbnail photo, a button, etc.), and you download on average 1 object per second as above, then you'll only have 0.05 connections per client, which is only 1000 concurrent connections for 20000 clients.
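To make that arithmetic concrete, here is a back-of-envelope sketch in Python. The inputs are the figures quoted in this answer (20000 clients, one object per second, 50 ms per object, browsers opening up to 3 connections), not measurements of any particular site.

```python
# Back-of-envelope version of the figures above; the inputs are the numbers
# quoted in the answer, not measurements of a real site.
clients = 20_000           # concurrently active clients
object_interval_s = 1.0    # each client fetches one object per second
download_time_s = 0.05     # 50 ms to download a small object

# With keep-alive and a timeout of at least one second, every client keeps
# its connection open between fetches, and browsers may open 2 to 3 of them.
keepalive_conns_min = clients          # one connection per client
keepalive_conns_max = clients * 3      # browsers opening up to 3 connections

# Without keep-alive, a connection only exists while an object is in flight,
# so each client holds download_time / interval of a connection on average.
no_keepalive_conns = clients * (download_time_s / object_interval_s)

print(f"keep-alive:    {keepalive_conns_min} to {keepalive_conns_max} concurrent connections")
print(f"no keep-alive: {no_keepalive_conns:.0f} concurrent connections")
# keep-alive:    20000 to 60000 concurrent connections
# no keep-alive: 1000 concurrent connections
```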

Now the time to establish new connections is going to count. Far remote clients will experience unpleasant latency. In the past, browsers used to open large numbers of concurrent connections when keep-alive was disabled. I remember figures of 4 on MSIE and 8 on Netscape. This really divided the average per-object latency by that much. Now that keep-alive is present everywhere, we no longer see numbers that high, because doing so further increases the load on remote servers, and browsers take care of protecting the Internet's infrastructure.

This means that with today's browsers, it's harder to get non-keep-alive services as responsive as keep-alive ones. Also, some browsers (e.g. Opera) use heuristics to try to use pipelining. Pipelining is an efficient way of using keep-alive, because it almost eliminates latency by sending multiple requests without waiting for a response. I have tried it on a page with 100 small photos, and the first access is about twice as fast as without keep-alive, but the next access is about 8 times as fast, because the responses are so small that only latency counts (only "304" responses).
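For the curious, the mechanism can be tried by hand over a plain socket: write several requests before reading any response, so the round trip is paid once rather than once per request. Below is a minimal Python sketch; the host is just a placeholder, and many servers handle pipelined requests poorly, so treat it as an illustration rather than a benchmark.

```python
import socket

HOST = "example.com"   # placeholder host, not from the original answer

first = f"GET / HTTP/1.1\r\nHost: {HOST}\r\n\r\n".encode()
# Ask the server to close after the second response so the read loop ends.
second = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode()

with socket.create_connection((HOST, 80), timeout=5) as sock:
    # Both requests go out back to back, before any response has arrived,
    # so the network round trip is paid once instead of once per request.
    sock.sendall(first + second)
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:               # server closed the connection
            break
        chunks.append(data)

raw = b"".join(chunks)
# Rough check: two status lines means both pipelined requests were answered
# on the single connection (assumes the bodies don't contain the marker).
print(raw.count(b"HTTP/1.1 "), "responses received on one connection")
```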

I'd say that ideally we should have some tunables in the browsers to make them keep the connections alive between fetched objects, and immediately drop them when the page is complete. But we're not seeing that, unfortunately.

For this reason, some sites which need to run general purpose servers such as Apache on the front side and which have to support large numbers of clients generally have to disable keep-alive. And to force browsers to increase the number of connections, they use multiple domain names so that downloads can be parallelized. It's particularly problematic on sites making intensive use of SSL, because the connection setup cost is even higher as there is one additional round trip.

What is more commonly observed nowadays is that such sites prefer to install light frontends such as haproxy or nginx, which have no problem handling tens to hundreds of thousands of concurrent connections; they enable keep-alive on the client side, and disable it on the Apache side. On that side, the cost of establishing a connection is almost null in terms of CPU, and not noticeable at all in terms of time. This way you get the best of both worlds: low latency due to keep-alive with very low timeouts on the client side, and a low number of connections on the server side. Everyone is happy :-)

Some commercial products further improve this by reusing connections between the front load balancer and the server and multiplexing all client connections over them. When the servers are close to the LB, the gain is not much higher than with the previous solution, but it will often require adaptations on the application to ensure there is no risk of session crossing between users due to the unexpected sharing of a connection between multiple users. In theory this should never happen. Reality is much different :-)

Answered 2010-11-10T06:49:17.247

In the years since this was written (and posted here on stackoverflow), we now have servers such as nginx, which are growing in popularity.

nginx, for example, can hold open 10,000 keep-alive connections in a single process with only 2.5 MB (megabytes) of RAM. In fact it's easy to hold open many thousands of connections with very little RAM, and the only limits you'll hit are other ones, such as the number of open file handles or TCP connections.
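As a side note on those "other limits": on a POSIX system the per-process open-file limit is usually the first one you hit, and it is easy to inspect. A small sketch (Linux/macOS only, using the standard-library resource module):

```python
# Every keep-alive connection consumes one file descriptor, so the
# per-process descriptor limit caps concurrency long before RAM does.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open file descriptors: soft limit {soft}, hard limit {hard}")
# A default soft limit of 1024 caps a process at roughly 1000 concurrent
# connections unless it is raised (e.g. via ulimit -n or setrlimit).
```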

Keep-alive is a problem not because of any issue with the keep-alive spec itself, but because of Apache's process-based scaling model and because keep-alives were hacked into a server whose architecture wasn't designed to accommodate them.

Especially problematic is Apache Prefork + mod_php + keep-alives. This is a model where every single connection continues to occupy all the RAM that a PHP process occupies, even if it's completely idle and only remains open as a keep-alive. This is not scalable. But servers don't have to be designed this way; there is no particular reason a server needs to keep every keep-alive connection in a separate process (especially not when every such process runs a full PHP interpreter). PHP-FPM and an event-based server processing model such as that of nginx solve the problem elegantly.
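To illustrate the event-based model being contrasted with prefork here, the sketch below keeps any number of idle keep-alive connections open inside a single Python process. It is a toy illustration of the idea, not how nginx or PHP-FPM are actually implemented.

```python
import asyncio

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # Serve any number of requests on the same connection (keep-alive).
    # Request framing is deliberately naive: one read per request is enough
    # for a toy client, but not for real HTTP parsing.
    while True:
        request = await reader.read(4096)
        if not request:            # client closed the connection
            break
        body = b"hello\n"
        headers = (
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: text/plain\r\n"
            f"Content-Length: {len(body)}\r\n"
            "Connection: keep-alive\r\n\r\n"
        ).encode()
        writer.write(headers + body)
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    # One process, one thread: every idle keep-alive connection is just a
    # small amount of state inside the event loop, not a dedicated worker.
    server = await asyncio.start_server(handle, "127.0.0.1", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```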

Update 2015:

SPDY and HTTP/2 replace HTTP's keep-alive functionality with something even better: not only can a connection be kept alive and carry multiple requests and responses, they can be multiplexed, so responses can be sent in any order and in parallel, rather than only in the order they were requested. This prevents slow responses from blocking faster ones and removes the temptation for browsers to hold open multiple parallel connections to a single server. These technologies further highlight the inadequacies of the mod_php approach and the benefits of an event-based (or at the very least, multi-threaded) web server coupled separately with something like PHP-FPM.
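A quick way to watch that multiplexing happen from Python is the third-party httpx client with its optional HTTP/2 support (pip install "httpx[http2]"). Neither library is mentioned in the answer, this is just one convenient sketch, and the URL is merely an example of an HTTP/2-capable endpoint.

```python
import asyncio
import httpx

async def main() -> None:
    # http2=True requires the optional h2 dependency (httpx[http2]).
    async with httpx.AsyncClient(http2=True) as client:
        url = "https://nghttp2.org/httpbin/get"   # example HTTP/2 endpoint
        # The three requests are issued concurrently; over HTTP/2 they become
        # separate streams multiplexed on one TCP+TLS connection, so a slow
        # response cannot block the others the way it would with keep-alive.
        responses = await asyncio.gather(*(client.get(url) for _ in range(3)))
        for r in responses:
            print(r.http_version, r.status_code)

asyncio.run(main())
```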

Answered 2013-12-26T06:27:47.293

Very long keep-alives can be useful if you're using an "origin pull" CDN such as CloudFront or CloudFlare. In fact, this can work out faster than no CDN at all, even if you're serving completely dynamic content.

If you have keep-alives so long that each PoP effectively has a permanent connection to your server, then the first time users visit your site they can do a fast TCP handshake with their local PoP instead of a slow handshake with you. (Light itself takes around 100 ms to travel halfway around the world via fibre, and establishing a TCP connection requires three packets to be passed back and forth. SSL requires three round trips.)
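As a rough illustration of why that matters, here is the handshake arithmetic spelled out in Python; the round-trip times are illustrative assumptions, not measurements.

```python
# Rough connection-setup cost before the first byte of a response can arrive.
# RTT values are illustrative assumptions, not measurements.
rtt_far_origin_ms = 200.0   # client <-> origin on the far side of the world
rtt_local_pop_ms = 20.0     # client <-> nearby CDN PoP

def setup_overhead_ms(rtt_ms: float, tls_round_trips: int = 3) -> float:
    # One RTT for the TCP three-way handshake (the request can ride on the
    # final ACK), plus the three round trips quoted above for SSL/TLS.
    return rtt_ms * (1 + tls_round_trips)

print("direct to far origin:", setup_overhead_ms(rtt_far_origin_ms), "ms")
print("to local PoP (origin connection already warm):",
      setup_overhead_ms(rtt_local_pop_ms), "ms plus origin processing time")
```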

Answered 2013-04-11T14:25:19.743

My understanding was that it had little to do with CPU, but rather with the latency of opening repeated sockets to the other side of the world. Even if you have infinite bandwidth, connection latency will slow down the whole process, amplified if your page has dozens of objects. Even a persistent connection has request/response latency, but it's reduced when you have 2 sockets, since on average one should be streaming data while the other could be blocking. Also, the remote host is never going to assume a socket is connected before letting you write to it; it needs the full round-trip handshake. Again, I don't claim to be an expert, but this is how I've always seen it. What would really be cool is a fully asynchronous protocol (no, not a fully sick protocol).

Answered 2010-11-09T22:50:53.213