
I have been searching for a solution to this problem for a few hours, but haven't found anything.

I am currently working on a Meteor-based application.

The scenario is this: once the site is open and all assets have loaded in the browser, the browser keeps making recurring XHR calls to the server. These calls happen at a fixed interval of 25 seconds.

This can be seen in the Network tab of the browser console. See the pending request in the last row of the image.

[screenshot: the browser's Network tab showing the pending request]

I can't figure out where it originates, or why it is called automatically even when the user is idle.

The question is: how can I disable these automatic requests? I want to trigger the requests manually, e.g. when a menu item is selected, etc.

Any help would be appreciated.

[Update]

In response to Jan Dvorak's comment:

When I type "e" into the search box, a list of events whose names start with the letter "e" is displayed.

That request is sent with all valid parameters and a payload, like this:

["{\"msg\":\"sub\",\"id\":\"8ef5e419-c422-429a-907e-38b6e669a493\",\"name\":\"event_Coll_Search_by_PromoterName\",\"params\":[\"e\"]}"]

And this is the valid response:

a["{\"msg\":\"data\",\"subs\":[\"8ef5e419-c422-429a-907e-38b6e669a493\"]}"]

The code for this operation is posted here.

But in the case of the automatic recurring requests, there is no payload and the response is just a single letter "h", which is strange, isn't it? How can I get rid of this?


1 Answer


Meteor has a feature called Live page updates:

"Just write your templates. They automatically update when data in the database changes. No more boilerplate redraw code to write. Supports any templating language."

To support this feature, Meteor needs to do some server-client communication behind the scenes.


Traditionally, HTTP was created to fetch dead data: the client tells the server what it needs and gets it back. There is no way for the server to tell the client something on its own. Later, the need arose to push data to the client, and several alternatives came into existence:

Polling:

The client makes periodic requests to the server. The server responds with new data or immediately says "no data". It's easy to implement and doesn't use many resources. However, it's not exactly live: it works for a news ticker, but it's not good for a chat application.

If you increase the polling frequency, you improve the update rate, but the resource usage grows with the polling frequency, not with the data transfer rate. HTTP requests are not exactly cheap. One request per second from multiple clients at the same time could really hurt the server.
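For illustration, a naive polling client could look something like this (a minimal sketch; the /poll endpoint and the render() helper are made up for the example):

```javascript
// Naive polling: ask the server for news every few seconds,
// whether or not anything has actually changed.
function startPolling(intervalMs) {
  setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/poll"); // hypothetical endpoint
    xhr.onload = function () {
      var data = JSON.parse(xhr.responseText);
      if (data.messages.length > 0) {
        render(data.messages); // hypothetical UI update
      }
      // if there was nothing new, the whole round trip was wasted
    };
    xhr.send();
  }, intervalMs);
}

startPolling(5000); // one request every 5 seconds, per client
```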

Hanging requests:

The client makes a request to the server. If the server has data, it sends it immediately; if it doesn't, it holds the request open and doesn't respond until it does. Changes are picked up immediately, and no data is transferred when there is nothing to send. It does have a few drawbacks, though:

If a web proxy sees that the server is silent, it eventually cuts the connection off. This means that even when there is no data to send, the server needs to send a keep-alive response anyway to keep the proxies (and the web browser) happy.

Hanging requests don't use up (much) bandwidth, but they do take up memory. Today's servers can handle many concurrent TCP connections, so this is less of an issue than it used to be. What does need to be considered is the memory tied up by the threads holding on to these requests, especially when each connection is bound to a specific thread serving it.

Browsers have hard limits on the number of concurrent requests per domain and in total. Again, this is less of a concern now than it used to be. It therefore seems like a good idea to have only one hanging request per session.

Managing hanging requests feels somewhat manual, as you have to make a new request after each response. A TCP handshake takes some time as well, but we can live with a refractory period of 300 ms at worst.
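A hanging-request (long-polling) loop could be sketched like this; the /updates endpoint, the render() helper and the "h" convention are placeholders for illustration, not Meteor's actual wire format:

```javascript
// Long polling: keep exactly one request in flight. The server answers
// only when it has data (or when its keep-alive timeout fires), and the
// client immediately opens the next request after each response.
function longPoll() {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/updates"); // hypothetical endpoint
  xhr.onload = function () {
    if (xhr.responseText !== "h") {           // "h" = keep-alive, nothing new
      render(JSON.parse(xhr.responseText));   // hypothetical UI update
    }
    longPoll(); // reopen the hanging request right away
  };
  xhr.onerror = function () {
    setTimeout(longPoll, 1000); // back off briefly on network errors
  };
  xhr.send();
}

longPoll();
```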

Chunked response:

The client creates a hidden iframe whose source corresponds to the data stream. The server sends the HTTP response headers immediately and leaves the connection open. To send a message, the server wraps it in a pair of <script></script> tags, which the browser executes as soon as it receives the closing tag. The upside is that there is no connection reopening, but there is more overhead with each message. Moreover, this requires a callback in the global scope for the response to call.

Also, this cannot be used with cross-domain requests, as cross-domain iframe communication presents its own set of problems. The need to trust the server is also a challenge here.
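A bare-bones version of this "forever frame" trick might look like the following; the global callback name and the /stream URL are invented for the example:

```javascript
// In the main page: define a global callback and open a hidden iframe
// whose response never finishes; the server streams <script> tags into it.
window.onStreamMessage = function (msg) { // must be global so the iframe can reach it
  render(msg); // hypothetical UI update
};

var frame = document.createElement("iframe");
frame.style.display = "none";
frame.src = "/stream"; // hypothetical endpoint that never closes its response
document.body.appendChild(frame);

// For each message, the server writes something like this into the response:
//   <script>parent.onStreamMessage({"text": "hello"});</script>
// The browser executes each chunk as soon as the closing </script> tag arrives.
```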

Web Sockets:

These start as a normal HTTP connection but, after the handshake, no longer follow the HTTP protocol. From the programming point of view, things are as simple as they can be: the API is a classic open/callback style on the client side, and the server just pushes messages into an open socket. There is no need to reopen anything after each message.

There still needs to be an open connection, but with the browser limits out of the way that's not really an issue here. The browser knows the connection is going to stay open for a while, so it doesn't apply the same limits as it does to normal requests.

They seem like the ideal solution, but there is one major issue: IE<10 doesn't support them. As long as IE8 is alive, web sockets cannot be relied upon. The native Android browser and Opera Mini are out as well (ref.).

Still, web sockets seem to be the way to go once IE8 (and IE9) finally dies.
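On the client side the WebSocket API really is that simple; the URL and message shapes below are placeholders, not Meteor's protocol:

```javascript
// WebSocket client: one persistent, bidirectional connection;
// nothing needs to be reopened after each message.
var socket = new WebSocket("ws://example.com/socket"); // placeholder URL

socket.onopen = function () {
  socket.send(JSON.stringify({ msg: "sub", name: "events", params: ["e"] }));
};

socket.onmessage = function (event) {
  render(JSON.parse(event.data)); // hypothetical UI update
};

socket.onclose = function () {
  // only reconnect if the whole connection drops;
  // there is no per-message reopening as with hanging requests
};
```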


What you see are hanging requests with a timeout of 25 seconds, used to implement the live update feature. As I already said, the keep-alive message ("h") is there so that the browser doesn't conclude it isn't going to get a response. "h" simply means "nothing has happened".
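The server side of such a hanging request can be pictured roughly like this (a Node-style sketch of the general idea, not Meteor's actual implementation; onNextChange() stands in for whatever notifies the server of new data):

```javascript
// Hold the response open for up to 25 seconds; if nothing happens in that
// time, send the keep-alive "h" so proxies and the browser stay happy.
var http = require("http");

http.createServer(function (req, res) {
  res.writeHead(200, { "Content-Type": "text/plain" });

  var keepAlive = setTimeout(function () {
    res.end("h"); // nothing happened within the timeout
  }, 25000);

  // hypothetical change notification; in Meteor this is driven by the database
  onNextChange(function (change) {
    clearTimeout(keepAlive);
    res.end(JSON.stringify(change)); // real data; the client reconnects next
  });
}).listen(3000);
```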

Chrome supports web sockets, so Meteor could have used them with a fallback to hanging requests, but, frankly, hanging requests are not at all bad once you've got them implemented (sure, the browser connection limit still applies).
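If you wanted to prefer web sockets and fall back to hanging requests yourself, the feature detection is as simple as this (a sketch only; Meteor's underlying transport layer does a more careful version of this negotiation for you):

```javascript
// Pick a transport: web sockets where the browser supports them,
// hanging requests (long polling) otherwise.
function chooseTransport() {
  if (typeof WebSocket !== "undefined") {
    return "websocket";
  }
  return "xhr-polling"; // fall back to hanging requests
}

console.log("using transport:", chooseTransport());
```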

answered 2012-12-26T11:34:27.023