
Consider the problem:

I have n Tomcat nodes running a web application that serves some stateless content. For example, the first 1000 requests to the application must be answered with content 'a', the next 10000 with 'b', and the rest with 'c'.

My first thought was messaging: the app fetches the total served count from some store -> if it is below the limit, it serves content 'a' -> once the content is served, the app sends a message -> the message is consumed -> the total served count in the store is incremented -> ... But in that case there is a high chance of overshooting, because of the slight (or, at peak load, huge) delay between the serve event and the increment of the counter in the store.
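To make the failure mode concrete, here is a deterministic toy simulation (not from the question; the names, and the batch size standing in for messaging latency, are made up) in which increments reach the store only in delayed batches, so decisions are made against a stale count:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of the race described above: each request reads the stored count,
// decides, then *enqueues* an increment that the store only applies later
// (here: in batches, standing in for messaging latency).
public class OvershootDemo {
    static long simulateServedA(int totalRequests, int limit, int batch) {
        long storedCount = 0;                     // counter as the shared store sees it
        long servedA = 0;
        Queue<Long> pending = new ArrayDeque<>(); // in-flight increment messages
        for (int i = 0; i < totalRequests; i++) {
            if (storedCount < limit) servedA++;   // decision uses a stale count
            pending.add(1L);
            if (pending.size() >= batch) {        // the store catches up late
                while (!pending.isEmpty()) storedCount += pending.poll();
            }
        }
        return servedA;
    }

    public static void main(String[] args) {
        long servedA = simulateServedA(2_000, 1_000, 64);
        System.out.println("served 'a' " + servedA + " times (limit was 1000)");
    }
}
```

With any delay between serving and incrementing, the number of 'a' responses exceeds the limit.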

Then I considered setting up memcached-session-manager to store the counter in a shared session. But that seems heavyweight for such a simple case.

Can anyone suggest a straightforward way for multiple JVM instances to communicate with each other (whatever fits my case)?


3 Answers


If it absolutely must be correct and you don't want delays, then I think Redis or Hazelcast are your best options. Redis in particular, since it has atomic counting operations (such as INCR). While in theory you could do the same with memcached, Redis was designed for this exact use case (statistics counters).
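As a sketch of that pattern, every request does one atomic increment-and-get against shared storage and decides from the returned value. The interface, names, and thresholds below are illustrative, not from the answer; an in-process AtomicLong stands in for Redis so the example runs standalone, while in production the same interface would wrap a Redis INCR on a shared key:

```java
import java.util.concurrent.atomic.AtomicLong;

// One atomic increment-and-get per request; the returned value alone decides
// the content, so no two JVMs can both claim the same count.
public class AtomicServeCounter {
    interface SharedCounter {
        long incrementAndGet(); // must be atomic across all JVMs (e.g. Redis INCR)
    }

    static String serve(SharedCounter counter) {
        long n = counter.incrementAndGet(); // 1-based count of requests so far
        if (n <= 1_000) return "a";
        if (n <= 11_000) return "b";
        return "c";
    }

    public static void main(String[] args) {
        // Local stand-in for the shared store, for demonstration only.
        AtomicLong local = new AtomicLong();
        SharedCounter counter = local::incrementAndGet;
        for (int i = 0; i < 1_001; i++) serve(counter);
        System.out.println("count after 1001 requests: " + local.get());
    }
}
```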

You could also use an in-memory database like H2, or set a Postgres table to unlogged (or whatever the equivalent is for your RDBMS) to keep a pseudo-in-memory, less-durable table. The annoying thing about RDBMSs is that not all of them consistently support upserting, aka MERGE.

answered 2013-10-05T15:05:35.250

For starters, you can share sessions between Tomcat instances. A Tomcat server receiving a request basically sends a duplicate of its session to all the other Tomcat servers.
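If you want to go this route, Tomcat's built-in all-to-all session replication can be enabled with a single element in server.xml (placed inside the Engine or Host element); by default it uses multicast membership and the DeltaManager, which replicates every session change to every node:

```xml
<!-- server.xml: minimal clustering setup; replicates sessions to all nodes.
     Also requires <distributable/> in the web application's web.xml. -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
```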

I can't help but think that you have some unexpressed need driving this request, but wish to only ask how to implement this without actually asking how to satisfy the need. In such circumstances, needs are not met but the request often is.

For example, instead of worrying about 1000 requests to one server, and then a rotation, a simple multiple ip address to DNS hostname configuration could distribute the requests in a round-robin fashion.

You could also coordinate your sessions against a database. Databases provide decent storage capabilities, with read consistency. With the right configuration the "next number" could simply be read by the processing node.

Finally, there are other means, leveraging distributed computing. For example, the request could be handled by an internal request relay which initiates a Paxos like protocol to guarantee that all processing nodes have the new "next" number.

All of these techniques are straightforward. However, you are quick to dismiss them because they don't seem simple enough to you. Well, perhaps you are seeking an even simpler alternative, and there's no harm in that; however, getting two or more computers to agree on some item consistently, reliably, and at the same time is a bit trickier than we would all like it to be. Feel free to start a new effort in this field, but perhaps you will only discover that there are real reasons for the extra overhead and complexity. It is not a trivial problem.

--- An update ---

You know, if you can handle the requests round-robin style, and relax the need to have them ordered between servers, and know that you will only have N servers, you could implement N different request counters.

  • Server 1 increments by N assuring that count % N == 0
  • Server 2 increments by N assuring that count % N == 1
  • ...
  • Server N-1 increments by N assuring that count % N == N-2
  • Server N increments by N assuring that count % N == N-1
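A minimal sketch of that scheme (using a 0-based server index, so server i hands out i, i+N, i+2N, ...; class and method names are illustrative):

```java
// Each server owns one residue class modulo n, so counts are unique across
// servers without any coordination between them.
public class StripedCounter {
    private final int serverIndex; // 0 .. n-1
    private final int n;           // total number of servers, fixed up front
    private long next;

    StripedCounter(int serverIndex, int n) {
        this.serverIndex = serverIndex;
        this.n = n;
        this.next = serverIndex;
    }

    long next() {
        long value = next;
        next += n;             // invariant: value % n == serverIndex
        return value;
    }

    // Recover which server handled a given count.
    static int serverOf(long count, int n) {
        return (int) (count % n);
    }

    public static void main(String[] args) {
        StripedCounter server0 = new StripedCounter(0, 3);
        StripedCounter server1 = new StripedCounter(1, 3);
        System.out.println(server0.next() + ", " + server0.next()); // 0, 3
        System.out.println(server1.next() + ", " + server1.next()); // 1, 4
    }
}
```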

Of course, the cross-server counts would probably drift out of global order over any short window, but you would quickly get some of what you want:

  • A unique count per request
  • An ordering of requests on a per-server basis
  • A count guaranteed to be unique across all servers
  • A quick way to determine which server handled the request

What you would lack:

  • A true ordering of the requests across servers
answered 2013-10-04T13:29:56.243

Here are all the choices you have, ordered from least to most effort required to set them up:

  1. Use memcached-session-manager to store sessions across the various Tomcats
  2. Use a lightweight database like SQLite and store the counter in a table/collection
  3. Use a shared file system and store the counter in a text file
  4. Use a lightweight caching provider like Redis, Memcached, Ehcache, or Hazelcast
  5. Use messaging such as JMS and keep passing the counter around
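As a sketch of option 3, a counter file on a shared filesystem can be guarded with an OS-level file lock via java.nio so that concurrent JVMs don't lose updates (names are illustrative; note that file locks are not always reliable over network filesystems like NFS):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// A shared counter stored as a single long at offset 0 of a file, protected
// by an exclusive file lock for the duration of each read-increment-write.
public class FileCounter {
    static long incrementAndGet(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                 StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE);
             FileLock lock = ch.lock()) {       // exclusive lock across processes
            ByteBuffer buf = ByteBuffer.allocate(8);
            long count = 0;
            if (ch.size() >= 8) {               // read the previous value, if any
                ch.read(buf, 0);
                buf.flip();
                count = buf.getLong();
            }
            count++;
            buf.clear();
            buf.putLong(count).flip();
            ch.write(buf, 0);                   // write back at offset 0
            return count;
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("counter", ".bin");
        System.out.println(incrementAndGet(tmp));
        System.out.println(incrementAndGet(tmp));
    }
}
```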
answered 2013-10-04T13:50:52.667