
Hypothetical (but simpler) scenario:

  • I have many orders in my system.
  • I have external triggers that affect those orders (e.g. webhooks). They may occur in parallel, and are handled by different instances in my cluster.
  • In the scope of a single order, I would like to make sure that those events are processed sequentially to avoid race conditions, version conflicts, etc.
  • Events for different orders can (and should) be processed in parallel.

I'm currently toying with the idea of leveraging RabbitMQ with a setup similar to this:

  • use a queue for each order (create on the fly)
  • if an event occurs, put it in that queue
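Roughly, a minimal sketch of what I have in mind (assuming Python with pika; the queue naming scheme and the 60-second idle TTL are just placeholders):

```python
import json
import pika

# One connection/channel per publisher process; real code would reuse these.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

def publish_order_event(order_id: str, event: dict) -> None:
    queue_name = f"order-{order_id}"  # one queue per order, created on the fly
    # Declaring is idempotent; "x-expires" lets RabbitMQ delete the queue after
    # 60 seconds of inactivity, which keeps these per-order queues short-lived.
    channel.queue_declare(queue=queue_name, durable=True,
                          arguments={"x-expires": 60_000})
    # Publish via the default exchange, which routes by queue name.
    channel.basic_publish(exchange="", routing_key=queue_name,
                          body=json.dumps(event).encode())
```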

Those queues would be short-lived, so I wouldn't end up with millions of them, but it should scale anyway (let's say lower one-digit thousands if the project grows substantially). The question is whether that's an absolute anti-pattern as far as RabbitMQ (or similar) systems go, or if there are better solutions to ensure sequential execution anyway.

Thanks!


1 Answer


In my opinion, creating ephemeral queues might not be a great idea, as there will be considerable overhead in creating and deleting queues. The focus should be on message consumption. I can think of the following solutions:

  • You can limit the number of queues with a publishing strategy: for a fixed set of N queues, route every event for an order to queue number orderId mod N, so all events for the same order land on the same queue (see the publisher sketch after this list). That gives you parallel throughput as well as a finite number of queues, but there is some additional publisher logic you have to handle.

  • The same logic can be moved to the consumer side by using a single pub-sub style queue; the onus then lies on each consumer to filter out the orderIds it is not responsible for.

  • If you are happy to explore other technologies, you can look into Kafka as well, where you can use orderId as the partition key and use multiple partitions to gain parallel throughput (see the Kafka sketch after this list).
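A minimal sketch of the first option, publisher-side routing to a fixed set of queues (assuming Python with pika; the queue count of 8 and the naming scheme are illustrative, not part of the original answer):

```python
import json
import pika

NUM_QUEUES = 8  # fixed, finite set of queues; tune to your throughput

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare the fixed queues up front instead of creating one per order.
for i in range(NUM_QUEUES):
    channel.queue_declare(queue=f"orders-{i}", durable=True)

def publish_order_event(order_id: int, event: dict) -> None:
    # All events for the same order map to the same queue, so a single
    # consumer per queue sees them in order; different orders still spread
    # across the queues for parallelism.
    queue_name = f"orders-{order_id % NUM_QUEUES}"
    channel.basic_publish(exchange="", routing_key=queue_name,
                          body=json.dumps(event).encode())
```

And a sketch of the Kafka option (assuming kafka-python; the topic name order-events is a placeholder). Kafka's default partitioner hashes the message key, so every event for the same orderId lands on the same partition and is consumed in order:

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)

def publish_order_event(order_id: str, event: dict) -> None:
    # Same key -> same partition -> per-order ordering; different orderIds
    # spread across partitions for parallel throughput.
    producer.send("order-events", key=order_id.encode(), value=event)

publish_order_event("order-42", {"type": "payment_received"})  # example usage
producer.flush()  # ensure buffered messages are sent before shutdown
```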
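Either way, the consuming side should keep a single consumer per queue (or per partition) so that the per-order ordering the publisher established is actually preserved end to end.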

answered 2021-12-14 at 12:38