
I've inherited a system which, for simplicity's sake, consists of one WCF service that multiple clients connect to. The service maintains a data cache (among other work), and each client also maintains a local cache of the data in order to save a call to the service where possible. The clients are told when to refresh via a Tibco RV message that the WCF service sends out on a timer; each client listens for this message and then calls the service to get the latest data to cache locally.
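
For context, the per-client refresh flow looks roughly like the sketch below. The contract, endpoint, and type names (IDataService, DataServiceEndpoint, DataSnapshot) are simplified placeholders rather than the real ones, and the Tibco RV listener wiring is omitted:

```csharp
using System.ServiceModel;

// Placeholder contract -- the real service exposes more than this.
[ServiceContract]
public interface IDataService
{
    [OperationContract]
    DataSnapshot GetLatestData();
}

public class DataSnapshot { /* domain data omitted */ }

public class ClientCache
{
    // "DataServiceEndpoint" is a placeholder endpoint name from app.config.
    private readonly ChannelFactory<IDataService> _factory =
        new ChannelFactory<IDataService>("DataServiceEndpoint");

    private volatile DataSnapshot _cached;

    // Invoked by the Tibco RV listener whenever the timed refresh message arrives.
    public void OnRefreshMessage()
    {
        IDataService proxy = _factory.CreateChannel();
        try
        {
            _cached = proxy.GetLatestData();   // one round trip per notification
        }
        finally
        {
            ((IClientChannel)proxy).Close();
        }
    }

    // Reads are served locally, saving a call to the service.
    public DataSnapshot Current => _cached;
}
```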

This all seems to work well, provided there is a single WCF instance. I want to be able to run multiple instances in a load-balanced environment, but I don't want the clients receiving a flurry of messages, one from each service instance, telling them to refresh their cache.

I've thought briefly about a broker intermediary that could consolidate the messages, but that falls down because the messages can arrive at completely different times, depending on when each service instance was started.
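
For reference, the kind of consolidation I had in mind is roughly the following: the broker subscribes to the per-instance refresh messages and forwards at most one notification to the clients per time window. The IMessageBus interface and the subject names here are hypothetical stand-ins for the actual Tibco RV plumbing:

```csharp
using System;

// Hypothetical abstraction over the messaging layer.
public interface IMessageBus
{
    void Subscribe(string subject, Action onMessage);
    void Publish(string subject);
}

public class RefreshConsolidator
{
    private readonly IMessageBus _bus;
    private readonly TimeSpan _window;
    private readonly object _gate = new object();
    private DateTime _lastForwarded = DateTime.MinValue;

    public RefreshConsolidator(IMessageBus bus, TimeSpan window)
    {
        _bus = bus;
        _window = window;
        _bus.Subscribe("SERVICE.CACHE.REFRESH", OnInstanceRefresh);
    }

    private void OnInstanceRefresh()
    {
        lock (_gate)
        {
            // Forward at most one notification per window, no matter how
            // many service instances fired their timers in the meantime.
            if (DateTime.UtcNow - _lastForwarded < _window)
                return;
            _lastForwarded = DateTime.UtcNow;
        }
        _bus.Publish("CLIENTS.CACHE.REFRESH");
    }
}
```

The problem is that because each instance's timer starts when that instance starts, the messages never line up into a single window to begin with, and the broker itself becomes a single point of failure.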

In my ideal world I'd have something like a MongoDB server cluster; the clients would pull data directly from there, which takes the WCF cache out of the picture. The service could reload the Mongo store as needed, but then every client read incurs network usage, and that extra traffic defeats the original goal of the local client cache.
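
That variant would look something like the sketch below, assuming the official MongoDB .NET driver; the connection string, database, collection, and record type names are placeholders:

```csharp
using System.Collections.Generic;
using MongoDB.Driver;

// Placeholder record type for whatever the cache actually holds.
public class CacheRecord
{
    public string Id { get; set; }
    public string Payload { get; set; }
}

public class SharedCacheReader
{
    private readonly IMongoCollection<CacheRecord> _collection;

    public SharedCacheReader(string connectionString)
    {
        // The connection string would point at a replica set, e.g.
        // "mongodb://host1,host2,host3/?replicaSet=rs0", so the store
        // itself isn't a single point of failure.
        var client = new MongoClient(connectionString);
        _collection = client.GetDatabase("cache").GetCollection<CacheRecord>("records");
    }

    // Every read is now a network hop to Mongo, which is the trade-off
    // against the original local-cache design.
    public List<CacheRecord> GetAll()
    {
        return _collection.Find(Builders<CacheRecord>.Filter.Empty).ToList();
    }
}
```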

Can anyone think of a less impactful way of scaling out the service instances with this setup, without introducing a single point of failure?
