
I have the following situation. There is one central SQL Server (2008 R2, Standard Edition) and several (say 10) technological SQL Servers (2008 R2, Express). The technological servers are located near the "real" machines, and they capture data from sensors. The data are to be passed to the central SQL Server to be processed. All machines are located in the same domain. Service Broker was chosen to send the data to the central server.

I have tried the standard MSDN tutorial Completing a Conversation Between Instances, which shows how to create users without login, the certificates, etc. To summarize, there are quite a lot of things to be parameterized. Manual setup is no problem for two servers, but...

  1. What is the usual approach when working with more servers? If more servers are to be added in the future, is it reasonable to put the parameters (computer names, ports, etc.) into some configuration table, or to create stored procedures with hardwired constants... so that the communication channels for newly added SQL Servers can be (re)constructed?

  2. All the SQL Servers are located within the same domain. Is it reasonable to simplify the deployment by not creating the users and the certificates? I have found Remus Rusanu's answer to Service broker with only domain account on how to do that. Would you use that approach in a real environment? What are the pros and cons?

Thanks, Petr


1 Answer


When we designed Service Broker we tried to make a very clear distinction between design time (when you write the code) and deployment time (when the code is actually used in production). We took very explicit measures to make it possible to completely reconfigure the deployment without changing a single line of application code. Things like the message type, contract, and service names used are considered design time. How you write your activated procedure and the RECEIVE statement, and the code that issues the BEGIN DIALOG/SEND verbs, are all, again, considered design time.
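
To make the distinction concrete, here is a minimal T-SQL sketch of the design-time side, using hypothetical message type, contract, service, and queue names (the initiator service is assumed to already exist):

```sql
-- Design-time objects: these are what the application code references
-- and they do not change when the deployment topology changes.
CREATE MESSAGE TYPE [//Plant/SensorReading] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//Plant/SensorContract]
    ([//Plant/SensorReading] SENT BY INITIATOR);
CREATE QUEUE SensorTargetQueue;
CREATE SERVICE [//Plant/CentralCollector]
    ON QUEUE SensorTargetQueue ([//Plant/SensorContract]);

-- Application code (also design time): open a dialog and send a reading.
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//Plant/TechServer]          -- assumed to exist locally
    TO SERVICE '//Plant/CentralCollector'
    ON CONTRACT [//Plant/SensorContract];
SEND ON CONVERSATION @h
    MESSAGE TYPE [//Plant/SensorReading] (N'<reading value="42"/>');
```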

Deployment-time concerns are things like routing information (which changes when the physical topology of the network changes), certificates used for security (they expire and require replacement), permissions, endpoint ports, etc.
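
A sketch of the deployment-time side, with hypothetical host names and the conventional port 4022; none of this is visible to the application code:

```sql
-- Deployment-time configuration: endpoint and routes.
-- Run once per instance; values change when the topology changes.
CREATE ENDPOINT BrokerEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 4022)
    FOR SERVICE_BROKER (AUTHENTICATION = WINDOWS, ENCRYPTION = REQUIRED);

-- On each technological server: where the central service lives.
CREATE ROUTE CentralRoute
    WITH SERVICE_NAME = '//Plant/CentralCollector',
         ADDRESS = 'TCP://central-host.mydomain.local:4022';

-- On the central server: a return route per technological server.
CREATE ROUTE TechServer01Route
    WITH SERVICE_NAME = '//Plant/TechServer',
         ADDRESS = 'TCP://tech01.mydomain.local:4022';
```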

A special case is dialog security and remote service bindings: it is a mixture of design time (the application code specifies WITH ENCRYPTION = ON) and deployment time (a remote service binding is present). The idea is that if an application requires dialog security it must specify ENCRYPTION = ON (which is the default if not specified), and then the administrator, at deployment time, must configure dialog security. Or, if the application does not explicitly require dialog security, it can specify ENCRYPTION = OFF, and then it is up to the administrator, at deployment time, to configure dialog security if he so chooses.
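
A sketch of how the two halves of dialog security meet, again with hypothetical names:

```sql
-- Design time: the application states its requirement.
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//Plant/TechServer]
    TO SERVICE '//Plant/CentralCollector'
    ON CONTRACT [//Plant/SensorContract]
    WITH ENCRYPTION = ON;   -- ON is also the default when omitted

-- Deployment time: the administrator binds the remote service to a local
-- user that owns a certificate trusted by the remote side.
CREATE REMOTE SERVICE BINDING CentralBinding
    TO SERVICE '//Plant/CentralCollector'
    WITH USER = CentralCollectorUser;
```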

Everything related to endpoint security (transport security) is considered deployment time.
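
For your second question, a sketch of transport security done with domain accounts only (hypothetical service account name). With AUTHENTICATION = WINDOWS on the endpoint, no transport certificates are needed; each remote instance's service account just needs permission to connect:

```sql
-- On the receiving instance: allow the remote instance's domain service
-- account to connect to the Service Broker endpoint.
CREATE LOGIN [MYDOMAIN\sqlsvc-tech01] FROM WINDOWS;
GRANT CONNECT ON ENDPOINT::BrokerEndpoint TO [MYDOMAIN\sqlsvc-tech01];
```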

Finally, a great deal of care was put into making the programming experience and the available API behave identically whether your pair of services is local within one database, in two databases on the same instance, or on two separate instances (i.e. making sure coupling does not creep in).

I provided this explanation in the hope that it sheds some light and puts things in better perspective. To answer your questions: you certainly can write an application that is completely agnostic to the actual physical layout of your deployment, including the names, locations, listening ports, and number of hosts involved. Basically, the application code is all about services conversing with services and has zero knowledge of routes, certificates, endpoints, and so on. But that is all from the point of view of application development.

In practice, the deployment is often automated and is an application in itself. Even if, as is often the case, the same person or team codes both applications (or both sides of the same application), it is healthy to think of them as two completely separate tasks (or apps). If you can make this separation and keep the code clean, you're nearly there. The 'app' code has no knowledge whatsoever about where the service it talks to is, or where the message it processes was sent from. The configuration code (or app) is all about knowing exactly where a service resides, how a route should be configured, which endpoints are listening on which ports, etc.
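
For your first question, one way to keep the configuration code separate is a small table of hosts plus a procedure that rebuilds the routes from it. This is only an illustrative sketch: all names are hypothetical, and dropping stale routes is omitted for brevity.

```sql
-- Topology lives in data, not in the application code.
CREATE TABLE dbo.BrokerHosts
(
    ServiceName NVARCHAR(256) NOT NULL PRIMARY KEY,
    HostName    SYSNAME       NOT NULL,
    Port        INT           NOT NULL
);
GO
CREATE PROCEDURE dbo.RebuildRoutes
AS
BEGIN
    DECLARE @svc NVARCHAR(256), @addr NVARCHAR(512), @sql NVARCHAR(MAX);
    DECLARE c CURSOR LOCAL FAST_FORWARD FOR
        SELECT ServiceName,
               N'TCP://' + HostName + N':' + CAST(Port AS NVARCHAR(10))
        FROM dbo.BrokerHosts;
    OPEN c;
    FETCH NEXT FROM c INTO @svc, @addr;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Build one CREATE ROUTE statement per configured host.
        SET @sql = N'CREATE ROUTE ' + QUOTENAME(N'Route_' + @svc) +
                   N' WITH SERVICE_NAME = ''' + REPLACE(@svc, '''', '''''') +
                   N''', ADDRESS = ''' + @addr + N'''';
        EXEC (@sql);
        FETCH NEXT FROM c INTO @svc, @addr;
    END
    CLOSE c;
    DEALLOCATE c;
END
```

Adding an eleventh technological server then becomes an INSERT into dbo.BrokerHosts followed by EXEC dbo.RebuildRoutes, rather than another hand-edited script.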

As a side note, there is a feature called Service Broker Dynamic Routing, which allows a configuration in which SSB itself is used to configure a large deployed site (i.e. SSB can contact a central repository to locate routing info), but I would not recommend it.

answered 2012-07-25T14:21:33.973