
I have a Node.js application that uses PostgreSQL through node-postgres. I have started getting the error "sorry, too many clients already", which makes me wonder whether I am creating too many client objects, or whether I should be disconnecting them manually. Currently I create a new client object for every function call that touches the database. For example:

var db = {
  checkDetails : function() {
    var client = new pg.Client(conString);
    ...
  },

  amendDetails : function() {
    var client = new pg.Client(conString);
    ...
  },
...
}

Is this correct, or should I be creating a single client object somewhere else? And should I be calling client.end()? The callback-style examples I have been following don't include it, so I assumed it was unnecessary.
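For reference, this is what I understand calling client.end() in this per-call style would look like (just a sketch with a placeholder connection string, table and query, not my actual code):

var pg = require('pg');
var conString = 'postgres://user:password@localhost/mydb'; // placeholder connection string

function checkDetails(id, callback) {
  var client = new pg.Client(conString);
  client.connect(function (err) {
    if (err) { return callback(err); }
    client.query('SELECT * FROM details WHERE id = $1', [id], function (err, result) {
      // end the connection whether or not the query succeeded,
      // so the server-side slot is released
      client.end();
      callback(err, result ? result.rows : undefined);
    });
  });
}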


1 Answer


There are a fair number of valid choices here, ranging from continuing your current approach, to adding something like PgBouncer, to using a singleton. Each has different considerations, so which is better may depend on your specific circumstances. However, I will outline the tradeoffs between the latter two options here.

First, some basics

PostgreSQL, like all other RDBMSs, has to coordinate access to data between various processes. In general, the more parallelism you have on the data access side, the more time you spend waiting on locks. One key, then, is to manage parallelism.

On one hand, a single connection to PostgreSQL will never scale beyond a single core of your database server and a single hard drive spindle; in fact, it will never even fully utilize those. On the other hand, unless you have 50 cores and very fast hard drives, 100 connections will spend most of their time waiting on other processes. You need to understand that tradeoff before getting started.

Option 1: Singleton

A singleton database connection is the simplest approach. Note that a PostgreSQL connection can only run one query at a time, so you are essentially serializing all database access through a single interface. This does not work well with transactions, and it places hard limits on what you can expect from your database. However, on a lower-end server you will get far better performance than if you allow, say, 100 concurrent connections.
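For illustration, a singleton could be a small module that connects one client and exports it for every other module to require. This is only a sketch; the connection string and query are placeholders:

// db.js -- create and connect a single shared pg client
var pg = require('pg');

var client = new pg.Client('postgres://user:password@localhost/mydb'); // placeholder
client.connect(function (err) {
  if (err) { console.error('could not connect to PostgreSQL', err); }
});

module.exports = client;

// elsewhere in the app -- every caller shares the same connection,
// so all queries are effectively serialized:
// var db = require('./db');
// db.query('SELECT count(*) FROM details', function (err, result) { ... });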

Option 2: Connection pooling, perhaps with PgBouncer

A second possibility is to connect to a connection pooler instead of the database, and have the pooler manage the connections. This is the most flexible approach: it gives you a toolkit for addressing things like transactions if you need to, and it allows better control over parallelism than a singleton does. With a connection pooler you can specify how many connections to use, whether to pool per transaction or per session, and so on. In general, if you are looking for scalability this is probably the place to start.

The major disadvantage is that this gives you one more piece of software to manage, and therefore some added complexity. However, it also gives you a layer of abstraction around your database connections that you can use to manage performance.
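For illustration, here is roughly what the application side could look like with pooling. This sketch assumes a node-postgres version that ships pg.Pool; the connection details, pool size, table and query are placeholders. Pointing the same code at PgBouncer is mostly a matter of changing the host and port in the connection string (PgBouncer listens on 6432 by default).

// db.js -- shared connection pool
var pg = require('pg');

var pool = new pg.Pool({
  connectionString: 'postgres://user:password@localhost:5432/mydb', // or PgBouncer, e.g. port 6432
  max: 10 // cap on concurrent server connections; tune to your cores/spindles
});

module.exports = {
  checkDetails: function (id, callback) {
    // pool.query checks a client out, runs the query, and returns it to the pool
    pool.query('SELECT * FROM details WHERE id = $1', [id], function (err, result) {
      callback(err, result ? result.rows : undefined);
    });
  }
};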

answered 2013-06-02T02:54:39.603