There are a fair number of valid choices here, ranging from continuing your current approach, to adding something like PgBouncer, to using a singleton. These have different considerations, so which is better may depend on your specific circumstances. However, I will cover the tradeoffs between the latter two options here.
First, some basics
PostgreSQL, like all other RDBMSs, has to coordinate access to data between various processes. In general, the more parallelism you have on the data-access side, the more time you spend waiting on locks. One key, then, is to manage parallelism.
On one hand, a single connection to PostgreSQL will never scale beyond a single core of your database server and a single hard drive spindle; in fact, it will not even fully saturate those. On the other hand, unless you have 50 cores and very fast disks, 100 connections will spend most of their time waiting on other processes. You need to understand that tradeoff before getting started.
Option 1: Singleton
A singleton database connection is the simplest approach. Note that a PostgreSQL connection can only run one query at a time, so you are essentially serializing all database access through a single interface. This does not work well with transactions, and it places hard limits on what you can expect from your database manager. However, on a lower-end server you will get far better performance than if you allow, say, 100 concurrent connections.
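As a rough illustration, a lazy singleton can be sketched as below. The factory argument and the stand-in `fake_connect` are my own placeholders; in real code the factory would be something like `lambda: psycopg2.connect(dsn)` (psycopg2 being an assumption, since the question does not name a driver).

```python
import threading

_lock = threading.Lock()
_conn = None

def get_connection(connect):
    """Return the single shared connection, creating it on first use.

    `connect` is a zero-argument factory; with a real driver this might be
    `lambda: psycopg2.connect(dsn)` (psycopg2 is an assumption here).
    """
    global _conn
    with _lock:                        # serialize lazy creation across threads
        if _conn is None:
            _conn = connect()
        return _conn

# Demo with a stand-in factory so the sketch runs without a database.
calls = []
def fake_connect():
    calls.append(1)                    # count how many connections we "open"
    return object()

c1 = get_connection(fake_connect)
c2 = get_connection(fake_connect)
assert c1 is c2 and len(calls) == 1    # one connection, shared by all callers
```

Because every query goes through the one connection, the serialization described above happens whether you want it or not.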
Option 2: Connection pooling, perhaps with PgBouncer
A second possibility is to connect to a connection pooler instead of the database, and have the pooler manage the connections. This is the most flexible approach because it gives you a toolkit for addressing things like transactions, if you need to, and it allows better control over parallelism than you get from a singleton. With a connection pooler you can specify how many connections to use, whether to pool per transaction or per session, and the like. In general, if you are looking for scalability this is probably the place to start.
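For concreteness, with PgBouncer those knobs live in `pgbouncer.ini`; a minimal sketch might look like the following (the database name and file paths are placeholders of mine, not anything from the question):

```ini
; Minimal pgbouncer.ini sketch -- "mydb" and the paths are placeholders.
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432              ; the application connects here, not to 5432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction         ; or "session" if you rely on session state
default_pool_size = 20          ; server connections per user/database pair
max_client_conn = 100           ; extra clients queue rather than hit PostgreSQL
```

The application then points its connection string at port 6432, and PgBouncer caps how many real server connections exist regardless of how many clients show up.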
The major disadvantage is that this gives you one more piece of software to manage, and therefore a bit of a complexity cost. However, it also gives you a layer of abstraction around your database connections that you can use to manage performance.
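To make the abstraction concrete, here is a toy fixed-size pool in the same spirit (illustrative only, nowhere near what PgBouncer actually does; the factory is again a placeholder for a real driver call such as `psycopg2.connect`):

```python
import queue

class SimplePool:
    """Tiny fixed-size connection pool sketch, for illustration only.

    `connect` is a zero-argument factory; real code would pass something
    like `lambda: psycopg2.connect(dsn)` (psycopg2 is an assumption).
    """
    def __init__(self, connect, size):
        self._free = queue.Queue()
        for _ in range(size):          # open a fixed number of connections up front
            self._free.put(connect())

    def acquire(self):
        return self._free.get()        # blocks when all connections are in use

    def release(self, conn):
        self._free.put(conn)           # hand the connection back for reuse

# Demo with stand-in "connections" so the sketch runs without a database.
pool = SimplePool(object, 2)
a = pool.acquire()
b = pool.acquire()
pool.release(a)
c = pool.acquire()                     # reuses the released connection
assert c is a
```

The point of the sketch is the cap: no matter how many callers there are, at most `size` connections ever hit the database, which is exactly the parallelism control discussed above.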