For the past few weeks I've been tuning and messing around with PostgreSQL, which I'm going to use for my next project.
My specs are:
- DigitalOcean 8-core / 16 GB SSD droplets x2 (one for the DB, one for the web server)
- CentOS 7
- PHP 5, Nginx
The things I've tried so far (rough sketches of the relevant configs follow the list):
- Pgtune
- PgBouncer & Pgpool (connection pooling & load balancing)
- Tuning php-fpm & nginx (worker_processes, worker_connections, pm.max_children, etc.)
- Linux file handle limits and socket tweaking
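On the PostgreSQL side, the pgtune-generated settings in postgresql.conf are roughly along these lines (the numbers below are illustrative for a 16 GB box, not a copy of my actual file):

    # postgresql.conf (illustrative values for a 16 GB server)
    max_connections = 200                # kept modest; pooling is supposed to happen in PgBouncer
    shared_buffers = 4GB                 # ~25% of RAM
    effective_cache_size = 12GB          # ~75% of RAM
    work_mem = 16MB
    maintenance_work_mem = 512MB
    checkpoint_completion_target = 0.9
    wal_buffers = 16MB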
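PgBouncer sits in front of the database with transaction pooling. A minimal pgbouncer.ini sketch of what I mean (the database name, user list path, and the 10.x address standing in for the DB droplet's private IP are placeholders):

    ; pgbouncer.ini (sketch, placeholder names and paths)
    [databases]
    mydb = host=10.0.0.2 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 10000
    default_pool_size = 50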
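On the web box, the php-fpm pool and nginx worker settings I've been adjusting look roughly like this (again, illustrative values rather than an exact dump of my configs):

    ; /etc/php-fpm.d/www.conf (relevant lines)
    listen = 127.0.0.1:9000          ; TCP, since it behaved better for me than the unix socket
    pm = static
    pm.max_children = 200

    # nginx.conf (relevant lines)
    worker_processes auto;
    worker_rlimit_nofile 65535;
    events {
        worker_connections 10240;
    }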
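For the file handle limits and socket tweaking, these are roughly the knobs I've been touching (illustrative values):

    # /etc/sysctl.d/99-tuning.conf
    fs.file-max = 500000
    net.core.somaxconn = 65535
    net.ipv4.tcp_max_syn_backlog = 65535
    net.ipv4.ip_local_port_range = 1024 65000

    # /etc/security/limits.conf
    nginx    soft    nofile    65535
    nginx    hard    nofile    65535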
I'm testing by using ApacheBench to hit a page that performs an INSERT. Is that a realistic way to benchmark this?
ab -n 17500 -c 1750 -r http://example.com/insert.php
So far I can only get it to handle around 1,700-2,000 concurrent connections without dropping transactions. Beyond that I start seeing "prematurely closed connection" or "resource temporarily unavailable" in the nginx error log, or "Sorry, too many clients already" returned by PostgreSQL.
I've tried both TCP/IP and a Unix socket for php-fpm, and TCP/IP seems to scale better than the Unix socket.
Can PHP do connection pooling on its side? The way I'm calling the database from the web server hasn't really changed: each request still opens its own individual connection to pgpool or PgBouncer.
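For context, this is roughly the pattern every page request uses right now, plus the persistent-connection variant (pg_pconnect) I've been wondering about. The connection string, table, and payload are placeholders, and port 6432 assumes PgBouncer:

    <?php
    // Current pattern: each request opens its own connection to PgBouncer
    // (connection string and table name are placeholders).
    $conn = pg_connect('host=10.0.0.2 port=6432 dbname=mydb user=myuser password=secret');
    if ($conn === false) {
        exit('db connection failed');
    }

    // Parameterised insert; the payload is just an example.
    pg_query_params($conn, 'INSERT INTO hits (payload) VALUES ($1)', array($_GET['value']));
    pg_close($conn);

    // Variant I'm considering: pg_pconnect() keeps the connection open in the
    // php-fpm worker, so later requests handled by that worker reuse it
    // instead of opening a new one.
    // $conn = pg_pconnect('host=10.0.0.2 port=6432 dbname=mydb user=myuser password=secret');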
My goal is to handle at least 10,000 concurrent transactions. What are the deciding factors? Is the bottleneck between the web server and the database (php-fpm), or is it PostgreSQL itself? And how do big companies running PHP web applications usually handle this kind of volume?