
For the past few weeks I've been tuning and experimenting with PostgreSQL, which I'm going to use for my next project.

My specs are:

  • Two DigitalOcean droplets, 8 cores / 16 GB / SSD each (one for the DB, one for the web server)
  • CentOS 7
  • PHP 5, Nginx

The things that I've tried:

  1. Pgtune

  2. PgBouncer & Pgpool (connection pooling & load balancing)

  3. Tuning php-fpm & nginx (worker_processes, worker_connections, pm.max_children, etc.)

  4. Linux file handle limits and socket tweaking.

I'm testing it by using ApacheBench to call a webpage that performs an insert. Is this a practical approach?

ab -n 17500 -c 1750 -r http://example.com/insert.php
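
The insert.php endpoint itself isn't shown in the question; a minimal sketch of what such a benchmark endpoint typically looks like might be the following (the connection details, table and column are made up for illustration):

    <?php
    // Hypothetical insert.php used as the ApacheBench target: each request
    // opens its own connection, runs a single INSERT and exits, which is
    // exactly the pattern that makes a connection pooler worthwhile.
    $pdo = new PDO('pgsql:host=10.0.0.2;port=5432;dbname=app', 'appuser', 'secret');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $pdo->prepare('INSERT INTO hits (created_at) VALUES (now())')->execute();
    echo 'ok';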

So far I can only get it to handle 1,700-2,000 concurrent connections without dropping any transactions (the failures are usually "prematurely closed connection" or "resource temporarily unavailable" in the nginx error log, or "Sorry, too many clients already" returned by PostgreSQL).

I tried both TCP/IP and Unix sockets for php-fpm, and TCP/IP seems to scale better than the Unix socket.

Can PHP use connection pooling? The way I'm calling the DB from the web server is still the same (making a lot of individual connections to pgpool or PgBouncer).
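
For what it's worth, PHP has no built-in cross-process pool; the closest it offers is persistent connections, which keep one connection open per php-fpm worker between requests. A rough sketch (host, credentials and database name are placeholders):

    <?php
    // Sketch: PDO::ATTR_PERSISTENT keeps the PostgreSQL connection open after
    // the script ends, so the next request handled by the same php-fpm worker
    // reuses it instead of paying for a new connection handshake.
    // Note: this gives at most one connection per worker, not a shared pool.
    $pdo = new PDO(
        'pgsql:host=10.0.0.2;port=5432;dbname=app',
        'appuser',
        'secret',
        [PDO::ATTR_PERSISTENT => true]
    );

A pool that multiplexes many client connections onto a small set of server connections still has to live outside PHP, e.g. in PgBouncer.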

My goal is to handle at least 10,000 concurrent transactions. What are the deciding factors? Is the bottleneck between the web server and the DB (php-fpm), or in PostgreSQL itself? How do big companies (with PHP web applications) usually handle this kind of volume?


1 Answer


The best load test is one that uses your real-world workload; the closer your load test comes to it, the better.

If you have many concurrent requests, you have to use connection pooling, and PgBouncer is the standard answer.
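
On the application side, going through PgBouncer usually just means pointing the connection string at PgBouncer's listen address (6432 by default) instead of PostgreSQL's 5432. A minimal sketch, with placeholder host and credentials:

    <?php
    // Sketch: connect to PgBouncer instead of PostgreSQL directly; PgBouncer
    // then multiplexes these short-lived client connections onto a small,
    // fixed set of server connections.
    // With pool_mode = transaction, avoid session-level state (SET, LISTEN,
    // server-side prepared statements that outlive a transaction).
    $pdo = new PDO('pgsql:host=10.0.0.2;port=6432;dbname=app', 'appuser', 'secret');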

Performance tuning is impossible within the scope of an answer, and in fact this question may well be closed as too broad, but I'll give you some general pointers:

The goal is to find the bottleneck, i.e. the resource where your system hits its limit. Narrow it down: is it the application, the web server or the database? Once you know which component is limiting you, find the individual limiting resource. Is it I/O? CPU time? Memory? The time it takes to establish a database connection? Locks?
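
One rough way to narrow this down from the PHP side is to time the connection handshake separately from the query itself (connection string and query are placeholders):

    <?php
    // Rough sketch: log connect time and query time separately, so you can
    // see whether requests spend their budget on establishing the connection
    // or on the INSERT itself.
    $t0 = microtime(true);
    $conn = pg_connect('host=10.0.0.2 port=5432 dbname=app user=appuser password=secret');
    $t1 = microtime(true);

    pg_query($conn, 'INSERT INTO hits (created_at) VALUES (now())');
    $t2 = microtime(true);

    error_log(sprintf('connect: %.1f ms, query: %.1f ms',
        ($t1 - $t0) * 1000, ($t2 - $t1) * 1000));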

One important rule is not to turn knobs at random before you know where the problem lies. That can leave you with a misconfigured system. Form a theory, try a fix, and if it doesn't have the desired effect, reset the setting to its previous value.

I don't understand your setup: first you say you have one machine for the database and one for the application, then you say you tried connecting via a local socket.

answered 2017-03-27 at 06:53