I have a table with more than 15M rows in PostgreSQL. Users can save these rows (let's call them items) to their library, and when they request their library, the system loads their saved items.
The query in PostgreSQL looks like this:
SELECT item.id, item.name
FROM items JOIN library ON (library.item_id = item.id)
WHERE library.user_id = 1
The table is already indexed and denormalized, so I don't need any other JOINs.
When a user has many items in their library (e.g. 1k items), the query time increases noticeably; for 1k items it takes about 7 seconds. My goal is to reduce the query time for large libraries.
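For reference, the index I would expect to matter most here is a composite one on the join columns, so that one user's library can be read with a single index range scan. A minimal sketch, using the table and column names from the schema below and assuming no equivalent index already exists (the index name is illustrative):

CREATE INDEX user_library_user_item_idx
    ON user_library (user_id, recording_id);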
I already use Solr for full-text search, and I tried a similar query there, like ?q=id:1 OR id:100 OR id:345, but I'm not sure whether that is efficient in Solr.
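If Solr stays in the picture, an alternative I am considering is its terms query parser instead of a long OR chain; as far as I know it is available from Solr 4.10 onward and avoids the parsing overhead of hundreds of OR clauses (a syntax sketch with illustrative values):

?q={!terms f=id}1,100,345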
I'd like to hear about alternative ways to query this dataset. My system's bottleneck seems to be disk speed. Should I buy a server with more than 15 GB of RAM and raise PostgreSQL's shared_buffers setting, should I try MongoDB or another in-memory database, or should I build a cluster and replicate the data across PostgreSQL nodes?
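If I take the memory route, the settings I would actually raise are shared_buffers and effective_cache_size. A sketch with illustrative values for a dedicated 16 GB server (ALTER SYSTEM requires PostgreSQL 9.4+, and shared_buffers only takes effect after a restart):

ALTER SYSTEM SET shared_buffers = '4GB';         -- often sized around 25% of RAM
ALTER SYSTEM SET effective_cache_size = '12GB';  -- a planner hint about total available cache (including the OS cache), not an allocation
SELECT pg_reload_conf();                         -- applies reloadable settings; shared_buffers still needs a restart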
items:
    Column    |       Type
--------------+-------------------
 id           | text
 mbid         | uuid
 name         | character varying
 length       | integer
 track_no     | integer
 artist       | text[]
 artist_name  | text
 release      | text
 release_name | character varying
 rank         | numeric
user_library:
    Column    |            Type             |                          Modifiers
--------------+-----------------------------+--------------------------------------------------------------
 user_id      | integer                     | not null
 recording_id | character varying(32)       |
 timestamp    | timestamp without time zone | default now()
 id           | integer                     | not null default nextval('user_library_idx_pk'::regclass) (primary key)
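Since disk speed looks like the bottleneck, another idea I am weighing is clustering user_library on user_id, so that one user's rows sit physically contiguous on disk and the join does mostly sequential I/O. A sketch (the index name is illustrative; CLUSTER takes an exclusive lock and is a one-time rewrite, so it would have to be repeated as data changes):

CREATE INDEX user_library_user_id_idx ON user_library (user_id);
CLUSTER user_library USING user_library_user_id_idx;  -- rewrites the table in index order
ANALYZE user_library;                                 -- refresh planner statistics after the rewrite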
-------------------
explain analyze
SELECT recording.id,name,track_no,artist,artist_name,release,release_name
FROM recording JOIN user_library ON (user_library.recording_id = recording.id)
WHERE user_library.user_id = 1;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..10745.33 rows=1036539 width=134) (actual time=0.168..57.663 rows=1000 loops=1)
   Join Filter: (recording.id = (recording_id)::text)
   ->  Seq Scan on user_library  (cost=0.00..231.51 rows=1000 width=19) (actual time=0.027..3.297 rows=1000 loops=1)  (my note: user_library has relatively few rows here, so PostgreSQL chose a sequential scan over the index to save resources)
         Filter: (user_id = 1)
   ->  Append  (cost=0.00..10.49 rows=2 width=165) (actual time=0.045..0.047 rows=1 loops=1000)
         ->  Seq Scan on recording  (cost=0.00..0.00 rows=1 width=196) (actual time=0.001..0.001 rows=0 loops=1000)
         ->  Index Scan using de_recording3_table_pkey on de_recording recording  (cost=0.00..10.49 rows=1 width=134) (actual time=0.040..0.042 rows=1 loops=1000)
               Index Cond: (id = (user_library.recording_id)::text)
 Total runtime: 58.589 ms
(9 rows)
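What puzzles me is that the plan above reports about 58 ms while the application sees about 7 seconds for 1k items, so the gap is presumably cold cache and/or the transfer of rows to the client. To separate buffer-cache hits from real disk reads, I plan to re-run the query with the BUFFERS option (available since PostgreSQL 9.0):

EXPLAIN (ANALYZE, BUFFERS)
SELECT recording.id, name, track_no, artist, artist_name, release, release_name
FROM recording JOIN user_library ON (user_library.recording_id = recording.id)
WHERE user_library.user_id = 1;
-- in the output, "shared read" counts blocks fetched from disk; "shared hit" counts blocks found in shared_buffers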