
I've got a table (10k rows) that stores large values in a text column. The current largest is 417 MB uncompressed (85 MB toasted). The flaw in this design is that it's not possible to stream these values (e.g. over JDBC) - anything using this column must read the whole thing into memory.

Are there any tools or shortcuts available to migrate this column to large objects, minimising the working disk space and memory required?

I'll be using lo_compat_privileges, if that makes any difference.
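(For reference: figures like the ones above can be checked with octet_length, which gives the uncompressed size, and pg_column_size, which gives the on-disk, post-TOAST size. A minimal sketch, assuming the table and column are named mytable and value as in the answer below:)

-- Uncompressed vs. on-disk (TOASTed) size of the largest value.
-- mytable and value are assumed names.
SELECT octet_length(value)   AS uncompressed_bytes,
       pg_column_size(value) AS toasted_bytes
FROM mytable
ORDER BY octet_length(value) DESC NULLS LAST
LIMIT 1;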


1 Answer


Why not just use lo_from_bytea?

Example:

SELECT 'test'::text::bytea;
   bytea    
------------
 \x74657374
(1 row)

SELECT lo_from_bytea(0, 'test'::text::bytea);
 lo_from_bytea 
---------------
        274052
(1 row)

SELECT lo_get(274052);
   lo_get   
------------
 \x74657374
(1 row)
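Note that lo_get also has a three-argument form, lo_get(loid, offset, length), so once the data lives in a large object it can be read back in chunks instead of all at once (over JDBC you would use the driver's large-object API for the same effect). Reusing the OID from above:

SELECT lo_get(274052, 0, 2);
 lo_get 
--------
 \x7465
(1 row)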

So, to actually move the data from text to an OID (preferably with a backup first), you can do the following:

ALTER TABLE mytable ADD COLUMN value_lo OID;
UPDATE mytable SET value_lo = lo_from_bytea(0, value::bytea), value = NULL;
ALTER TABLE mytable DROP COLUMN value;
ALTER TABLE mytable RENAME COLUMN value_lo TO value;
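Since the question asks to minimise working disk space, note that the single UPDATE above rewrites all rows in one transaction. A batched alternative to that UPDATE step, sketched here assuming a primary key column named id (not in the original), bounds the bloat by letting VACUUM reclaim dead row versions between passes; the DROP and RENAME then follow unchanged:

-- Repeat until it reports UPDATE 0; each pass migrates up to 100 rows.
-- id is an assumed primary key column.
UPDATE mytable
   SET value_lo = lo_from_bytea(0, value::bytea),
       value    = NULL
 WHERE id IN (SELECT id
                FROM mytable
               WHERE value_lo IS NULL
                 AND value IS NOT NULL
               LIMIT 100);

-- Reclaim the dead row versions before the next batch.
VACUUM mytable;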

And finally, since PostgreSQL is an MVCC database and does not immediately remove the old data, you should clean up afterwards with a VACUUM FULL or a CLUSTER.
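For example (a sketch: VACUUM FULL takes an ACCESS EXCLUSIVE lock and needs enough free disk for the rewritten copy of the table; the index name for CLUSTER is assumed):

VACUUM FULL mytable;
-- or, rewriting the table in index order instead:
CLUSTER mytable USING mytable_pkey;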

answered 2019-03-05T18:12:21.837