
Why does the following code use 1.2GB of memory to execute? I was expecting memory usage to be relatively flat regardless of the number passed to generate_series, instead it is steadily increasing. Please - tell me I'm doing something wrong!

if (!PQsendQuery(conn, "select generate_series(1, 10000000)"))
    exit(1);

PGresult *res;
int i, value = 0;
while ((res = PQgetResult(conn)) != NULL) {
    for (i = 0; i < PQntuples(res); i++) {
        value = atoi(PQgetvalue(res, i, 0));
    }
    PQclear(res);
}

printf("%d\n", value);
PQfinish(conn);

I've put the full source code for this example on pastebin.


2 Answers


It appears that by default libpq buffers the entire result set in client memory, rather than reading it in chunks.

In PostgreSQL 9.2 there is a way to change this behaviour: see single-row mode.

I've tried this out: adding a call to PQsetSingleRowMode(conn) directly after PQsendQuery() drops memory usage down to a few MB. Problem solved!
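A minimal sketch of the single-row-mode loop, assuming `conn` is an already-established PGconn* (connection setup and full error handling omitted). In this mode each row arrives as its own one-row PGresult with status PGRES_SINGLE_TUPLE, and the stream ends with a final zero-row PGRES_TUPLES_OK result:

```c
/* Sketch: single-row mode with libpq (PostgreSQL 9.2+).
   Assumes `conn` is an established PGconn*. */
if (!PQsendQuery(conn, "select generate_series(1, 10000000)"))
    exit(1);
if (!PQsetSingleRowMode(conn))  /* must be called right after PQsendQuery */
    exit(1);

PGresult *res;
int value = 0;
while ((res = PQgetResult(conn)) != NULL) {
    /* Each row comes back as a separate one-row result... */
    if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
        value = atoi(PQgetvalue(res, 0, 0));
    /* ...and the stream ends with an empty PGRES_TUPLES_OK result. */
    PQclear(res);
}
```

Since only one row is held in memory at a time, usage stays flat no matter how many rows the query produces.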

answered 2013-03-30T11:49:40.890

The canonical way of dealing with potentially large result sets is to declare a CURSOR for the query and issue successive FETCH calls to retrieve the rows in chunks.

That's also what psql does when the FETCH_COUNT variable is set.
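A sketch of the cursor approach with libpq, again assuming `conn` is an established PGconn* (error handling abbreviated; the cursor name `c` and the batch size of 10000 are arbitrary choices). Cursors only exist inside a transaction, hence the surrounding BEGIN/COMMIT:

```c
/* Sketch: fetching a large result set in chunks via a cursor.
   Assumes `conn` is an established PGconn*. */
PGresult *res;
int value = 0;

PQclear(PQexec(conn, "BEGIN"));  /* cursors live only inside a transaction */
PQclear(PQexec(conn, "DECLARE c NO SCROLL CURSOR FOR "
                     "select generate_series(1, 10000000)"));

for (;;) {
    res = PQexec(conn, "FETCH 10000 FROM c");  /* one chunk per round trip */
    if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0) {
        PQclear(res);
        break;  /* no more rows, or an error occurred */
    }
    for (int i = 0; i < PQntuples(res); i++)
        value = atoi(PQgetvalue(res, i, 0));
    PQclear(res);
}

PQclear(PQexec(conn, "CLOSE c"));
PQclear(PQexec(conn, "COMMIT"));
```

Unlike single-row mode, this works on any PostgreSQL version, at the cost of one extra round trip per chunk.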

answered 2013-03-30T12:37:07.953