I am using Python 2.7, pyodbc and MySQL 5.5 on Windows.

I have a query which returns millions of rows, and I would like to process it in chunks using the fetchmany function.

Here is a portion of the code:

import pyodbc
connection = pyodbc.connect('Driver={MySQL ODBC 5.1 Driver};Server=127.0.0.1;Port=3306;Database=XXXX;User=root;Password=;Option=3;')

cursor_1 = connection.cursor()
strSQLStatement = 'SELECT x1, x2 from X'

cursor_1.execute(strSQLStatement)
# the error occurs here  

x1 = cursor_1.fetchmany(10)
print x1
connection.close()

My problem:

  1. I get the error "MySQL client ran out of memory".

  2. I guess this is because cursor_1.execute tries to read everything into memory, and I tried the following (one by one) but to no avail:

    1. In the user interface (ODBC – admin tools) I ticked "Don't cache results of forward-only cursors"
    2. connection.query("SET GLOBAL query_cache_size = 40000")

My question:

  1. Does pyodbc have the ability to run the query and serve the results only on demand?

  2. The MySQL manual suggests invoking mysql with the --quick option. Can this also be done when not using the command line?

Thanks for your help.

P.S.: Suggestions for an alternative MySQL module are also welcome, but I use Portable Python so my choice is limited.


2 Answers


Use a LIMIT clause in the query string.

http://dev.mysql.com/doc/refman/5.5/en/select.html

By using

SELECT x1, x2 from X LIMIT 0,1000 

you will only get the first 1k records; then by running

SELECT x1, x2 from X LIMIT 1000,1000 

you will get the next 1k records.

Loop appropriately to get all the records. (I don't know Python, so I can't help with that part here :()
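
A minimal sketch of such a paging loop in Python 2 with pyodbc, assuming the connection string and the X table from the question (the chunk size of 1000 is arbitrary):

import pyodbc

connection = pyodbc.connect(
    'Driver={MySQL ODBC 5.1 Driver};Server=127.0.0.1;Port=3306;'
    'Database=XXXX;User=root;Password=;Option=3;')
cursor_1 = connection.cursor()

chunk_size = 1000  # arbitrary page size
offset = 0
while True:
    # The LIMIT values are integers generated by the loop itself, so building
    # the statement with string formatting is safe here. For stable paging an
    # ORDER BY on a unique column is advisable.
    cursor_1.execute('SELECT x1, x2 from X LIMIT %d, %d' % (offset, chunk_size))
    rows = cursor_1.fetchall()
    if not rows:
        break
    for row in rows:
        print row.x1, row.x2  # process each row here
    offset += chunk_size

connection.close()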

answered 2012-08-23T05:48:29.333

Using MySQLdb with SSCursor will solve your problem.

Unfortunately the documentation is not great, but it is mentioned in the user guide, and you can find an example in this stackoverflow question.
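
A minimal sketch of that approach, assuming MySQLdb is available and the same server, credentials and table as in the question:

import MySQLdb
import MySQLdb.cursors

# SSCursor is a server-side (unbuffered) cursor: rows are streamed from the
# server instead of being loaded into client memory all at once.
connection = MySQLdb.connect(host='127.0.0.1', port=3306, user='root',
                             passwd='', db='XXXX',
                             cursorclass=MySQLdb.cursors.SSCursor)
cursor = connection.cursor()
cursor.execute('SELECT x1, x2 from X')

while True:
    rows = cursor.fetchmany(1000)  # fetch the next chunk of rows
    if not rows:
        break
    for x1, x2 in rows:
        print x1, x2  # process each row here

cursor.close()
connection.close()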

answered 2012-08-23T06:45:35.933