Following the advice given here, I have stored my data with ZODB, created by the following code:
# structure of the data [around 3.5 GB on disk]
bTree_container = {key1:          [[2, .44, 0], [1, .23, 0], [4, .21, 0], ... [10,000th element]],
                   key2:          [[3, .77, 0], [1, .22, 0], [6, .98, 0], ... [10,000th element]],
                   ...
                   10,000th key:  [[5, .66, 0], [2, .32, 0], [8, .66, 0], ... [10,000th element]]}
# Code used to build the above-mentioned data set
from persistent.list import PersistentList
import transaction

for Gnodes in G.nodes():                    # Gnodes iterates over 10,000 values
    Gvalue = someoperation(Gnodes)
    for i, Hnodes in enumerate(H.nodes()):  # Hnodes iterates over 10,000 values
        Hvalue = someoperation(Hnodes)
        score = someoperation(Gvalue, Hvalue)
        # build a list corresponding to every value of Gnodes (the key)
        btree_container.setdefault(Gnodes, PersistentList()).append([Hnodes, score, 0])
        if i % 5000 == 0:                   # save the data temporarily to disk
            transaction.savepoint(True)
transaction.commit()                        # flush all the data to disk
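For context, the snippet above assumes the persistent container was already created and attached to the database root. A minimal sketch of that setup, under my own assumptions (an OOBTree stored under the key 0 so that it can later be read back as root[0], and Data.fs as the storage file), would look like this:

# Sketch only: hypothetical setup of the container filled by the loop above
from ZODB.FileStorage import FileStorage
from ZODB.DB import DB
from BTrees.OOBTree import OOBTree
import transaction

storage = FileStorage('Data.fs')   # same file that the second module opens below
db = DB(storage)
connection = db.open()
root = connection.root()

root[0] = OOBTree()                # assumed container type; later read back as root[0]
btree_container = root[0]
transaction.commit()               # persist the empty container before filling it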
Now, in a separate module, I want to (1) modify the stored data and (2) sort it. Here is the code I used:
from ZODB.FileStorage import FileStorage
from ZODB.DB import DB
import transaction

storage = FileStorage('Data.fs')
db = DB(storage)
connection = db.open()
root = connection.root()
sim_sorted = root[0]

# substitute the last element in every list of every key (indicated by 0 above) with 1
# This code exhausts all the memory and never gets to the 2nd part, i.e. the sorting
for x in sim_sorted.iterkeys():
    for i, y in enumerate(sim_sorted[x]):
        y[2] = 1
        if i % 5000 == 0:
            transaction.savepoint()

# Sort all the lists associated with every key in reverse order, using the middle element as the key
[sim_sorted[keys].sort(key=lambda x: -x[1]) for keys in sim_sorted.iterkeys()]
However, the code used to edit the values eats up all the memory (it never gets to the second part, i.e. the sorting). I'm not sure exactly how this works, but I have the feeling something is seriously wrong with my code and that ZODB is pulling everything into memory, hence the problem. What is the correct way to replace the stored elements and sort them in ZODB without running into memory issues? The code is also very slow; any suggestions for speeding it up?
[Note: it is not essential for me to write these changes back to the database.]
EDIT: Adding the command connection.cacheMinimize() after the inner loop seems to improve memory usage, but after a while the whole RAM is consumed again, which puzzles me.
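For clarity, this is roughly where that call sits; a sketch of my edit only, using the same modification loop as above:

import transaction

for x in sim_sorted.iterkeys():
    for i, y in enumerate(sim_sorted[x]):
        y[2] = 1
        if i % 5000 == 0:
            transaction.savepoint()
    # added in the edit: deactivate unmodified objects in the connection's
    # pickle cache once a key's list has been processed
    connection.cacheMinimize()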