I am looking for a way to speed up loading a file like this:
The data contains about 1 million lines, tab-separated ("\t") and UTF-8 encoded. Parsing the full file with the code below takes about 9 seconds; I would like to get that down to roughly one second!
import codecs
import sys

def load(filename):
    features = []
    with codecs.open(filename, 'rb', 'utf-8') as f:
        previous = ()  # empty tuple, so the ordering check compares like types
        for n, s in enumerate(f):
            splitted = tuple(s.rstrip().split("\t"))
            if len(splitted) != 2:
                sys.exit("wrong format!")
            if previous >= splitted:
                sys.exit("unordered feature")
            previous = splitted
            features.append(splitted)
    return features
I wonder whether some binary data format could speed this up, or whether I could benefit from NumPy or some other library for faster loading. Maybe you can also advise me on some other speed bottleneck?
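One commonly suggested approach (a sketch, not from the original post; it assumes the parsed list fits in memory, and the cache file name is hypothetical) is to parse the text once and cache the resulting tuples in a binary pickle:

import cPickle as pickle

def save_cache(features, cachefile):
    # serialize the parsed list of (unicode, unicode) tuples once
    with open(cachefile, 'wb') as f:
        pickle.dump(features, f, pickle.HIGHEST_PROTOCOL)

def load_cache(cachefile):
    # reload without re-splitting or re-validating the text
    with open(cachefile, 'rb') as f:
        return pickle.load(f)

Whether unpickling actually beats re-parsing depends on the data, so it is worth timing both.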
EDIT: So I tried some of your ideas, thanks! By the way, I really do need the (string, string) tuples in one huge list... Here are the results: I gained 50% of the time :) Next I am going to look into NumPy binary data, since I noticed that another huge file loads really fast...
import codecs
import datetime

def load0(filename):
    # baseline: read all lines, no parsing at all
    with codecs.open(filename, 'rb', 'utf-8') as f:
        return f.readlines()

def load1(filename):
    # list comprehension over readlines()
    with codecs.open(filename, 'rb', 'utf-8') as f:
        return [tuple(x.rstrip().split("\t")) for x in f.readlines()]

def load3(filename):
    # explicit loop, iterating the file object line by line
    features = []
    with codecs.open(filename, 'rb', 'utf-8') as f:
        for n, s in enumerate(f):
            splitted = tuple(s.rstrip().split("\t"))
            features.append(splitted)
    return features

def load4(filename):
    # generator variant of load3
    with codecs.open(filename, 'rb', 'utf-8') as f:
        for s in f:
            yield tuple(s.rstrip().split("\t"))
# myfile = path to the 1M-line data file

a = datetime.datetime.now()
r0 = load0(myfile)
b = datetime.datetime.now()
print "f.readlines(): %s" % (b-a)

a = datetime.datetime.now()
r1 = load1(myfile)
b = datetime.datetime.now()
print """[tuple(x.rstrip().split("\\t")) for x in f.readlines()]: %s""" % (b-a)

a = datetime.datetime.now()
r3 = load3(myfile)
b = datetime.datetime.now()
print """load3: %s""" % (b-a)
if r1 == r3: print "OK: speeded and similars!"

a = datetime.datetime.now()
r4 = [x for x in load4(myfile)]
b = datetime.datetime.now()
print """load4: %s""" % (b-a)
if r4 == r3: print "OK: speeded and similars!"
Results:
f.readlines(): 0:00:00.208000
[tuple(x.rstrip().split("\t")) for x in f.readlines()]: 0:00:02.310000
load3: 0:00:07.883000
OK: speeded and similars!
load4: 0:00:07.943000
OK: speeded and similars!
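The gap between the raw f.readlines() (0.2 s) and the parsing loops suggests that the per-line overhead of the codecs file object dominates. A minimal sketch (not from the original post) that decodes the whole file in one pass via io.open and skips the format checks:

import io

def load_bulk(filename):
    # one read and one decode, then split in plain Python;
    # avoids iterating line by line over the codecs wrapper
    with io.open(filename, 'r', encoding='utf-8') as f:
        return [tuple(line.split('\t')) for line in f.read().splitlines()]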
Strangely enough, I noticed that a run can take almost twice as long as the previous one (though not every time):
>>> ================================ RESTART ================================
>>>
f.readlines(): 0:00:00.220000
[tuple(x.rstrip().split("\t")) for x in f.readlines()]: 0:00:02.479000
load3: 0:00:08.288000
OK: speeded and similars!
>>> ================================ RESTART ================================
>>>
f.readlines(): 0:00:00.279000
[tuple(x.rstrip().split("\t")) for x in f.readlines()]: 0:00:04.983000
load3: 0:00:10.404000
OK: speeded and similars!
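Variance like this usually comes from a cold versus warm disk cache. The standard timeit module gives steadier numbers by repeating the measurement (a sketch, not from the original post):

import timeit

# best of three runs: the minimum is the least distorted by
# disk-cache warm-up and scheduler noise
print min(timeit.repeat(lambda: load3(myfile), number=1, repeat=3))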
LATEST EDIT: Well, I tried switching to numpy.load ... and it looks very strange to me. Starting from my "normal" file with its 1022860 strings and 10 KB, after doing numpy.save(numpy.array(load1(myfile))) I ended up with 895 MB! Then, reloading that with numpy.load(), I get these timings on consecutive runs:
>>> ================================ RESTART ================================
loading: 0:00:11.422000 done.
>>> ================================ RESTART ================================
loading: 0:00:00.759000 done.
Could it be that NumPy does some memory work that avoids reloading in the future?
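Two likely explanations (assumptions, not confirmed in the original post): numpy.array on a list of string tuples builds a fixed-width unicode dtype padded to the longest string, which is why the saved file balloons to 895 MB; and the fast second load is almost certainly the operating system's page cache, not anything NumPy remembers between runs. If the array is kept anyway, numpy.load can memory-map it instead of reading it eagerly (the file name here is hypothetical):

import numpy

# memory-map the saved array: pages are read on demand,
# so the call returns almost immediately
features = numpy.load("features.npy", mmap_mode='r')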