
I am doing text analysis with TextBlob's NaiveBayesClassifier, trained on topics of my choosing.

The training data is fairly large (about 3,000 entries).

Although I can get results, I cannot save the trained classifier for future use without calling the function again and waiting hours for the processing to finish.

I tried pickling it as follows:

ab = NaiveBayesClassifier(data)

import pickle

object = ab
file = open('f.obj','w') #tried to use 'a' in place of 'w' ie. append
pickle.dump(object,file)

The error I get is shown below:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\pickle.py", line 1370, in dump
    Pickler(file, protocol).dump(obj)
  File "C:\Python27\lib\pickle.py", line 224, in dump
    self.save(obj)
  File "C:\Python27\lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python27\lib\pickle.py", line 419, in save_reduce
    save(state)
  File "C:\Python27\lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\lib\pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "C:\Python27\lib\pickle.py", line 663, in _batch_setitems
    save(v)
  File "C:\Python27\lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\lib\pickle.py", line 600, in save_list
    self._batch_appends(iter(obj))
  File "C:\Python27\lib\pickle.py", line 615, in _batch_appends
    save(x)
  File "C:\Python27\lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\lib\pickle.py", line 562, in save_tuple
    save(element)
  File "C:\Python27\lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\lib\pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "C:\Python27\lib\pickle.py", line 662, in _batch_setitems
    save(k)
  File "C:\Python27\lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\lib\pickle.py", line 501, in save_unicode
    self.memoize(obj)
  File "C:\Python27\lib\pickle.py", line 247, in memoize
    self.memo[id(obj)] = memo_len, obj
MemoryError

I also tried sPickle, but it too resulted in the following errors:

#saving object with function sPickle.s_dump
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\site-packages\sPickle.py", line 22, in s_dump
    for elt in iterable_to_pickle:
TypeError: 'NaiveBayesClassifier' object is not iterable

#saving object with function sPickle.s_dump_elt
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\site-packages\sPickle.py", line 28, in s_dump_elt
    pickled_elt_str = dumps(elt_to_pickle)
MemoryError: out of memory

Can anyone tell me what I have to do to save the object?

Or, failing that, how to save the classifier's results for future use?


3 Answers


I solved this myself.

First, use a 64-bit build of Python (available for every version from 2.6 through 3.4).

The 64-bit build resolves the memory problems, because a 32-bit process is limited to roughly 2 GB of address space no matter how much RAM the machine has.

Use cPickle:

import cPickle as pickle

Second, open your file as

file = open('file_name.pickle','wb') #same as what Robert said in the above post

Write the object to the file:

pickle.dump(object,file)

Your object will be dumped into the file. But do check how much memory your object uses: pickling itself also consumes memory, so keep at least 25% of your RAM free for the object being pickled.

In my case, my laptop has 8 GB of RAM, so there was only enough memory for one such object.

(My classifier is quite heavy: 3,000 string instances, each a sentence of about 15-30 words, across 22 sentiments/topics.)

So if your laptop deadlocks (or, more generally, stops responding), you may have to shut it down, start over, and try with a smaller number of instances or fewer sentiments/topics.

cPickle is very useful here because it is much faster than any other pickling module, and I recommend using it.
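Putting the answer's steps together, a minimal round-trip sketch (the trained classifier is replaced by a small stand-in dict, since any picklable object works the same way; the guarded import falls back to plain pickle on Python 3, where cPickle was merged into it):

```python
try:
    import cPickle as pickle  # Python 2: the fast C implementation
except ImportError:
    import pickle             # Python 3: the C implementation is built in

ab = {"training_size": 3000, "topics": 22}  # stand-in for NaiveBayesClassifier(data)

f = open('f.obj', 'wb')   # 'wb', not 'w': pickle writes binary data
pickle.dump(ab, f)
f.close()

f = open('f.obj', 'rb')   # 'rb' must match the binary mode used for dumping
restored = pickle.load(f)
f.close()

print(restored == ab)  # True
```

The same pattern applies unchanged to the real classifier object; only the size (and hence the memory headroom needed) differs.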

answered 2014-07-02T07:26:25.913

You need to use "wb" for binary format:

file = open('f.obj','wb')
answered 2014-06-26T13:13:40.483

For Python > 3.0, cPickle no longer seems to exist, but the default pickle does the job; just make sure you use a protocol suitable for your Python installation. For Python > 3.4 use this:

import pickle
with open(r"blobClassifier.pickle",'wb') as file:
    pickle.dump(cl_Title, file, protocol=pickle.HIGHEST_PROTOCOL,fix_imports=False)
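Loading the classifier back is the mirror image of the dump. A sketch (a small stand-in dict replaces the trained `cl_Title`, since any picklable object round-trips the same way):

```python
import pickle

# Stand-in for the trained cl_Title classifier from the answer above.
cl_Title = {"model": "NaiveBayes", "labels": ["pos", "neg"]}

with open("blobClassifier.pickle", "wb") as file:
    pickle.dump(cl_Title, file, protocol=pickle.HIGHEST_PROTOCOL, fix_imports=False)

# 'rb' must match the 'wb' used when dumping.
with open("blobClassifier.pickle", "rb") as file:
    restored = pickle.load(file)

print(restored == cl_Title)  # True
```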
answered 2019-05-27T14:02:32.373