If we serialize a RandomForest model with joblib on a 64-bit machine and then try to unpickle it on a 32-bit machine, the following exception is raised:
ValueError: Buffer dtype mismatch, expected 'SIZE_t' but got 'long long'
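(A minimal check of what I believe is going on, assuming that SIZE_t in sklearn's Cython tree code resolves to the platform-dependent pointer-sized integer np.intp: the fitted tree buffers are written out as 64-bit integers but the 32-bit build expects 32-bit ones.)

import numpy as np

# Pointer-sized integer dtype, which I believe is what SIZE_t maps to:
# prints int64 on a 64-bit interpreter and int32 on a 32-bit one, so the
# pickled buffers cannot match across the two machines.
print(np.dtype(np.intp))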
This has been asked before: Scikits-Learn RandomForrest training on 64bit python wont open on 32bit python. However, that question has gone unanswered since 2014.
Sample code for training the model (on the 64-bit machine):
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV  # sklearn.grid_search on older versions
from sklearn.externals import joblib

modelPath = "../"          # destination path for the dumped model (truncated here)
featureVec = ...           # training feature matrix
labelVec = ...             # training labels
param_dict = ...           # hyperparameter search space for the random search

forest = RandomForestClassifier()
randomSearch = RandomizedSearchCV(forest, param_distributions=param_dict, cv=10,
                                  scoring='accuracy', n_iter=100, refit=True)
randomSearch.fit(X=featureVec, y=labelVec)
model = randomSearch.best_estimator_
joblib.dump(model, modelPath)
Sample code for unpickling on the 32-bit machine:
from sklearn.externals import joblib

modelPath = "../"
model = joblib.load(modelPath)  # ValueError thrown here
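As a sanity check (assuming the word-size difference really is the root cause), the bitness of each interpreter can be confirmed on both machines:

import platform
import struct

# Expected: ('64bit', ...) on the training machine, ('32bit', ...) here.
print(platform.architecture())
print(struct.calcsize("P") * 8)  # 64 vs 32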
My question is: if we have to train on a 64-bit machine and ship the model to a 32-bit machine for prediction, is there any general workaround?
Edit: I tried using pickle directly instead of joblib and hit the same error. It is raised from inside the core pickle library in both cases (joblib and pickle):
File "/usr/lib/python2.7/pickle.py", line 1378, in load
return Unpickler(file).load()
File "/usr/lib/python2.7/pickle.py", line 858, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1133, in load_reduce
value = func(*args)
File "sklearn/tree/_tree.pyx", line 585, in sklearn.tree._tree.Tree.__cinit__ (sklearn/tree/_tree.c:7286)
ValueError: Buffer dtype mismatch, expected 'SIZE_t' but got 'long long'