I have a large dataset and I am trying to learn Gabor-like filters from the images. When the dataset gets too large, I run into a memory error. This is the code I have so far:
import numpy
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.decomposition import FastICA
LIMIT = 100000  # cap on how many patches are kept; placeholder value

def extract_dictionary(image, patches_size=(16, 16), projection_dimensions=25, previous_dictionary=None):
    """
    Gets a higher-dimensional ICA projection of the image patches.
    """
    # Extract all patches and flatten each one into a row vector, keeping at most LIMIT of them.
    patches = extract_patches_2d(image, patches_size)
    patches = numpy.reshape(patches, (patches.shape[0], -1))[:LIMIT]
    # Normalize: zero mean and unit variance per pixel position.
    patches = patches.astype(numpy.float64)
    patches -= patches.mean(axis=0)
    patches /= numpy.std(patches, axis=0)
    #dico = MiniBatchDictionaryLearning(n_atoms=projection_dimensions, alpha=1, n_iter=500)
    #fit = dico.fit(patches)
    ica = FastICA(n_components=projection_dimensions)
    ica.fit(patches)
    return ica
When LIMIT is large, I get a memory error. Is there an online (incremental) alternative to ICA in scikit-learn or some other Python package?
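To make clearer what I mean by "online", here is a rough sketch of the kind of batch-wise fitting I am hoping exists for ICA. It uses MiniBatchDictionaryLearning.partial_fit (the estimator I have commented out above) only as a stand-in; the name extract_dictionary_incremental, the batch_size value, and the n_components keyword (n_atoms in older scikit-learn releases) are placeholders of mine, not something I have verified as the right replacement for ICA:

import numpy
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

def extract_dictionary_incremental(images, patch_size=(16, 16),
                                   n_components=25, batch_size=1000):
    """Fit a patch dictionary batch by batch instead of all at once."""
    dico = MiniBatchDictionaryLearning(n_components=n_components, alpha=1)
    for image in images:
        # Extract and flatten the patches of one image at a time.
        patches = extract_patches_2d(image, patch_size)
        patches = numpy.reshape(patches, (patches.shape[0], -1)).astype(numpy.float64)
        patches -= patches.mean(axis=0)
        patches /= patches.std(axis=0) + 1e-8
        # Feed the patches in small chunks so only one chunk is fitted at a time.
        for start in range(0, patches.shape[0], batch_size):
            dico.partial_fit(patches[start:start + batch_size])
    return dico

Processing one image at a time and feeding its patches in chunks keeps only a small batch in memory at any moment; that is the behaviour I would like to get from an ICA implementation as well.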