I need to get a matrix of TF-IDF features from text stored in a column of a huge dataframe, loaded from a CSV file that cannot fit in memory. I am trying to iterate over the dataframe in chunks, but the generator object it returns is not the variable type the TfidfVectorizer method expects.
I guess I am doing something wrong while writing the generator method ChunkIterator shown below.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
# Works only for a dataset small enough to fit in memory
csvfilename = 'data_elements.csv'
df = pd.read_csv(csvfilename)
vectorizer = TfidfVectorizer()
corpus = df['text_column'].values
vectorizer.fit_transform(corpus)
print(vectorizer.get_feature_names())
# Trying to use a generator to iterate over the huge dataframe in chunks
def ChunkIterator(filename):
    for chunk in pd.read_csv(filename, chunksize=1):
        yield chunk['text_column'].values
corpus = ChunkIterator(csvfilename)
vectorizer.fit_transform(corpus)
print(vectorizer.get_feature_names())
Can anyone suggest how to modify the ChunkIterator method above, or any other way of doing this with the dataframe? I would like to avoid creating a separate text file for every row of the dataframe. Below is some dummy CSV data to recreate the scenario.
id,text_column,tags
001, This is the first document .,['sports','entertainment']
002, This document is the second document .,"['politics', 'asia']"
003, And this is the third one .,['europe','nato']
004, Is this the first document ?,"['sports', 'soccer']"
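For reference, one possible adjustment (a minimal sketch, not necessarily the only approach) is to have the generator yield one raw document string per row instead of a NumPy array per chunk, since TfidfVectorizer's fit_transform expects an iterable of strings. This assumes the column is named text_column and the CSV has a header row:

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

def ChunkIterator(filename, chunksize=1000):
    # Read the CSV in chunks and yield one document string per row,
    # so the whole file never has to sit in memory at once.
    for chunk in pd.read_csv(filename, chunksize=chunksize):
        for text in chunk['text_column'].astype(str):
            yield text

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(ChunkIterator('data_elements.csv'))
print(vectorizer.get_feature_names_out())  # get_feature_names() on scikit-learn < 1.0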