
I have a text classification dataset that I store as parquet (to save disk space) and read with dask, but when I try to split it into train and test sets with dask_ml.model_selection.train_test_split, it fails:

import dask.dataframe as dd
from dask_ml.model_selection import train_test_split

ddf = dd.read_parquet('/storage/data/cleaned')
y = ddf['category'].values
X = ddf.drop('category', axis=1).values
train, test = train_test_split(X, y, test_size=0.2)

This raises: TypeError: Cannot operate on Dask array with unknown chunk sizes.

Thanks for your help.
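For context, the error happens because .values leaves the row-chunk sizes of the resulting arrays unknown: dask cannot know each partition's length without reading the data. A minimal sketch of one way to materialize them, assuming a dask version that ships Array.compute_chunk_sizes():

import dask.dataframe as dd
from dask_ml.model_selection import train_test_split

ddf = dd.read_parquet('/storage/data/cleaned')
y = ddf['category'].values
X = ddf.drop('category', axis=1).values

# compute_chunk_sizes() walks the data once to fill in the unknown
# chunk lengths in place, so train_test_split can partition the arrays
X = X.compute_chunk_sizes()
y = y.compute_chunk_sizes()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)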


1 Answer


Here is what I did for now:

ddf = dd.read_parquet('/storage/data/cleaned')
# lengths=True computes each partition's length up front,
# so the resulting dask array has known chunk sizes
ddf = ddf.to_dask_array(lengths=True)
train, test = train_test_split(ddf, test_size=0.2)

This creates a dask.array with a known shape: dask.array<array, shape=(3937987, 2), dtype=object, chunksize=(49701, 2)>
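If you still need separate feature and label arrays after the split, one option is to slice the combined arrays. This is a sketch that assumes 'category' ends up as the last of the two columns; check ddf.columns before relying on the index:

# assumes the label ('category') is the last column of the combined array
X_train, y_train = train[:, :-1], train[:, -1]
X_test, y_test = test[:, :-1], test[:, -1]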
