
I have a dataset with 5,000,000 rows, and I want to add a column called "embeddings" to it.

dataset = dataset.add_column('embeddings', embeddings)

The variable embeddings is a numpy memmap array of size (5000000, 512).

But I get this error:

ArrowInvalid                              Traceback (most recent call last)
----> 1 dataset = dataset.add_column('embeddings', embeddings)

/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
    486         }
    487         # apply actual function
--> 488         out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
    489         datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
    490         # re-apply format to the output

/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
    404         # Call actual function
    405
--> 406         out = func(self, *args, **kwargs)
    407
    408         # Update fingerprint of in-place transforms + update in-place history of transforms

/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_column(self, name, column, new_fingerprint)
   3346             :class:`Dataset`
   3347         """
-> 3348         column_table = InMemoryTable.from_pydict({name: column})
   3349         # Concatenate tables horizontally
   3350         table = ConcatenationTable.from_tables([self._data, column_table], axis=1)

/opt/conda/lib/python3.8/site-packages/datasets/table.py in from_pydict(cls, *args, **kwargs)
    367     @classmethod
    368     def from_pydict(cls, *args, **kwargs):
--> 369         return cls(pa.Table.from_pydict(*args, **kwargs))
    370
    371     @inject_arrow_table_documentation(pa.Table.from_batches)

/opt/conda/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()

/opt/conda/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib._from_pydict()

/opt/conda/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.asarray()

/opt/conda/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.array()

/opt/conda/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib._ndarray_to_array()

/opt/conda/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()

ArrowInvalid: only handle 1-dimensional arrays

Since the embeddings array does not fit in RAM, how can I solve this in an efficient way?
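For reference, the same error can be reproduced with pyarrow alone; a minimal sketch (using a small stand-in array, not the real data):

import numpy as np
import pyarrow as pa

# Small stand-in for the real (5000000, 512) memmap array.
embeddings = np.random.rand(10, 512).astype(np.float32)

# pyarrow only converts 1-dimensional numpy arrays, so this raises
# ArrowInvalid: only handle 1-dimensional arrays
try:
    pa.array(embeddings)
except pa.ArrowInvalid as e:
    print(e)

# A single row (a 1-d array) converts fine:
print(pa.array(embeddings[0]))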


2 Answers


The issue here is that you're trying to add a column, but the data you are passing is a 2-d numpy array. Arrow (the library used to represent datasets under the hood) only supports 1-d numpy arrays.

You can try adding each column of your 2-d numpy array one at a time:

for i, column in enumerate(embeddings.T):
    ds = ds.add_column('embeddings_' + str(i), column)
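As a rough, self-contained sketch of that approach (using a tiny synthetic dataset and array rather than the original 5M-row data):

import numpy as np
from datasets import Dataset

# Toy stand-ins for the real dataset and the (5000000, 512) memmap array.
ds = Dataset.from_dict({"id": list(range(100))})
embeddings = np.random.rand(100, 8).astype(np.float32)

# Each column of the transposed array is a 1-d array, which Arrow accepts.
for i, column in enumerate(embeddings.T):
    ds = ds.add_column('embeddings_' + str(i), column)

print(ds)  # features: ['id', 'embeddings_0', ..., 'embeddings_7'], num_rows: 100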

How can I solve this, possibly in an efficient way, since the embeddings array does not fit in RAM?

I don't think there's a workaround for the memory issue. Hugging Face datasets are backed by an Arrow table, which has to fit in memory.

Answered 2021-11-23T10:02:31.900
from datasets import load_dataset

ds = load_dataset("cosmos_qa", split="train")

new_column = ["foo"] * len(ds)
ds = ds.add_column("new_column", new_column)

You get a dataset:

Dataset({
    features: ['id', 'context', 'question', 'answer0', 'answer1', 'answer2', 'answer3', 'label', 'new_column'],
    num_rows: 25262
})
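Applying the same pattern to the embeddings question, each row's vector could in principle be stored as a nested list in a single column; a sketch, assuming the values fit in memory once converted to Python lists (which they may not for 5,000,000 x 512 entries):

import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"id": list(range(100))})

# Toy stand-in for the real (5000000, 512) memmap array.
embeddings = np.random.rand(100, 512).astype(np.float32)

# A Python list of lists becomes an Arrow list column, so the whole
# vector for each row lives in one 'embeddings' column.
ds = ds.add_column("embeddings", embeddings.tolist())
print(ds.features["embeddings"])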
Answered 2021-11-22T12:36:04.177