Following the example, I am trying to train a tokenizer and a T5 model for Persian. I am running the following code on Google Colab Pro:

import datasets

from t5_tokenizer_model import SentencePieceUnigramTokenizer


vocab_size = 32_000
input_sentence_size = None  # None = use the full dataset; setting this to 100_000 works

# Initialize a dataset
dataset = datasets.load_dataset("oscar", name="unshuffled_deduplicated_fa", split="train")

tokenizer = SentencePieceUnigramTokenizer(unk_token="<unk>", eos_token="</s>", pad_token="<pad>")

print("len dataset:", len(dataset))

# Build an iterator over this dataset
def batch_iterator(input_sentence_size=None):
    if input_sentence_size is None:
        input_sentence_size = len(dataset)
    batch_length = 100
    for i in range(0, input_sentence_size, batch_length):
        yield dataset[i: i + batch_length]["text"]


# Train tokenizer
tokenizer.train_from_iterator(
    iterator=batch_iterator(input_sentence_size=input_sentence_size),
    vocab_size=vocab_size,
    show_progress=True,
)

# Save files to disk
tokenizer.save("/content/drive/MyDrive/Pouramini/tokenizer.json")

It gets stuck in train_from_iterator because the dataset is very large (input_sentence_size is about 8M sentences). How can I split the dataset into chunks, run the code on each chunk, and then merge the results into the tokenizer output?

1 Answer

Have you tried using an iterable (streaming) dataset?

dataset = datasets.load_dataset("oscar", name="unshuffled_deduplicated_fa", split="train", streaming=True)

tokenizer = SentencePieceUnigramTokenizer(unk_token="<unk>", eos_token="</s>", pad_token="<pad>")

def batch_iterator(dataset):
    for i in dataset:
        yield i["text"]
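
Continuing from the snippet above, the training call from your question can then be reused unchanged. A minimal sketch, with vocab_size and the save path taken from your own code (adjust as needed):

tokenizer.train_from_iterator(
    iterator=batch_iterator(dataset),  # the streaming iterator defined above
    vocab_size=32_000,                 # same value as in the question
    show_progress=True,
)

# Save the trained tokenizer to disk (path taken from the question)
tokenizer.save("/content/drive/MyDrive/Pouramini/tokenizer.json")
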
answered 2021-12-17T13:05:47.550