
It looks like the previous paradigm of declaring Fields, Examples and using BucketIterator is deprecated and will be moved to legacy in 0.8. However, I can't seem to find an example of the new paradigm that doesn't use Field. Can anyone point me to an up-to-date example?

Deprecation reference:

https://github.com/pytorch/text/releases


4 Answers


It took me a little while to find the solution myself. For the prebuilt datasets, the new paradigm looks like this:

from torchtext.experimental.datasets import AG_NEWS
train, test = AG_NEWS(ngrams=3)
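
A quick sanity check of the result (each item is a (label, text) pair, which the collate_fn below also relies on):

print(len(train))
label, txt = train[0]  # first (label, text) pair
print(label, txt)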

Or something like this for a custom-built dataset:

from torch.utils.data import DataLoader

def collate_fn(batch):
    # Each dataset item is a (label, text) pair; regroup into two lists.
    texts, labels = [], []
    for label, txt in batch:
        texts.append(txt)
        labels.append(label)
    return texts, labels

dataloader = DataLoader(train, batch_size=8, collate_fn=collate_fn)
for idx, (texts, labels) in enumerate(dataloader):
    print(idx, texts, labels)

I copied the examples from the source code.
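
Since the texts in a batch usually have different lengths, the collate_fn is also a natural place to pad them. A minimal sketch, assuming the texts are already LongTensors of token ids and that 0 is the padding index (both are assumptions, not part of the original example):

import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

def collate_padded(batch):
    # Regroup into labels and texts, then pad the texts to the longest
    # sequence in the batch (padding value 0 is an assumption here).
    labels = torch.tensor([label for label, txt in batch])
    texts = pad_sequence([txt for label, txt in batch],
                         batch_first=True, padding_value=0)
    return texts, labels

dataloader = DataLoader(train, batch_size=8, collate_fn=collate_padded)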

answered 2020-11-14T00:04:15.293

While browsing the GitHub repository, I stumbled upon a README in the legacy directory that is not documented in the official torchtext docs. The README links to a GitHub issue explaining the rationale behind the change, as well as a migration guide.

If you just want to keep your existing code running with torchtext 0.9.0, where the deprecated classes have been moved to the legacy module, you have to adjust your imports:

# from torchtext.data import Field, TabularDataset
from torchtext.legacy.data import Field, TabularDataset

Alternatively, you can import the whole torchtext.legacy module as torchtext, as the README suggests:

import torchtext.legacy as torchtext
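
With either import in place, pre-0.9 idioms keep working unchanged; here is a minimal sketch (the file name, format and field layout are placeholders for your own data):

from torchtext.legacy.data import Field, TabularDataset, BucketIterator

TEXT = Field(tokenize='basic_english', lower=True)
LABEL = Field(sequential=False)

# 'train.csv' and the column order are placeholders.
dataset = TabularDataset(path='train.csv', format='csv',
                         fields=[('label', LABEL), ('text', TEXT)])
TEXT.build_vocab(dataset)
LABEL.build_vocab(dataset)

iterator = BucketIterator(dataset, batch_size=32,
                          sort_key=lambda ex: len(ex.text))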
answered 2021-03-13T16:03:25.640

There is a post about this. Instead of the deprecated Field and BucketIterator classes, it uses TextClassificationDataset together with a collator and other preprocessing. It reads a txt file and builds a dataset, followed by a model. The post links to a complete working notebook: https://mmg10.github.io/pytorch/2021/02/16/text_torch.html. Note that you need the 'dev' (or nightly) build of PyTorch for it to work.

From the above link:

After tokenization and building the vocabulary, you can build the dataset as follows:

import torch
# Helper transforms from torchtext's experimental API (these paths are
# from torchtext 0.9's experimental modules):
from torchtext.experimental.functional import (
    sequential_transforms, vocab_func, totensor
)
from torchtext.experimental.datasets.text_classification import (
    TextClassificationDataset
)

def data_to_dataset(data, tokenizer, vocab):
    data = [(text, label) for (text, label) in data]

    # Tokenize, map tokens to vocab ids, then convert to a LongTensor.
    text_transform = sequential_transforms(tokenizer.tokenize,
                                           vocab_func(vocab),
                                           totensor(dtype=torch.long))
    # Map string labels '1'/'0' to ints, then convert to a LongTensor.
    label_transform = sequential_transforms(
        lambda x: 1 if x == '1' else (0 if x == '0' else x),
        totensor(dtype=torch.long))

    transforms = (text_transform, label_transform)
    dataset = TextClassificationDataset(data, vocab, transforms)
    return dataset
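
A minimal usage sketch (the toy data and the SimpleTokenizer wrapper are illustrative; the function above only needs an object exposing a .tokenize method, a Vocab, and string labels '0'/'1'):

from collections import Counter
from torchtext.vocab import Vocab
from torchtext.data.utils import get_tokenizer

class SimpleTokenizer:
    # Illustrative wrapper: data_to_dataset expects an object with
    # a .tokenize method.
    def __init__(self):
        self._tok = get_tokenizer('basic_english')
    def tokenize(self, text):
        return self._tok(text)

raw_data = [('great movie', '1'), ('terrible movie', '0')]  # toy data

tokenizer = SimpleTokenizer()
counter = Counter()
for text, label in raw_data:
    counter.update(tokenizer.tokenize(text))
vocab = Vocab(counter, min_freq=1)

dataset = data_to_dataset(raw_data, tokenizer, vocab)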

The collator is as follows (wrapped here in an illustratively named Collator class, since the excerpt shows only its methods):

import torch
import torch.nn as nn

class Collator:  # illustrative name; the excerpt omits the class header
    def __init__(self, pad_idx):
        self.pad_idx = pad_idx

    def collate(self, batch):
        # Pad every text in the batch to the longest sequence length.
        text, labels = zip(*batch)
        labels = torch.LongTensor(labels)
        text = nn.utils.rnn.pad_sequence(text, padding_value=self.pad_idx,
                                         batch_first=True)
        return text, labels

Then you can build a typical torch.utils.data.DataLoader using the collate_fn argument.
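
A minimal sketch of that step (assuming the Collator class above and a vocab with a '<pad>' token, which the torchtext Vocab provides by default):

from torch.utils.data import DataLoader

collator = Collator(pad_idx=vocab['<pad>'])
loader = DataLoader(dataset, batch_size=32, shuffle=True,
                    collate_fn=collator.collate)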

answered 2021-03-20T16:10:55.657

So it looks like the pipeline could be something like this:

import torchtext as TT
import torch
from collections import Counter
from torch.utils.data import DataLoader

# read the data

with open('text_data.txt','r') as f:
    data = f.readlines()
with open('labels.txt', 'r') as f:
    labels = f.readlines()


tokenizer = TT.data.utils.get_tokenizer('spacy', 'en') # can remove 'spacy' and use a simple built-in tokenizer
train_iter = zip(labels, data)
counter = Counter()

for (label, line) in train_iter:
    counter.update(tokenizer(line))
    
vocab = TT.vocab.Vocab(counter, min_freq=1)

text_pipeline = lambda x: [vocab[token] for token in tokenizer(x)]
# this is data-specific - adapt for your data
label_pipeline = lambda x: 1 if x == 'positive\n' else 0

class TextData(torch.utils.data.Dataset):
    '''
    very basic dataset for processing text data
    '''
    def __init__(self, labels, text):
        super(TextData, self).__init__()
        self.labels = labels
        self.text = text
        
    def __getitem__(self, index):
        return self.labels[index], self.text[index]
    
    def __len__(self):
        return len(self.labels)


def tokenize_batch(batch, max_len=200):
    '''
    collate_fn to use in the DataLoader.
    Takes a batch of the text dataset and produces a tensor batch,
    converting text and labels through the tokenizer and labeler:
    the tokenizer is the global function text_pipeline,
    the labeler is the global function label_pipeline.
    max_len is a fixed length: texts shorter than max_len are padded
    with ones (the pad id), and longer texts are truncated from the
    front, keeping the end of the string.
    '''
    labels_list, text_list = [], []
    for _label, _text in batch:
        labels_list.append(label_pipeline(_label))
        text_holder = torch.ones(max_len, dtype=torch.int32) # fixed size tensor of max_len
        processed_text = torch.tensor(text_pipeline(_text), dtype=torch.int32)
        pos = min(max_len, len(processed_text))  # honor the max_len argument
        text_holder[-pos:] = processed_text[-pos:]
        text_list.append(text_holder.unsqueeze(dim=0))
    return torch.FloatTensor(labels_list), torch.cat(text_list, dim=0)

train_dataset = TextData(labels, data)

train_loader = DataLoader(train_dataset, batch_size=2, shuffle=False, collate_fn=tokenize_batch)

lbl, txt = next(iter(train_loader))
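
As a quick check, the resulting batch can go straight into an embedding layer (a sketch; the embedding dimension is arbitrary, and 1 is the pad id used by tokenize_batch above):

import torch.nn as nn

embedding = nn.Embedding(len(vocab), 32, padding_idx=1)
embedded = embedding(txt.long())  # shape: (batch_size, max_len, 32)
print(embedded.shape)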
answered 2021-04-20T18:56:12.350