I've recently been experimenting with stacking language models and noticed something interesting: the output embeddings of BERT and XLNet differ from the input embeddings. For example, this snippet:

import torch
import transformers

bert = transformers.BertForMaskedLM.from_pretrained("bert-base-cased")
tok = transformers.BertTokenizer.from_pretrained("bert-base-cased")

sent = torch.tensor(tok.encode("I went to the store the other day, it was very rewarding."))
enc = bert.get_input_embeddings()(sent)   # token ids -> embedding vectors
dec = bert.get_output_embeddings()(enc)   # embedding vectors -> vocab logits

print(tok.decode(dec.softmax(-1).argmax(-1)))

prints the following for me:

,,,,,,,,,,,,,,,,,

I would have expected it to give me back the (formatted) input sequence, since I was under the impression that the input and output token embeddings are tied.
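
A quick way to check the tying itself (a side sketch using the standard Hugging Face accessors, not part of my original snippet) is to compare the underlying weight tensors:

import torch
import transformers

bert = transformers.BertForMaskedLM.from_pretrained("bert-base-cased")

inp = bert.get_input_embeddings()    # nn.Embedding(vocab_size, hidden_size)
out = bert.get_output_embeddings()   # nn.Linear(hidden_size, vocab_size)

# If the embeddings are tied, both modules share the same underlying storage.
print(inp.weight.data_ptr() == out.weight.data_ptr())
# The output layer additionally carries a per-token bias, which the input embedding has no counterpart for.
print(out.bias is not None)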

Interestingly, most other models do not show this behaviour. If you run the same snippet with GPT2, Albert, or Roberta, for example, it outputs the input sequence.
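
For reference, here is the same pattern run against GPT-2 (a sketch; the class and checkpoint names are the standard Hugging Face ones):

import torch
import transformers

gpt2 = transformers.GPT2LMHeadModel.from_pretrained("gpt2")
gtok = transformers.GPT2Tokenizer.from_pretrained("gpt2")

ids = torch.tensor(gtok.encode("I went to the store the other day, it was very rewarding."))
emb = gpt2.get_input_embeddings()(ids)      # token ids -> embedding vectors
logits = gpt2.get_output_embeddings()(emb)  # embedding vectors -> vocab logits

# With GPT-2, the argmax over the vocabulary axis gives the input sequence back.
print(gtok.decode(logits.softmax(-1).argmax(-1)))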

Is this a bug? Or is it the intended behaviour for BERT/XLNet?

1 Answer

Not sure whether this comes too late, but I've experimented a bit with your code, and it does recover the sentence. :)

import torch
import transformers

bert = transformers.BertForMaskedLM.from_pretrained("bert-base-cased")
tok = transformers.BertTokenizer.from_pretrained("bert-base-cased")

sent = torch.tensor(tok.encode("I went to the store the other day, it was very rewarding."))
print("Initial sentence:", sent)
enc = bert.get_input_embeddings()(sent)
dec = bert.get_output_embeddings()(enc)

# Note: softmax over dim 0 (positions) and argmax over dim 1 (vocabulary),
# instead of softmax(-1).argmax(-1) as in the question.
print("Decoded sentence:", tok.decode(dec.softmax(0).argmax(1)))

With this you get the following output:

Initial sentence: tensor([  101,   146,  1355,  1106,  1103,  2984,  1103,  1168,  1285,   117,
         1122,  1108,  1304, 10703,  1158,   119,   102])  
Decoded sentence: [CLS] I went to the store the other day, it was very rewarding. [SEP]
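
A note on why the axes matter here: the MLM head's output layer carries a per-token bias (get_output_embeddings().bias). That bias is constant along the position axis, so a softmax over dim 0 cancels it, whereas softmax(-1).argmax(-1) keeps it in play, and as far as I can tell it is this bias that pushes every position towards the same token in the original snippet. A small sketch to check this (what it prints depends on the checkpoint, and the bias subtraction is just an illustration, not part of the original code):

import torch
import transformers

bert = transformers.BertForMaskedLM.from_pretrained("bert-base-cased")
tok = transformers.BertTokenizer.from_pretrained("bert-base-cased")

bias = bert.get_output_embeddings().bias   # shape: (vocab_size,)
# Which token does the bias favour most? If the bias is what dominates the raw
# logits, this should be a very frequent token such as punctuation.
print("Largest-bias token:", tok.convert_ids_to_tokens([int(bias.argmax())]))

sent = torch.tensor(tok.encode("I went to the store the other day, it was very rewarding."))
enc = bert.get_input_embeddings()(sent)
dec = bert.get_output_embeddings()(enc)

# Subtracting the bias before the vocabulary-axis argmax should then also
# bring back (most of) the original tokens.
print(tok.decode((dec - bias).argmax(-1)))
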
answered 2020-12-12T16:51:45.650