
I have the following code snippets and am trying to understand the difference between BertWordPieceTokenizer and BertTokenizer.

BertWordPieceTokenizer (Rust-based)

from tokenizers import BertWordPieceTokenizer

sequence = "Hello, y'all! How are you Tokenizer  ?"
tokenizer = BertWordPieceTokenizer("bert-base-uncased-vocab.txt")
tokenized_sequence = tokenizer.encode(sequence)
print(tokenized_sequence)
>>>Encoding(num_tokens=15, attributes=[ids, type_ids, tokens, offsets, attention_mask, special_tokens_mask, overflowing])

print(tokenized_sequence.tokens)
>>>['[CLS]', 'hello', ',', 'y', "'", 'all', '!', 'how', 'are', 'you', 'token', '##izer', '[UNK]', '?', '[SEP]']

BertTokenizer

from transformers import BertTokenizer
tokenizer = BertTokenizer("bert-base-cased-vocab.txt")
tokenized_sequence = tokenizer.encode(sequence)
print(tokenized_sequence)
#Output: [19082, 117, 194, 112, 1155, 106, 1293, 1132, 1128, 22559, 17260, 100, 136]
  1. Why does encoding work differently in the two? In BertWordPieceTokenizer it returns an Encoding object, while in BertTokenizer it returns the vocabulary ids.
  2. What is the fundamental difference between BertWordPieceTokenizer and BertTokenizer, since, as I understand it, BertTokenizer also uses WordPiece under the hood?

Thanks


1 Answer


They should produce the same output when you use the same vocabulary (in your example you used bert-base-uncased-vocab.txt for one and bert-base-cased-vocab.txt for the other). The main difference is that the tokenizers from the tokenizers package are faster than the tokenizers from transformers, because they are implemented in Rust.
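To get a rough feel for the speed difference, you could time both on a batch of sentences. This is a minimal sketch, assuming both packages are installed and the same vocab file is available locally; the actual numbers depend on your machine:

import time
from tokenizers import BertWordPieceTokenizer
from transformers import BertTokenizer

sentences = ["Hello, y'all! How are you Tokenizer  ?"] * 10000

fast = BertWordPieceTokenizer("bert-base-uncased-vocab.txt")
slow = BertTokenizer("bert-base-uncased-vocab.txt")

start = time.perf_counter()
fast.encode_batch(sentences)  # Rust-backed batch encoding
print("tokenizers:", time.perf_counter() - start)

start = time.perf_counter()
for s in sentences:           # pure-Python loop, one sentence at a time
    slow.encode(s)
print("transformers:", time.perf_counter() - start)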

When you modify your example so that both use the same vocabulary, you will see that they produce the same ids; the tokenizers tokenizer also returns the other attributes (the Encoding object), while the transformers tokenizer produces only a list of ids:

from tokenizers import BertWordPieceTokenizer

sequence = "Hello, y'all! How are you Tokenizer  ?"
tokenizerBW = BertWordPieceTokenizer("/content/bert-base-uncased-vocab.txt")
tokenized_sequenceBW = tokenizerBW.encode(sequence)
print(tokenized_sequenceBW)
print(type(tokenized_sequenceBW))
print(tokenized_sequenceBW.ids)

Output:

Encoding(num_tokens=15, attributes=[ids, type_ids, tokens, offsets, attention_mask, special_tokens_mask, overflowing])
<class 'Encoding'>
[101, 7592, 1010, 1061, 1005, 2035, 999, 2129, 2024, 2017, 19204, 17629, 100, 1029, 102]

from transformers import BertTokenizer

tokenizerBT = BertTokenizer("/content/bert-base-uncased-vocab.txt")
tokenized_sequenceBT = tokenizerBT.encode(sequence)
print(tokenized_sequenceBT)
print(type(tokenized_sequenceBT))

Output:

[101, 7592, 1010, 1061, 1005, 2035, 999, 2129, 2024, 2017, 19204, 17629, 100, 1029, 102]
<class 'list'>
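To confirm programmatically that the two outputs match, you can compare the Encoding's ids with the plain list (a minimal check, assuming both snippets above were run in the same session):

# The ids attribute of the Encoding should equal the plain list of ids
assert tokenized_sequenceBW.ids == tokenized_sequenceBT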

You mentioned in the comments that your question is more about why the produced output is different. As far as I can tell, this was a design decision made by the developers, and there is no specific reason for it. It is also not the case that BertWordPieceTokenizer from tokenizers is a drop-in replacement for the BertTokenizer from transformers. They still use a wrapper to make it compatible with the transformers tokenizer API. There is a BertTokenizerFast class which has a "clean up" method _convert_encoding to make the BertWordPieceTokenizer fully compatible. Therefore you have to compare the BertTokenizer example above with the following:

from transformers import BertTokenizerFast

sequence = "Hello, y'all! How are you Tokenizer  ?"
tokenizerBW = BertTokenizerFast.from_pretrained("bert-base-uncased")
tokenized_sequenceBW = tokenizerBW.encode(sequence)
print(tokenized_sequenceBW)
print(type(tokenized_sequenceBW))

Output:

[101, 7592, 1010, 1061, 1005, 2035, 999, 2129, 2024, 2017, 19204, 17629, 100, 1029, 102]
<class 'list'>

From my perspective, they built the tokenizers library independently from the transformers library with the objective of being fast and useful.
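If you still need the extra attributes that the Encoding object exposes (tokens, offsets, etc.), the fast tokenizer can surface them through the transformers API as well. A minimal sketch, assuming transformers with a fast tokenizer backend is installed (return_offsets_mapping is only supported by the fast tokenizers):

from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
enc = tokenizer("Hello, y'all! How are you Tokenizer  ?", return_offsets_mapping=True)
print(enc.tokens())           # wordpiece tokens, e.g. ['[CLS]', 'hello', ',', ...]
print(enc["offset_mapping"])  # character span in the input for each token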

Answered 2020-06-16T12:44:41.497