
I am running BERT with TensorFlow via HuggingFace's Transformers library, following this tutorial:

Text Classification with BERT Tokenizer and TF 2.0 in Python

However, instead of creating my own neural network, I used the Transformers model directly, with:

from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model0 = TFBertForSequenceClassification.from_pretrained('bert-base-cased')

I was able to generate the following training data:

(<tf.Tensor: id=6582, shape=(20, 70), dtype=int32, numpy=
 array([[  191, 19888,  1186,     0, ...,     0,     0,     0,     0],
        [ 7353,  1200,  2180,  1197, ...,     0,     0,     0,     0],
        [  164,   112, 12890,  5589, ...,     0,     0,     0,     0],
        [  164,   112, 21718, 19009, ...,     0,     0,     0,     0],
        ...,
        [ 7998,  3101,   164,   112, ...,     0,     0,     0,     0],
        [  164,   112,   154,  4746, ...,     0,     0,     0,     0],
        [  164,   112,  1842, 23228, ...,  1162,   112,   166,     0],
        [  164,   112,   140,  3814, ...,  7443,   119,   112,   166]], dtype=int32)>,
 <tf.Tensor: id=6583, shape=(20,), dtype=int32, numpy=array([0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0], dtype=int32)>)
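(For reference, a minimal sketch of how batched tensors like these might be produced; the padding length of 70, the batch size of 20, and the label column position are assumptions, and train2 is the list of token-id lists built below:)

import tensorflow as tf

# Assumed: pad every id list to a fixed length and pair it with its 0/1 label.
padded = tf.keras.preprocessing.sequence.pad_sequences(train2, maxlen=70, padding='post')
labels = train.iloc[:, 0].values.astype('int32')
train_data = tf.data.Dataset.from_tensor_slices((padded, labels)).batch(20)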

But as far as I can see, there must be a problem with the vocabulary file, i.e. it is undefined. I also get the following warnings when running this:

train2 = []
for i in range(0, train.shape[0]):
    # tokenize the phrase in column 1 and map the tokens to vocabulary ids
    out = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(str(train.iloc[i, 1])))
    print(i)
    train2.append(out)

WARNING:transformers.tokenization_utils:Token indices sequence length is longer than the specified maximum sequence length for this model (6935 > 512). Running this sequence through the model will result in indexing errors
WARNING:transformers.tokenization_utils:Token indices sequence length is longer than the specified maximum sequence length for this model (3574 > 512). Running this sequence through the model will result in indexing errors
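(These warnings mean some phrases tokenize to more than 512 ids, which is the maximum sequence length bert-base models support. A hedged sketch of truncating at encoding time instead, using the max_length argument of encode:)

# Sketch: truncate each phrase to BERT's 512-token limit while encoding.
train2 = []
for i in range(train.shape[0]):
    train2.append(tokenizer.encode(str(train.iloc[i, 1]), max_length=512))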

model0 was created successfully:

Model: "tf_bert_for_sequence_classification"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
bert (TFBertMainLayer)       multiple                  108310272 
_________________________________________________________________
dropout_37 (Dropout)         multiple                  0         
_________________________________________________________________
classifier (Dense)           multiple                  1538      
=================================================================
Total params: 108,311,810
Trainable params: 108,311,810
Non-trainable params: 0
_________________________________________________________________
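(This summary is what model0.summary() prints. The head also explains the parameter count: a 2-unit Dense classifier over BERT's 768-dimensional pooled output gives 768 × 2 + 2 = 1,538 parameters on top of the 108M-parameter encoder.)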

Then, with:

model0.fit(train_data, epochs=2, steps_per_epoch=30, validation_data=test_data, validation_steps=7)

I get the following error:

Train for 1 steps
Epoch 1/2
1/1 [==============================] - 21s 21s/step
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-53-61d611c37004> in <module>
----> 1 history = model0.fit(train_data, epochs=2, steps_per_epoch=1)#,validation_data=test_data, validation_steps=7)

/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    726         max_queue_size=max_queue_size,
    727         workers=workers,
--> 728         use_multiprocessing=use_multiprocessing)
    729 
    730   def evaluate(self,

/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
    322                 mode=ModeKeys.TRAIN,
    323                 training_context=training_context,
--> 324                 total_epochs=epochs)
    325             cbks.make_logs(model, epoch_logs, training_result, ModeKeys.TRAIN)
    326 

/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in run_one_epoch(model, iterator, execution_function, dataset_size, batch_size, strategy, steps_per_epoch, num_samples, mode, training_context, total_epochs)
    121         step=step, mode=mode, size=current_batch_size) as batch_logs:
    122       try:
--> 123         batch_outs = execution_function(iterator)
    124       except (StopIteration, errors.OutOfRangeError):
    125         # TODO(kaftan): File bug about tf function and errors.OutOfRangeError?

/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in execution_function(input_fn)
     84     # `numpy` translates Tensors to values in Eager mode.
     85     return nest.map_structure(_non_none_constant_value,
---> 86                               distributed_function(input_fn))
     87 
     88   return execution_function

/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds)
    455 
    456     tracing_count = self._get_tracing_count()
--> 457     result = self._call(*args, **kwds)
    458     if tracing_count == self._get_tracing_count():
    459       self._call_counter.called_without_tracing()

/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds)
    518         # Lifting succeeded, so variables are initialized and we can run the
    519         # stateless function.
--> 520         return self._stateless_fn(*args, **kwds)
    521     else:
    522       canon_args, canon_kwds = \

/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in __call__(self, *args, **kwargs)
   1821     """Calls a graph function specialized to the inputs."""
   1822     graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
-> 1823     return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
   1824 
   1825   @property

/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _filtered_call(self, args, kwargs)
   1139          if isinstance(t, (ops.Tensor,
   1140                            resource_variable_ops.BaseResourceVariable))),
-> 1141         self.captured_inputs)
   1142 
   1143   def _call_flat(self, args, captured_inputs, cancellation_manager=None):

/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
   1222     if executing_eagerly:
   1223       flat_outputs = forward_function.call(
-> 1224           ctx, args, cancellation_manager=cancellation_manager)
   1225     else:
   1226       gradient_name = self._delayed_rewrite_functions.register()

/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in call(self, ctx, args, cancellation_manager)
    509               inputs=args,
    510               attrs=("executor_type", executor_type, "config_proto", config),
--> 511               ctx=ctx)
    512         else:
    513           outputs = execute.execute_with_cancellation(

/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     65     else:
     66       message = e.message
---> 67     six.raise_from(core._status_to_exception(e.code, message), None)
     68   except TypeError as e:
     69     keras_symbolic_tensors = [

~/.local/lib/python3.7/site-packages/six.py in raise_from(value, from_value)

InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument:  indices[0,624] = 624 is not in [0, 512)
     [[node tf_bert_for_sequence_classification/bert/embeddings/position_embeddings/embedding_lookup (defined at /opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]
  (1) Invalid argument:  indices[0,624] = 624 is not in [0, 512)
     [[node tf_bert_for_sequence_classification/bert/embeddings/position_embeddings/embedding_lookup (defined at /opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]
     [[GroupCrossDeviceControlEdges_0/Adam/Adam/Const/_867]]
0 successful operations.
0 derived errors ignored. [Op:__inference_distributed_function_36559]

Function call stack:
distributed_function -> distributed_function
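(The key line is indices[0,624] = 624 is not in [0, 512): BERT's position-embedding table only covers positions 0 through 511, so at least one input sequence is longer than 512 tokens, exactly what the tokenizer warnings above predicted. A quick diagnostic sketch, assuming train2 holds the token-id lists:)

# Count how many sequences exceed BERT's 512-position limit.
too_long = [i for i, ids in enumerate(train2) if len(ids) > 512]
print(len(too_long), 'sequences exceed 512 tokens')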

My data consists of one column with 2 classes and another column with phrases.

What can I do?


1 Answer


I solved the problem:

I had to infer the data formats and types and make some adjustments. The code became:

from transformers import BertTokenizer, TFBertForSequenceClassification, BertConfig

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased', max_length=2048)
model0 = TFBertForSequenceClassification.from_pretrained('bert-base-multilingual-uncased')

train2 = []
for i in range(0, train.shape[0]):
    # encode adds [CLS]/[SEP]; keep only the first 512 ids (BERT's maximum)
    out = tokenizer.encode(train.iloc[i, 1])[0:512]
    print(i)
    train2.append(out)

optimizer = tf.keras.optimizers.RMSprop(learning_rate=1e-3)
# from_logits=True because TFBertForSequenceClassification returns raw logits
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model0.compile(optimizer=optimizer, loss=loss, metrics=[metric])

history = model0.fit(train_data.repeat(), epochs=15, steps_per_epoch=80,
                     validation_data=test_data, validation_steps=7,
                     use_multiprocessing=True, workers=16, shuffle=True,
                     class_weight=class_weight)
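(Here train_data and test_data are tf.data datasets built from the padded ids, much as in the sketch near the top of the question, and class_weight compensates for the class imbalance. A hedged sketch of one way to compute it, assuming a labels array of 0s and 1s:)

import numpy as np

# Weight each class inversely to its frequency so the rarer class counts more.
counts = np.bincount(labels)
class_weight = {c: len(labels) / (2.0 * counts[c]) for c in range(len(counts))}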

Also, at one point I got an OOM memory error. Another option is to generate a BertConfig, so that the complexity of the neural network can be adjusted to the variance of the data:

configuration = BertConfig(hidden_size=40, num_hidden_layers=4, num_attention_heads=4, hidden_act='gelu',
                           intermediate_size=35, hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1,
                           max_position_embeddings=512, type_vocab_size=2, initializer_range=0.02, layer_norm_eps=1e-12)
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased',max_length=2048)
model0 = TFBertForSequenceClassification(configuration)

Then train as before. Note that building TFBertForSequenceClassification directly from a BertConfig initializes the weights randomly instead of loading a pretrained checkpoint, so this smaller model is trained from scratch.

answered 2020-03-06T15:28:47.233