I have an RNN that uses the default softmax loss function via tf.contrib.seq2seq.sequence_loss() (which I assume is tf.nn.softmax()), but I would like to use tf.nn.softmax_cross_entropy_with_logits() instead. According to the seq2seq.sequence_loss documentation, softmax_loss_function= can be used to override the default loss function:

softmax_loss_function: Function (labels, logits) -> loss-batch to be used instead of the standard softmax (the default if this is None). Note that to avoid confusion, it is required for the function to accept named arguments.
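As I read it, the override has to be a callable taking named labels and logits arguments, something like this hypothetical wrapper (using the sparse variant is my guess, since sequence_loss takes integer targets):

def my_softmax_loss(labels, logits):
    # Hypothetical: sequence_loss would call this with integer label ids
    # and the matching logits, and expects a per-element loss back.
    return tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)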

Here is my working code:

import tensorflow as tf
from tensorflow.python.layers.core import Dense

# Build the graph
train_graph = tf.Graph()
# Set the graph to default to ensure that it is ready for training
with train_graph.as_default():

    # Load the model inputs    
    input_data, targets, keep_prob, lr, target_sequence_length, max_target_sequence_length, source_sequence_length \
    = get_model_inputs()

    # Create the training and inference logits
    training_decoder_output, inference_decoder_output = seq2seq_model(input_data, 
                                                                      targets, 
                                                                      lr, 
                                                                      target_sequence_length, 
                                                                      max_target_sequence_length, 
                                                                      source_sequence_length,
                                                                      len(source_letter_to_int),
                                                                      len(target_letter_to_int),
                                                                      encoding_embedding_size, 
                                                                      decoding_embedding_size, 
                                                                      rnn_size, 
                                                                      num_layers,
                                                                      keep_prob)    

    # Create tensors for the training logits and inference logits
    training_logits = tf.identity(training_decoder_output.rnn_output, 'logits')
    inference_logits = tf.identity(inference_decoder_output.sample_id, name='predictions')

    # Create the weights for sequence_loss
    masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')

    with tf.name_scope("optimization"):

        # Loss function
        cost = tf.contrib.seq2seq.sequence_loss(training_logits, targets, masks)

        # Optimizer
        optimizer = tf.train.AdamOptimizer(lr)

        # Gradient Clipping
        gradients = optimizer.compute_gradients(cost)
        capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]
        train_op = optimizer.apply_gradients(capped_gradients)

        # Add variables to collection in order to load them up when retraining a saved graph
        tf.add_to_collection("cost", cost)
        tf.add_to_collection("train_op", train_op)

My attempt at changing the loss function is as follows (only the code that differs is shown):

with tf.name_scope("optimization"):

    # One-hot encode targets and reshape to match logits, one row per batch_size per step
    y_one_hot = tf.one_hot(targets, len(target_letter_to_int))
    y_reshaped = tf.reshape(y_one_hot, [batch_size, len(target_letter_to_int), 30])

    # Loss function
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=training_logits, labels=y_reshaped)
    loss = tf.reduce_mean(loss)
    cost = tf.contrib.seq2seq.sequence_loss(training_logits, targets, masks, softmax_loss_function=loss)

The line cost = tf.contrib.seq2seq.sequence_loss(training_logits, targets, masks, softmax_loss_function=loss) now gives me "TypeError: 'Tensor' object is not callable." This is one of the most opaque errors I have seen TensorFlow produce, and I have not found much in the way of explanation for it online. Any help would be appreciated.
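My current suspicion, which I have not been able to confirm, is that the parameter wants a function rather than a precomputed loss tensor, so the fix would look something like the sketch below (deriving the one-hot depth from tf.shape(logits) is my assumption about how sequence_loss flattens its inputs):

def softmax_loss(labels, logits):
    # Assumed: labels arrive as flattened integer ids, so one-hot encode
    # them to the vocabulary size before the dense cross-entropy op.
    one_hot_labels = tf.one_hot(labels, depth=tf.shape(logits)[-1])
    return tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_labels, logits=logits)

cost = tf.contrib.seq2seq.sequence_loss(training_logits, targets, masks,
                                        softmax_loss_function=softmax_loss)

If that reading is right, the TypeError would simply mean I passed the already-evaluated loss tensor where a callable was expected.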
