
I have the following code to compute the LER (label error rate) metric:

import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Lambda
from tensorflow.keras.models import Model


def ctc_lambda_func(args):
    # Wrap K.ctc_batch_cost so it can be used inside a Lambda layer.
    y_pred, labels, input_length, label_length = args
    return K.ctc_batch_cost(labels, y_pred, input_length, label_length)


def decode(inputs):
    y_pred, seq_len, y_true = inputs

    # ctc_beam_search_decoder expects int32 lengths and time-major logits.
    seq_len = tf.cast(seq_len[:, 0], tf.int32)
    y_pred = tf.transpose(y_pred, perm=[1, 0, 2])

    decoded = tf.nn.ctc_beam_search_decoder(inputs=y_pred,
                                            sequence_length=seq_len,
                                            beam_width=1,
                                            top_paths=1,
                                            )[0][0]

    # LER: mean edit distance between the decoded and the true label sequences.
    y_true_sparse = tf.sparse.from_dense(tf.cast(y_true, dtype=tf.int64))
    diff = tf.reduce_mean(tf.edit_distance(decoded, y_true_sparse))
    return diff

def add_ctc_loss(m):
    labels = Input(name='the_labels', shape=(None,), dtype='float32')
    input_length = Input(name='input_length', shape=(1,), dtype='int64')
    label_length = Input(name='label_length', shape=(1,), dtype='int64')

    # Sequence length after the model's downsampling, needed by ctc_batch_cost.
    output_length = Lambda(m.output_length)(input_length)

    decoded = Lambda(function=decode, name='decoded', output_shape=(1,))(
                    [m.output, input_length, labels])
    loss_out = Lambda(function=ctc_lambda_func, name='ctc', output_shape=(1,))(
                    [m.output, labels, output_length, label_length])

    model = Model(inputs=[m.input, labels, input_length, label_length],
                  outputs=[loss_out, decoded])

    # Both outputs already contain their final values, so each "loss" just
    # passes y_pred through.
    model.compile(loss={"ctc": lambda y_true, y_pred: y_pred,
                        "decoded": lambda y_true, y_pred: y_pred
                        },
                  optimizer="adam",
                  )
    return model
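
A minimal sketch of how the wrapped model gets trained; base_model, X, y_labels, input_len and label_len below are placeholder names rather than my real data pipeline, and the dummy targets exist only because Keras wants one target per output while both pass-through losses ignore y_true:

import numpy as np

wrapped = add_ctc_loss(base_model)  # base_model is a placeholder for the acoustic model

dummy = np.zeros((len(X),))         # ignored: both losses return y_pred directly
wrapped.fit(x=[X, y_labels, input_len, label_len],  # same order as the Model inputs
            y={'ctc': dummy, 'decoded': dummy},
            batch_size=32,
            epochs=150)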

I want to use the CTC loss to update the gradients and the LER as a form of "accuracy" metric. The CTC loss works and updates as expected, but the LER (decoded_loss) stays at 0.0000e+00. I'm not sure what I'm doing wrong; I've already lost a whole day trying to fix this from online examples, and the problem persists. If I print the values inside the decode function I can see that they are being produced correctly (a standalone check is sketched after the training log below), but the progress bar never updates. I want to see how the LER changes over the course of training.

Epoch 1/150
 36/683 [>.............................] - ETA: 59s - loss: 116.2132 - ctc_loss: 116.2132 - decoded_loss: 0.0000e+00
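
For reference, a minimal way to exercise decode in isolation on a made-up batch (the logits, lengths, and labels below are illustrative assumptions, not my real data); running it eagerly prints a single normalized edit distance:

import tensorflow as tf

y_pred = tf.random.normal([1, 5, 4])               # batch=1, 5 time steps, 4 classes (last = blank)
seq_len = tf.constant([[5]], dtype=tf.int64)       # shape (batch, 1), like the input_length tensor
y_true = tf.constant([[1, 2, 3]], dtype=tf.int32)  # dense ground-truth label ids

ler = decode([y_pred, seq_len, y_true])
print(float(ler))  # with random logits this is almost always nonzero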