
I am trying to build an LSTM network that classifies sentences and uses saliency to provide an explanation for the classification. The network has to learn from the true class y_true as well as from the words it should not pay attention to, encoded in Z (a binary mask).

This paper inspired our loss function. Here is what I want my loss function to look like:

[image of the loss: Coût = Coût de classification + Coût d'explication (saillance); in the notation of the code below, loss = CE(y_true, y_pred) + Σᵢ (1 − Zᵢ) · saliencyᵢ²]

Coût de classification (the classification cost) becomes classification_loss in the code below, and Coût d'explication (saillance) (the explanation/saliency cost) becomes saliency_loss (the gradient of the output with respect to the input). I tried to implement this with a custom model in Keras, using TensorFlow as the backend:

import tensorflow as tf
from tensorflow import GradientTape
from tensorflow.keras import metrics
from tensorflow.keras import backend as K
from tensorflow.keras.losses import categorical_crossentropy
from tensorflow.keras.models import Sequential

loss_tracker = metrics.Mean(name="loss")
classification_loss_tracker = metrics.Mean(name="classification_loss")
saliency_loss_tracker = metrics.Mean(name="saliency_loss")
accuracy_tracker = metrics.CategoricalAccuracy(name="accuracy")

class CustomSequentialModel(Sequential):
        
    def _train_test_step(self, data, training):
        # Unpack the data
        X = data[0]["X"]
        Z = data[0]["Z"] # binary mask (1 for important words)
        y_true = data[1]
        
        # gradient tape requires "float32" instead of "int32"
        # X.shape = (None, MAX_SEQUENCE_LENGTH, EMBEDDING_DIM)
        X = tf.cast(X, tf.float32)

        # persistent=True because we call `gradient` more than once
        with GradientTape(persistent=True) as tape:
            # The tape will record everything that happens to X
            # for automatic differentiation later on (used to compute saliency)
            tape.watch(X)
            # Forward pass
            y_pred = self(X, training=training) 
            
            # (1) Compute the classification_loss
            classification_loss = K.mean(
                categorical_crossentropy(y_true, y_pred)
            )
 
            # (2) Compute the saliency loss
            # (2.1) Take the log of the maximum predicted probability
            #       (the target we will differentiate for the saliency)
            log_prediction_proba = K.log(K.max(y_pred))
            
        # (2.2) Compute the gradient of the output wrt the input
        # saliency.shape is (None, MAX_SEQUENCE_LENGTH, None)
        # why isn't it (None, MAX_SEQUENCE_LENGTH, EMBEDDING_DIM) ?!
        saliency = tape.gradient(log_prediction_proba, X)
        # (2.3) Sum along the embedding dimension
        saliency = K.sum(saliency, axis=2)
        # (2.4) Mask out the important words with (1 - Z) and sum
        saliency_loss = K.sum(K.square(saliency)*(1-Z))
        # =>  ValueError: No gradients provided for any variable
        loss = classification_loss + saliency_loss 
        
        trainable_vars = self.trainable_variables
        # ValueError caused by the '+ saliency_loss'
        gradients = tape.gradient(loss, trainable_vars) 
        del tape # garbage collection
        
        if training:
            # Update weights
            self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        
        # Update metrics
        saliency_loss_tracker.update_state(saliency_loss)
        classification_loss_tracker.update_state(classification_loss)
        loss_tracker.update_state(loss)
        accuracy_tracker.update_state(y_true, y_pred)
        
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}
    
    def train_step(self, data):
        return self._train_test_step(data, True)
    
    def test_step(self, data):
        return self._train_test_step(data, False)
    
    @property
    def metrics(self):
        return [
            loss_tracker,
            classification_loss_tracker,
            saliency_loss_tracker,
            accuracy_tracker
        ]

I manage to compute classification_loss as well as saliency_loss just fine, and each is a scalar. However, while tape.gradient(classification_loss, trainable_vars) works, tape.gradient(classification_loss + saliency_loss, trainable_vars) does not, and throws ValueError: No gradients provided for any variable.
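
Here is a minimal, self-contained repro of the same failure with toy scalars (hypothetical values, not my actual model): a gradient taken after the tape context has closed is just an untracked tensor, so a loss built from it has no recorded path back to the weights:

import tensorflow as tf

w = tf.Variable(2.0)
x = tf.constant(3.0)

with tf.GradientTape(persistent=True) as tape:
    tape.watch(x)
    y = w * x  # forward pass, recorded by the tape

# This gradient is computed OUTSIDE the tape context, so the ops
# producing it are not recorded anywhere
dy_dx = tape.gradient(y, x)  # == w, as a plain tensor
loss = tf.square(dy_dx)      # also unrecorded

# No recorded path from `loss` back to `w`, so the gradient is None;
# optimizer.apply_gradients would then raise
# "No gradients provided for any variable"
print(tape.gradient(loss, [w]))  # [None]
del tape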


2 Answers


You are doing computations outside the tape context (after the first gradient call) and then trying to take more gradients. That doesn't work; all operations you want to differentiate need to happen inside the context manager. I would suggest restructuring your code with two nested tapes, like this:

with GradientTape() as loss_tape:
    with GradientTape() as saliency_tape:
        # The tape will record everything that happens to X
        # for automatic differentiation later on (used to compute saliency)
        saliency_tape.watch(X)
        # Forward pass
        y_pred = self(X, training=training) 
        
        # (2) Compute the saliency loss
        # (2.1) Take the log of the maximum predicted probability
        #       (the target we will differentiate for the saliency)
        log_prediction_proba = K.log(K.max(y_pred))
        
    # (2.2) Compute the gradient of the output wrt the input
    # saliency.shape is (None, MAX_SEQUENCE_LENGTH, None)
    # why isn't it (None, MAX_SEQUENCE_LENGTH, EMBEDDING_DIM) ?!
    saliency = saliency_tape.gradient(log_prediction_proba, X)
    # (2.3) Sum along the embedding dimension
    saliency = K.sum(saliency, axis=2)
    # (2.4) Mask out the important words with (1 - Z) and sum
    saliency_loss = K.sum(K.square(saliency)*(1-Z))

    # (1) Compute the classification_loss
    classification_loss = K.mean(
        categorical_crossentropy(y_true, y_pred)
    )

    loss = classification_loss + saliency_loss 
    
trainable_vars = self.trainable_variables
gradients = loss_tape.gradient(loss, trainable_vars)

Now we have one tape responsible for computing the gradients with respect to the input for the saliency. Around it we have another tape that tracks those operations and can then compute the gradient of the gradient (i.e., the gradient of the saliency). The outer tape also computes the gradients of the classification loss. I moved the classification loss into the outer tape's context because the inner tape doesn't need it. Note also that even the addition of the two losses happens inside the outer tape's context: everything has to happen in there, or the computation graph is lost/incomplete and the gradients cannot be computed.
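
Here is the pattern in isolation, as a runnable toy example (values chosen arbitrarily): the inner tape produces a gradient, the outer tape records that computation, so a loss built from the gradient remains differentiable with respect to the variable:

import tensorflow as tf

w = tf.Variable(2.0)
x = tf.constant(3.0)

with tf.GradientTape() as outer_tape:
    with tf.GradientTape() as inner_tape:
        inner_tape.watch(x)
        y = w * x * x                  # forward pass: y = w*x^2

    # Computed INSIDE the outer tape, so it stays differentiable
    dy_dx = inner_tape.gradient(y, x)  # 2*w*x
    loss = tf.square(dy_dx)            # (2*w*x)^2 = 4*w^2*x^2

# d(loss)/dw = 8*w*x^2 = 144.0 for w=2, x=3
print(outer_tape.gradient(loss, [w]))  # [144.0]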

Answered 2020-12-14T00:21:58.073

Try decorating train_step() with @tf.function.
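
Applied to the model above, that suggestion would look like this (a sketch only; note that @tf.function compiles the step into a graph but does not by itself restore a gradient path that was cut outside the tape):

class CustomSequentialModel(Sequential):
    # ... _train_test_step as above ...

    @tf.function  # compile the training step into a graph
    def train_step(self, data):
        return self._train_test_step(data, True)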

Answered 2020-12-13T17:31:14.490