
I am feeding CNN features into a GPflow model. Below are the relevant code blocks from my program. I am using tape.gradient with an Adam optimizer (scheduled learning rate). My accuracy is stuck at 47% and, surprisingly, my loss is still decreasing. It is very strange. I have debugged the program: the CNN features are fine, but the GP model is not learning. Could you please check the training loop and let me know where I am going wrong?

import numpy as np
import tensorflow as tf
import gpflow
from gpflow.config import default_float

def optimization_step(gp_model: gpflow.models.SVGP, image_data, labels):

    # Watch only the GP parameters; the CNN is used as a frozen feature extractor.
    with tf.GradientTape(watch_accessed_variables=False) as tape:
        tape.watch(gp_model.trainable_variables)

        cnn_feat = cnn_model(image_data, training=False)

        cnn_feat = tf.cast(cnn_feat, dtype=default_float())
        labels = tf.cast(labels, dtype=np.int64)

        data = (cnn_feat, labels)

        loss = gp_model.training_loss(data)

        gp_grads = tape.gradient(loss, gp_model.trainable_variables)

    gp_optimizer.apply_gradients(zip(gp_grads, gp_model.trainable_variables))

    return loss, cnn_feat

The training loop is:

def simple_training_loop(gp_model: gpflow.models.SVGP, epochs: int = 3, logging_epoch_freq: int = 10):

    total_loss = []
    features = []

    tf_optimization_step = tf.function(optimization_step, autograph=False)

    for epoch in range(epochs):

        # Exponentially decayed learning rate, clipped from below.
        lr.assign(max(args.learning_rate_clip, args.learning_rate * (args.decay_rate ** epoch)))

        data_loader.shuffle_data(args.is_training)

        for b in range(data_loader.n_batches):

            batch_x, batch_y = data_loader.next_batch(b)

            batch_x = tf.convert_to_tensor(batch_x)
            batch_y = tf.convert_to_tensor(batch_y)

            loss, features_CNN = tf_optimization_step(gp_model, batch_x, batch_y)
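
lr and gp_optimizer are not shown above; they are set up earlier in my program roughly like this (a simplified sketch, the exact initial values come from my command-line args):

    # Mutable learning-rate variable handed to Adam, updated per epoch via lr.assign(...)
    lr = tf.Variable(args.learning_rate, trainable=False)
    gp_optimizer = tf.optimizers.Adam(learning_rate=lr)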

I am restoring the weights of the CNN from a checkpoint saved during transfer learning.
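
The restore itself looks roughly like this (a sketch; args.checkpoint_dir is a placeholder for my actual checkpoint path):

    # Restore only the CNN weights; GP variables are not in this checkpoint.
    checkpoint = tf.train.Checkpoint(model=cnn_model)
    checkpoint.restore(tf.train.latest_checkpoint(args.checkpoint_dir)).expect_partial()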

With more epochs, the loss keeps decreasing, but the accuracy starts to drop as well.

The GP model is declared as follows:

    kernel = gpflow.kernels.Matern32() + gpflow.kernels.White(variance=0.01)

    invlink = gpflow.likelihoods.RobustMax(C)
    likelihood = gpflow.likelihoods.MultiClass(C, invlink=invlink)
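
The SVGP itself is constructed along these lines (a sketch; Z is a placeholder for my inducing points, which are initialised from CNN features, and C is the number of classes):

    Z = initial_inducing_points  # shape (M, feature_dim), e.g. sampled CNN features
    gp_model = gpflow.models.SVGP(
        kernel=kernel,
        likelihood=likelihood,
        inducing_variable=Z,
        num_latent_gps=C,
    )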

The test code is:

    cnn_feat = cnn_model(test_x, training=False)

    cnn_feat = tf.cast(cnn_feat, dtype=default_float())

    mean, var = gp_model.predict_f(cnn_feat)

    preds = np.argmax(mean, 1).reshape(test_labels.shape)
    correct = (preds == test_labels.numpy().astype(int))
    acc = np.average(correct.astype(float)) * 100

1 Answer


"Could you please check whether the training loop is written correctly?"

The training loop looks fine. However, a few parts should be modified for clarity and efficiency.

def simple_training_loop(gp_model: gpflow.models.SVGP, epochs: int = 3, logging_epoch_freq: int = 10):
    total_loss = []
    features = []

    @tf.function
    def compute_cnn_feat(x: tf.Tensor) -> tf.Tensor:
        return tf.cast(cnn_model(x, training=False), dtype=default_float())

    @tf.function
    def optimization_step(cnn_feat: tf.Tensor, labels: tf.Tensor):  # **Change 1.**
        with tf.GradientTape(watch_accessed_variables=False) as tape:
            tape.watch(gp_model.trainable_variables)
            data = (cnn_feat, labels)
            loss = gp_model.training_loss(data)
        gp_grads = tape.gradient(loss, gp_model.trainable_variables)  # **Change 2.**
        gp_optimizer.apply_gradients(zip(gp_grads, gp_model.trainable_variables))
        return loss

    for epoch in range(epochs):
        lr.assign(max(args.learning_rate_clip, args.learning_rate * (args.decay_rate ** epoch)))
        data_loader.shuffle_data(args.is_training)
        for b in range(data_loader.n_batches):
            batch_x, batch_y = data_loader.next_batch(b)
            batch_x = tf.convert_to_tensor(batch_x)
            batch_y = tf.convert_to_tensor(batch_y, dtype=default_float())
            cnn_feat = compute_cnn_feat(batch_x)  # **Change 3.**
            loss = optimization_step(cnn_feat, batch_y)

Change 1. The signature of a function wrapped with tf.function should not contain mutable objects. tf.function treats non-tensor arguments as static Python values, so passing the model in can cause unnecessary retracing; instead, pass only tensors and let the function close over gp_model.

Change 2. The gradient tape tracks all computations performed inside the context manager, including the computation of the gradients themselves, i.e. tape.gradient(...). This means your code was doing unnecessary work, so the call is moved outside the with block.

Change 3. Same reason as in Change 2: I moved the CNN feature extraction outside of the gradient tape.
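
As a side note on the design: since the tape is created with watch_accessed_variables=False and only the GP variables are explicitly watched, the CNN weights were never tracked in the first place. Changes 2 and 3 therefore do not change the computed gradients; they only avoid recording computations that would never be differentiated.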

Answered 2020-06-01T08:42:03.057