
Perceptual loss

I am trying to replicate the fast style transfer paper (see the diagram above) using the approach described in the Keras guide Training and evaluation with the built-in methods.

I am having trouble understanding how to do this with a custom loss class (see below).

To derive the loss components, I need the following:

  • y_hat, the generated image, to get
(generated_content_features, generated_style_features) = VGG(y_hat)
generated_style_gram = [ utils.gram(value) for value in generated_style_features ]  # utils.gram sketched after this list
  • target_style_gram, which is static, so I can derive it once from target_style_features and cache it: (_, target_style_features) = VGG(y_s)
  • x, the InputImage (same as y_c, the ContentTarget), to get (target_content_features, _) = VGG(x)
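
utils.gram is not shown in the question; a minimal sketch of a typical Gram-matrix helper, assuming batched (b, h, w, c) feature maps and matching the (batch, channels, channels) output shapes printed later, might look like:

import tensorflow as tf

def gram(features):
    # features: (batch, height, width, channels) VGG feature maps
    b, h, w, c = features.shape
    x = tf.reshape(features, (-1, h * w, c))   # flatten the spatial dims
    g = tf.matmul(x, x, transpose_a=True)      # (batch, channels, channels)
    return g / tf.cast(h * w, tf.float32)      # normalization convention varies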

I find myself monkey-patching a lot of things onto the loss class, tf.keras.losses.Loss, in order to derive these values and eventually perform the loss calculation. This is especially true for target_content_features, which requires the input image; I pass it in as y_true, but that is obviously a hack:

y_pred = generated_image # y_hat from diagram, shape=(b,256,256,3)
y_true = x # hack: access the input image here

lossFn = PerceptualLosses_Loss(VGG, target_style_gram)
loss = lossFn(y_true, y_pred)


class PerceptualLosses_Loss(tf.losses.Loss):
  name="PerceptualLosses_Loss"
  reduction=tf.keras.losses.Reduction.AUTO
  RGB_MEAN_NORMAL_VGG = tf.constant( [0.48501961, 0.45795686, 0.40760392], dtype=tf.float32)

  def __init__(self, loss_network, target_style_gram, loss_weights=None):
    super(PerceptualLosses_Loss, self).__init__( name=self.name, reduction=self.reduction )
    self.target_style_gram = target_style_gram # repeated in y_true
    print("PerceptualLosses_Loss init()", type(target_style_gram), type(self.target_style_gram))
    self.VGG = loss_network

  def call(self, y_true, y_pred):

    b,h,w,c = y_pred.shape
    #???: y_pred.shape=(None, 256,256,3), need batch dim for utils.gram(value)
    generated_batch = tf.reshape(y_pred, (BATCH_SIZE,h,w,c) )

    # generated_batch: expecting domain=(+-int), mean centered
    generated_batch = tf.nn.tanh(generated_batch) # domain=(-1.,1.), mean centered

    # reverse VGG mean_center
    generated_batch = tf.add( generated_batch, self.RGB_MEAN_NORMAL_VGG) # domain=(0.,1.)
    generated_batch_BGR_centered = tf.keras.applications.vgg19.preprocess_input(generated_batch*255.)/255.
    generated_content_features, generated_style_features = self.VGG( generated_batch_BGR_centered, preprocess=False )
    generated_style_gram = [ utils.gram(value)  for value in generated_style_features ]  # list

    y_pred = generated_content_features + generated_style_gram
    # print("PerceptualLosses_Loss: y_pred, output_shapes=", type(y_pred), [v.shape for v in y_pred])
    # PerceptualLosses_Loss: y_pred, output_shapes= [
    #   TensorShape([4, 16, 16, 512]), 
    #   TensorShape([4, 64, 64]), 
    #   TensorShape([4, 128, 128]), 
    #   TensorShape([4, 256, 256]), 
    #   TensorShape([4, 512, 512]), 
    #   TensorShape([4, 512, 512])
    # ]

    if tf.is_tensor(y_true):
      # print("detect y_true is image", type(y_true), y_true.shape)
      x_train = y_true
      x_train_BGR_centered = tf.keras.applications.vgg19.preprocess_input(x_train*255.)/255.
      target_content_features, _ = self.VGG(x_train_BGR_centered, preprocess=False )
      # ???: target_content_features[0].shape=(None, None, None, 512), should be shape=(4, 16, 16, 512)
      target_content_features = [tf.reshape(v, generated_content_features[i].shape) for i,v in enumerate(target_content_features)]
    elif isinstance(y_true, tuple):
      print("detect y_true is tuple(target_content_features + self.target_style_gram)", y_true[0].shape)
      target_content_features = y_true[:len(generated_content_features)]
      if self.target_style_gram is None:
        self.target_style_gram = y_true[len(generated_content_features):]
    else:
      assert False, "unexpected result for y_true"

    # losses = tf.keras.losses.MSE(y_true, y_pred)
    def batch_reduce_sum(y_true, y_pred, weight, name):
      losses = tf.zeros(BATCH_SIZE)
      for a,b in zip(y_true, y_pred):
        # batch_reduce_sum()
        loss = tf.keras.losses.MSE(a,b)
        loss = tf.reduce_sum(loss, axis=[i for i in range(1,len(loss.shape))] )
        losses = tf.add(losses, loss)
      return tf.multiply(losses, weight, name="{}_loss".format(name)) # shape=(BATCH_SIZE,)

    c_loss = batch_reduce_sum(target_content_features, generated_content_features, CONTENT_WEIGHT, 'content_loss')
    s_loss = batch_reduce_sum(self.target_style_gram, generated_style_gram, STYLE_WEIGHT, 'style_loss')
    return (c_loss, s_loss)

I also tried pre-computing y_true in a tf.data.Dataset, but while this works fine under eager execution, it causes an error during model.fit():

xy_true_Dataset = tf.data.Dataset.from_generator(
    xyGenerator_y_true(image_ds, VGG, target_style_gram),
    output_types=(tf.float32, (tf.float32,  tf.float32,tf.float32,tf.float32,tf.float32,tf.float32) ),
    output_shapes=(
      (256,256,3),
      ( (16, 16, 512), (64, 64), (128, 128), (256, 256), (512, 512), (512, 512)) 
    ),
  )
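
The generator itself is not shown above; a purely hypothetical sketch of what xyGenerator_y_true might look like, assuming image_ds yields single (256, 256, 3) images and target_style_gram holds batch-1 tensors, would be:

def xyGenerator_y_true(image_ds, VGG, target_style_gram):
    # hypothetical reconstruction -- the actual generator is not shown in the question
    def generator():
        for x in image_ds:  # x: (256, 256, 3) content image
            target_content_features, _ = VGG(tf.expand_dims(x, 0))
            # flatten (content features + style grams) into one tuple, dropping the batch dim
            y_true = tuple(tf.squeeze(t, 0)
                           for t in list(target_content_features) + list(target_style_gram))
            yield x, y_true
    return generator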

# eager execution, y_true: <class 'tuple'> [TensorShape([4, 16, 16, 512]), TensorShape([4, 64, 64]), TensorShape([4, 128, 128]), TensorShape([4, 256, 256]), TensorShape([4, 512, 512]), TensorShape([4, 512, 512])]
# model.fit(), y_true: <class 'tensorflow.python.framework.ops.Tensor'> (None, None, None, None)

ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), for inputs ['output_1'] but instead got the following list of 6 arrays: [<tf.Tensor 'args_1:0' shape=(None, 16, 16, 512) dtype=float32>, <tf.Tensor 'args_2:0' shape=(None, 64, 64) dtype=float32>, <tf.Tensor 'args_3:0' shape=(None, 128, 128) dtype=float32>, <tf.Tensor 'arg...

Is my approach to this problem completely wrong?


1 Answer


Since you don't show your model, I'm not quite sure where the problem is, but here are some things you can try:

  1. You say:

I also tried pre-computing y_true in a tf.data.Dataset, but while this works fine under eager execution, it causes an error during model.fit()

You can enable eager execution when compiling the model, e.g. model.compile(..., run_eagerly=True).
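
For example, with the loss class from the question (illustrative only; it assumes the model being trained is named model):

model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=PerceptualLosses_Loss(VGG, target_style_gram),
    run_eagerly=True,  # run the Python-side loss logic eagerly instead of tracing it
)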

  2. The returned error indicates that you are passing the wrong shape to 'output_1'. Use model.summary() to view the whole model and find which output is 'output_1', then check the model.

  3. If you want to pass extra parameters to the loss function, you can do the following:

def other_parameters(para1):
    def loss_fn(y_true, y_pred):
        # just an example
        return y_true - para1*y_pred
    # Don't forget to return "loss_fn"
    return loss_fn

When compiling the model, do model.compile(..., loss=other_parameters(para1)). Or you can define the loss as a class:

class CustomMSE(keras.losses.Loss):
    def __init__(self, regularization_factor=0.1, name="custom_mse"):
        super().__init__(name=name)
        self.regularization_factor = regularization_factor

    def call(self, y_true, y_pred):
        mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
        reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
        return mse + reg * self.regularization_factor
...
model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE(0.2))
...
model.fit(...)

For more details, see here: Keras: Training and evaluation with the built-in methods, and read Custom losses and Handling losses and metrics that don't fit the standard signature. Note that sometimes you may need to write your own training loop, in which case see this and this.
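
In particular, for a loss that needs the model's input (as target_content_features does here), that last guide section describes model.add_loss, which can reference any tensor in the graph. A minimal sketch, assuming a functional model whose generator network is named transformer_net (a hypothetical name) and using a placeholder loss expression:

import tensorflow as tf
from tensorflow import keras

inputs = keras.Input(shape=(256, 256, 3))   # x, the content image
outputs = transformer_net(inputs)           # y_hat, the generated image
model = keras.Model(inputs, outputs)

# add_loss may close over both inputs and outputs, so no y_true hack is needed;
# the expression below is a placeholder, not the actual perceptual loss
model.add_loss(tf.reduce_mean(tf.square(outputs - inputs)))
model.compile(optimizer=keras.optimizers.Adam())  # no loss= argument required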

Hope this helps.

Answered 2021-06-30T14:40:46.090