
I am training a GAN on my custom image dataset. I have run into a strange problem while training the discriminator in TensorFlow (2.1): when I call the evaluate and train methods on the same batch, the resulting metrics are different. Can anyone tell me why they differ so much? I assume the model.evaluate result is the correct one.

discriminator.train_on_batch(generated_images, zeros)

outputs: loss: 0.3693015, accuracy: 1.0

discriminator.evaluate(generated_images, zeros)

outputs: loss: 0.9416157603263855, accuracy: 0.0

discriminator.predict(generated_images)

outputs: array([[0.9242853 ],
           [0.9242752 ],
           [0.92427176],
           [0.92424864],
           [0.9242797 ],
           [0.9242201 ],
           [0.9242201 ],
           [0.92427665],
           [0.9242958 ],
           [0.9242941 ],
           [0.9243046 ],
           [0.9242498 ],
           [0.92428845],
           [0.92429376],
           [0.9242994 ],
           [0.92427266],
           [0.9242796 ],
           [0.92427236],
           [0.924284  ],
           [0.9242925 ]], dtype=float32)
    

Since my true labels are all zeros, I would expect probabilities of 0.9* to be classified as label 1, so the accuracy should be 0%. That is exactly what the model.evaluate() method returns. But my training looks completely wrong, because the fit and train_on_batch methods return 100% accuracy and a loss close to 0.
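As a sanity check, evaluate()'s accuracy can be reproduced by hand from the predict() output. This is a minimal sketch, not the author's code: the `preds` array is a stand-in for the ~0.92428 probabilities printed above, and the binary cross-entropy formula is the standard one with a small epsilon for numerical safety.

```python
import numpy as np

# Stand-in for the ~0.92428 probabilities printed by predict() above
preds = np.full((20, 1), 0.92428, dtype=np.float32)
labels = np.zeros((20, 1), dtype=np.float32)  # all-zero true labels

# Binary accuracy: threshold the probabilities at 0.5, compare to labels
accuracy = np.mean((preds > 0.5).astype(np.float32) == labels)

# Binary cross-entropy, averaged over the batch
eps = 1e-7
bce = -np.mean(labels * np.log(preds + eps)
               + (1 - labels) * np.log(1 - preds + eps))

print(accuracy)  # 0.0 — every ~0.92 is classified as 1, matching evaluate()
print(bce)
```

The hand-computed accuracy agrees with evaluate(), which supports the assumption that evaluate() is reporting the correct metrics.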

Below is my GAN compilation code:

import numpy as np
from tensorflow.keras.optimizers import Adam

ones = np.ones(batch_size)
zeros = np.zeros(batch_size)
    
generator = build_generator(dims=latent_dim)
print("-------- Generator Summary --------")
generator.summary()
    
# Build & compile discriminator model
discriminator = build_discriminator(image_size=image_size)
discriminator.compile(loss='binary_crossentropy', optimizer=Adam(0.0002, 0.5), metrics=['accuracy'])
print("\n\n-------- Discriminator Summary --------")
discriminator.summary()
    
#Build & compile GAN (combined) model
gan = build_gan(generator, discriminator, dims=latent_dim)
gan.compile(loss='binary_crossentropy', optimizer=Adam(0.0002, 0.5), metrics=['accuracy'])
    
print("\n\n-------- GAN Summary --------")
gan.summary()
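For context, here is a self-contained sketch of how these compiled models are used in one training step. The author's build_generator/build_discriminator are not shown in the question, so tiny toy models stand in for them here; latent_dim, batch_size, and the ones/zeros label arrays mirror the setup above.

```python
import numpy as np
from tensorflow.keras import layers, Model, Sequential
from tensorflow.keras.optimizers import Adam

latent_dim, batch_size = 8, 4
ones = np.ones(batch_size)
zeros = np.zeros(batch_size)

# Toy stand-ins for the author's build_* functions (assumption, not their code)
generator = Sequential([layers.Dense(16, activation='relu', input_dim=latent_dim),
                        layers.Dense(4)])
discriminator = Sequential([layers.Dense(16, activation='relu', input_dim=4),
                            layers.Dense(1, activation='sigmoid')])
discriminator.compile(loss='binary_crossentropy',
                      optimizer=Adam(0.0002, 0.5), metrics=['accuracy'])

# Freeze the discriminator inside the combined model so gan.train_on_batch
# only updates the generator's weights
discriminator.trainable = False
z_in = layers.Input(shape=(latent_dim,))
gan = Model(z_in, discriminator(generator(z_in)))
gan.compile(loss='binary_crossentropy',
            optimizer=Adam(0.0002, 0.5), metrics=['accuracy'])

# One training step: D on fakes (label 0), then G via the combined model (label 1)
noise = np.random.normal(size=(batch_size, latent_dim))
generated_images = generator.predict(noise)
d_loss, d_acc = discriminator.train_on_batch(generated_images, zeros)
g_loss, g_acc = gan.train_on_batch(noise, ones)
```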

Any help would be greatly appreciated. Thanks.

