
The basic idea is that I load a trained model (a DCGAN) and use it for anomaly detection on images. During the test phase, I have to run several iterations to evaluate whether an image is an anomaly or not.

For that I have two loss functions in the test setup, which should backpropagate to the generator input and update the latent vector. But only the latent vector should be updated, not the weights in the graph.

Is this possible?

Probably this works if I use a PyTorch variable only for my latent vector and set `requires_grad=False` on the generator's parameters, like in the PyTorch docs.


1 Answer


Yes, you're on the right track there. You can individually set the `requires_grad` attribute of your model parameters (more precisely, of all leaf nodes in your computational graph). I am not familiar with DCGAN, but I assume the latent vector is a trainable parameter too (otherwise a back-propagation update makes little sense to me).

The following code snippet adapted from the PyTorch docs might be useful for your purposes:

import torchvision
from torch import optim

# Load your model
model = torchvision.models.resnet18(pretrained=True)

# Disable gradient computation for all parameters
for param in model.parameters():
    param.requires_grad = False

# Enable gradient computation for a specific layer (here the last fc layer)
for param in model.fc.parameters():
    param.requires_grad = True

# Optimize only the desired parameters (for you: the latent vector)
optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
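For your specific case (optimizing the latent vector itself rather than a layer's weights), a minimal sketch could look like the following. The tiny `nn.Sequential` generator here is just a hypothetical stand-in for your trained DCGAN generator, and the MSE loss stands in for your two test-time losses:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for your trained DCGAN generator
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

# Freeze all generator weights so back-propagation does not update them
for param in generator.parameters():
    param.requires_grad = False

# The latent vector is the only leaf tensor we optimize
z = torch.randn(1, 8, requires_grad=True)
optimizer = torch.optim.SGD([z], lr=1e-2)

# Hypothetical target (e.g. features of the test image)
target = torch.zeros(1, 4)

for _ in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(generator(z), target)
    loss.backward()   # gradients flow through the frozen generator into z
    optimizer.step()  # only z is updated
```

Note that the optimizer is constructed with `[z]` only, so even if some weights did require gradients, `optimizer.step()` would still touch nothing but the latent vector.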
answered 2017-07-17T09:57:13.267