I am studying SqueezeNet pruning, and I have some questions about pruning code that is based on the paper "Pruning Convolutional Neural Networks for Resource Efficient Inference" (Molchanov et al., ICLR 2017).
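If I follow the paper correctly, the criterion this code implements is the first-order Taylor estimate of the change in loss when a feature map is removed; for the k-th feature map z_l^{(k)} of layer l it is roughly

    \Theta_{TE}\left(z_l^{(k)}\right) = \left| \frac{1}{M} \sum_{m} \frac{\partial C}{\partial z_{l,m}^{(k)}} \, z_{l,m}^{(k)} \right|

where C is the loss and M is the number of elements in the feature map. The code in question: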
    def compute_rank(self, grad):
        # Gradients arrive in reverse order of the forward pass, so map
        # grad_index back to the matching saved activation.
        activation_index = len(self.activations) - self.grad_index - 1
        activation = self.activations[activation_index]  # shape: (N, C, H, W)

        # Elementwise product of the activation and its gradient, summed over
        # batch (0), height (2) and width (3); [0, :, 0, 0] drops the kept
        # singleton dims, leaving one value per channel, shape (C,).
        values = \
            torch.sum((activation * grad), dim=0, keepdim=True) \
                .sum(dim=2, keepdim=True).sum(dim=3, keepdim=True)[0, :, 0, 0].data

        # Turn the sums into an average over batch and spatial positions.
        values = \
            values / (activation.size(0) * activation.size(2) * activation.size(3))

        if activation_index not in self.filter_ranks:
            self.filter_ranks[activation_index] = \
                torch.FloatTensor(activation.size(1)).zero_().cuda()

        # Accumulate the per-channel scores across batches.
        self.filter_ranks[activation_index] += values
        self.grad_index += 1
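For context on where grad comes from, here is my understanding of the wiring: compute_rank is registered as a tensor hook during the forward pass, so gradients arrive in reverse order during backward (which is why activation_index is reversed). A minimal self-contained sketch under that assumption; the FilterPruner wrapper and the toy model below are my own naming, not from the repo:

    import torch
    import torch.nn as nn

    class FilterPruner:
        """Hypothetical wrapper (my naming); illustrates where `grad` comes from."""

        def __init__(self, model):
            self.model = model
            self.activations = []
            self.grad_index = 0
            self.filter_ranks = {}

        def forward(self, x):
            self.activations = []
            self.grad_index = 0
            for layer in self.model:
                x = layer(x)
                if isinstance(layer, nn.Conv2d):
                    # The hook fires during backward with dLoss/d(this tensor);
                    # that tensor is the `grad` argument of compute_rank.
                    x.register_hook(self.compute_rank)
                    self.activations.append(x)
            return x

        def compute_rank(self, grad):
            # Hooks fire in reverse order of the forward pass.
            activation_index = len(self.activations) - self.grad_index - 1
            activation = self.activations[activation_index]
            values = (activation * grad).sum(dim=(0, 2, 3)).data
            values /= activation.size(0) * activation.size(2) * activation.size(3)
            if activation_index not in self.filter_ranks:
                self.filter_ranks[activation_index] = torch.zeros(
                    activation.size(1), device=grad.device)
            self.filter_ranks[activation_index] += values
            self.grad_index += 1

    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    )
    pruner = FilterPruner(model)
    out = pruner.forward(torch.randn(4, 3, 32, 32))
    out.sum().backward()
    for idx, ranks in sorted(pruner.filter_ranks.items()):
        print(idx, ranks.shape)  # 0 -> torch.Size([8]), 1 -> torch.Size([16])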
1) Why does 'values' sum the activations only over in_height (dim 2) and in_width (dim 3)? What about in_channels (dim 1)?
2) Why does filter_ranks[activation_index] depend only on in_channels (dim 1)?
3) Why is the activation multiplied by its gradient, and why are the products summed?
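To make 1) and 2) concrete, here is a toy trace of the shapes through those reductions (random tensors, shapes only):

    import torch

    N, C, H, W = 4, 16, 32, 32     # batch, channels (= filters), height, width
    activation = torch.randn(N, C, H, W)
    grad = torch.randn(N, C, H, W)  # same shape as the activation

    v = torch.sum(activation * grad, dim=0, keepdim=True)  # (1, C, H, W): batch summed out
    v = v.sum(dim=2, keepdim=True)                         # (1, C, 1, W): height summed out
    v = v.sum(dim=3, keepdim=True)                         # (1, C, 1, 1): width summed out
    v = v[0, :, 0, 0]                                      # (C,): one number per filter
    print(v.shape)                                         # torch.Size([16])

So channels is the one dimension that is never reduced, which seems deliberate, and that is exactly what I would like to understand.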