I am trying to attack an ensemble of Keras models following the approach proposed in this paper, using the attack form they describe in Section 5.
To that end, I went ahead and built an ensemble out of a set of pretrained Keras MNIST models, as follows:
import tensorflow as tf
import keras.backend as K
from keras.layers import Input, Average
from keras.models import Model
from cleverhans.attacks import FastGradientMethod
from cleverhans.utils_keras import KerasModelWrapper

def ensemble(models, model_input):
    # Average the output tensors of the individual models into a single ensemble output
    outputs = [model(model_input) for model in models]
    y = Average()(outputs)
    model = Model(model_input, y, name='ensemble')
    return model

img_rows, img_cols, nchannels = 28, 28, 1  # MNIST input dimensions
models = [...]  # list of pretrained Keras MNIST models
model_input = Input(shape=(img_rows, img_cols, nchannels))
model = ensemble(models, model_input)

sess = K.get_session()  # TensorFlow session backing the Keras models
model_wrapper = KerasModelWrapper(model)
attack_par = {'eps': 0.3, 'clip_min': 0., 'clip_max': 1.}
attack = FastGradientMethod(model_wrapper, sess=sess)
x = tf.placeholder(tf.float32, shape=(None, img_rows, img_cols, nchannels))
attack.generate(x, **attack_par)  # ERROR!
On the last line, I get the following error:
----------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-23-1d2e22ceb2ed> in <module>
----> 1 attack.generate(x, **attack_par)
~/ri/safechecks/venv/lib/python3.6/site-packages/cleverhans/attacks/fast_gradient_method.py in generate(self, x, **kwargs)
48 assert self.parse_params(**kwargs)
49
---> 50 labels, _nb_classes = self.get_or_guess_labels(x, kwargs)
51
52 return fgm(
~/ri/safechecks/venv/lib/python3.6/site-packages/cleverhans/attacks/attack.py in get_or_guess_labels(self, x, kwargs)
276 labels = kwargs['y_target']
277 else:
--> 278 preds = self.model.get_probs(x)
279 preds_max = reduce_max(preds, 1, keepdims=True)
280 original_predictions = tf.to_float(tf.equal(preds, preds_max))
~/ri/safechecks/venv/lib/python3.6/site-packages/cleverhans/utils_keras.py in get_probs(self, x)
188 :return: A symbolic representation of the probs
189 """
--> 190 name = self._get_softmax_name()
191
192 return self.get_layer(x, name)
~/ri/safechecks/venv/lib/python3.6/site-packages/cleverhans/utils_keras.py in _get_softmax_name(self)
126 return layer.name
127
--> 128 raise Exception("No softmax layers found")
129
130 def _get_abstract_layer_name(self):
Exception: No softmax layers found
It seems that the target model is required to end in a softmax layer. Technically, though, the fast gradient method does not require this. Is this something CleverHans enforces simply to make the library implementation easier? And is there a way to work around it so that CleverHans can attack a model without a final softmax layer?
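For what it's worth, one workaround I have been considering (just a rough sketch, assuming KerasModelWrapper only needs to find a layer whose activation is softmax, and assuming the sub-models can be made to expose pre-softmax logits rather than probabilities) is to average the sub-models' logits and put a single explicit softmax on top of the ensemble:

from keras.layers import Average, Activation
from keras.models import Model

def ensemble_with_softmax(logit_models, model_input):
    # Average the pre-softmax logits of the sub-models, then apply one
    # explicit softmax layer so the wrapper can locate a softmax activation.
    logits = [m(model_input) for m in logit_models]
    avg_logits = Average()(logits)
    probs = Activation('softmax', name='ensemble_softmax')(avg_logits)
    return Model(model_input, probs, name='ensemble')

I have not verified that this matches the paper, since averaging logits is not the same operation as averaging softmax outputs, so I would still like to know whether CleverHans has an intended way to handle models like this.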