
I am using ResNet50 for transfer learning. I created a new model from the pretrained model ('imagenet') that Keras provides.

After training my new model, I save it as follows:

# Save the Siamese Network architecture
siamese_model_json = siamese_network.to_json()
with open("saved_model/siamese_network_arch.json", "w") as json_file:
    json_file.write(siamese_model_json)
# save the Siamese Network model weights
siamese_network.save_weights('saved_model/siamese_model_weights.h5')

Later, I reload it as follows to make some predictions:

from keras.models import model_from_json

# Recreate the architecture from JSON, then load the trained weights into it
with open('saved_model/siamese_network_arch.json', 'r') as json_file:
    loaded_model_json = json_file.read()
siamese_network = model_from_json(loaded_model_json)
siamese_network.load_weights('saved_model/siamese_model_weights.h5')

Then I check whether the weights look reasonable, as follows (for one layer):

print("bn3d_branch2c:\n",
      siamese_network.get_layer('model_1').get_layer('bn3d_branch2c').get_weights())

If I train my network for only 1 epoch, I see reasonable values there.

However, if I train the model for 18 epochs (which takes 5-6 hours, as my computer is very slow), I only see NaN values, like this:

bn3d_branch2c:
 [array([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
       nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
       nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
       nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
       nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
       nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
       ...

What is the trick here?
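As an aside, the per-layer check above can be extended to scan every layer at once for NaNs. A minimal sketch, assuming numpy is imported as np:

import numpy as np

# Walk the nested ResNet50 sub-model and flag any layer holding NaN weights
inner_model = siamese_network.get_layer('model_1')
for layer in inner_model.layers:
    for w in layer.get_weights():
        if np.isnan(w).any():
            print('NaN weights found in layer:', layer.name)
            break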

Appendix 1:

Here is how I create my model.

First, here is the triplet_loss function, which I will need later.

from keras import backend as K

def triplet_loss(inputs, dist='euclidean', margin='maxplus'):
    anchor, positive, negative = inputs
    positive_distance = K.square(anchor - positive)
    negative_distance = K.square(anchor - negative)
    if dist == 'euclidean':
        positive_distance = K.sqrt(K.sum(positive_distance, axis=-1, keepdims=True))
        negative_distance = K.sqrt(K.sum(negative_distance, axis=-1, keepdims=True))
    elif dist == 'sqeuclidean':
        positive_distance = K.sum(positive_distance, axis=-1, keepdims=True)
        negative_distance = K.sum(negative_distance, axis=-1, keepdims=True)
    loss = positive_distance - negative_distance
    if margin == 'maxplus':
        loss = K.maximum(0.0, 2 + loss)
    elif margin == 'softplus':
        loss = K.log(1 + K.exp(loss))

    returned_loss = K.mean(loss)
    return returned_loss
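As a quick sanity check (my own sketch, not part of the model code), triplet_loss can be evaluated on random dummy encodings; with margin='maxplus' the result should be a single finite scalar >= 0:

import numpy as np
from keras import backend as K

# A dummy batch of 4 embeddings, 128 dimensions each
anchor = K.constant(np.random.rand(4, 128))
pos    = K.constant(np.random.rand(4, 128))
neg    = K.constant(np.random.rand(4, 128))
print(K.eval(triplet_loss([anchor, pos, neg])))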

Here is how I build the model from start to finish. I give the complete code to paint the exact picture.

from keras.applications.resnet50 import ResNet50
from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras.optimizers import Adam
from keras import backend as K

model = ResNet50(weights='imagenet')

# Remove the last layer (Needed to later be able to create the Siamese Network model)
model.layers.pop()

# First freeze all layers of ResNet50. Transfer Learning to be applied.
for layer in model.layers:
    layer.trainable = False

# All Batch Normalization layers still need to be trainable so that the "mean"
# and "standard deviation (std)" params can be updated with the new training data
model.get_layer('bn_conv1').trainable = True
model.get_layer('bn2a_branch2a').trainable = True
model.get_layer('bn2a_branch2b').trainable = True
model.get_layer('bn2a_branch2c').trainable = True
model.get_layer('bn2a_branch1').trainable = True
model.get_layer('bn2b_branch2a').trainable = True
model.get_layer('bn2b_branch2b').trainable = True
model.get_layer('bn2b_branch2c').trainable = True
model.get_layer('bn2c_branch2a').trainable = True
model.get_layer('bn2c_branch2b').trainable = True
model.get_layer('bn2c_branch2c').trainable = True
model.get_layer('bn3a_branch2a').trainable = True
model.get_layer('bn3a_branch2b').trainable = True
model.get_layer('bn3a_branch2c').trainable = True
model.get_layer('bn3a_branch1').trainable = True
model.get_layer('bn3b_branch2a').trainable = True
model.get_layer('bn3b_branch2b').trainable = True
model.get_layer('bn3b_branch2c').trainable = True
model.get_layer('bn3c_branch2a').trainable = True
model.get_layer('bn3c_branch2b').trainable = True
model.get_layer('bn3c_branch2c').trainable = True
model.get_layer('bn3d_branch2a').trainable = True
model.get_layer('bn3d_branch2b').trainable = True
model.get_layer('bn3d_branch2c').trainable = True
model.get_layer('bn4a_branch2a').trainable = True
model.get_layer('bn4a_branch2b').trainable = True
model.get_layer('bn4a_branch2c').trainable = True
model.get_layer('bn4a_branch1').trainable = True
model.get_layer('bn4b_branch2a').trainable = True
model.get_layer('bn4b_branch2b').trainable = True
model.get_layer('bn4b_branch2c').trainable = True
model.get_layer('bn4c_branch2a').trainable = True
model.get_layer('bn4c_branch2b').trainable = True
model.get_layer('bn4c_branch2c').trainable = True
model.get_layer('bn4d_branch2a').trainable = True
model.get_layer('bn4d_branch2b').trainable = True
model.get_layer('bn4d_branch2c').trainable = True
model.get_layer('bn4e_branch2a').trainable = True
model.get_layer('bn4e_branch2b').trainable = True
model.get_layer('bn4e_branch2c').trainable = True
model.get_layer('bn4f_branch2a').trainable = True
model.get_layer('bn4f_branch2b').trainable = True
model.get_layer('bn4f_branch2c').trainable = True
model.get_layer('bn5a_branch2a').trainable = True
model.get_layer('bn5a_branch2b').trainable = True
model.get_layer('bn5a_branch2c').trainable = True
model.get_layer('bn5a_branch1').trainable = True
model.get_layer('bn5b_branch2a').trainable = True
model.get_layer('bn5b_branch2b').trainable = True
model.get_layer('bn5b_branch2c').trainable = True
model.get_layer('bn5c_branch2a').trainable = True
model.get_layer('bn5c_branch2b').trainable = True
model.get_layer('bn5c_branch2c').trainable = True

# Used when compiling the siamese network: the network's output is already
# the triplet loss itself, so this just averages y_pred and ignores y_true
def identity_loss(y_true, y_pred):
    return K.mean(y_pred - 0 * y_true)

# Create the siamese network

x = model.get_layer('flatten_1').output # layer 'flatten_1' is the last layer of the model
model_out = Dense(128, activation='relu',  name='model_out')(x)
model_out = Lambda(lambda x: K.l2_normalize(x, axis=-1))(model_out)

new_model = Model(inputs=model.input, outputs=model_out)

anchor_input = Input(shape=(224, 224, 3), name='anchor_input')
pos_input = Input(shape=(224, 224, 3), name='pos_input')
neg_input = Input(shape=(224, 224, 3), name='neg_input')

encoding_anchor   = new_model(anchor_input)
encoding_pos      = new_model(pos_input)
encoding_neg      = new_model(neg_input)

loss = Lambda(triplet_loss)([encoding_anchor, encoding_pos, encoding_neg])

siamese_network = Model(inputs  = [anchor_input, pos_input, neg_input], 
                        outputs = loss) # Note that the output of the model is the 
                                        # return value from the triplet_loss function above

siamese_network.compile(optimizer=Adam(lr=.0001), loss=identity_loss)

One thing to note is that I make all batch normalization layers "trainable" so that the BN-related "mean" and "standard deviation (std)" parameters can be updated with my training data. This produces a lot of lines, but I could not find a shorter solution.
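That said, a shorter variant would probably be to loop over the layers and match on the layer type. A sketch, assuming Keras's standard BatchNormalization class (I have not verified it gives an identical result):

from keras.layers import BatchNormalization

# Mark every batch normalization layer in ResNet50 as trainable
for layer in model.layers:
    if isinstance(layer, BatchNormalization):
        layer.trainable = True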


1 Answer


This solution was inspired by @Gurmeet Singh's recommendation above.

It seems that during training, the weights of the trainable layers grew so large after a while that all of them were set to NaN. This made me think I was saving and reloading the model the wrong way, but the real problem was exploding gradients.

I also saw a similar issue in a GitHub discussion, which can be checked out here: github.com/keras-team/keras/issues/2378. At the bottom of that thread, using a lower learning rate is recommended to avoid the problem.

In this link (Keras ML library: how to do weight clipping after gradient updates? TensorFlow backend), two solutions are discussed; an illustrative sketch follows this list:

- Using the clipvalue parameter in the optimizer, which simply clips the computed gradient values element-wise at the configured value. But this is not the recommended solution (as explained in the other thread).
- The second is the clipnorm parameter, which simply clips the computed gradient values whenever their L2 norm exceeds the value given by the user.
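For illustration, both options are plain keyword arguments on the Keras optimizer; the values below are made up:

from keras.optimizers import Adam

opt_value = Adam(lr=0.0001, clipvalue=0.5)  # clip each gradient element into [-0.5, 0.5]
opt_norm  = Adam(lr=0.0001, clipnorm=1.0)   # rescale gradients whose L2 norm exceeds 1.0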

I also considered using input normalization (to avoid exploding gradients), but then figured out that it is already done in the preprocess_input(..) function. (Check this link for details: https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet50/preprocess_input) Although the mode parameter can be set to "tf" (it defaults to "caffe" otherwise), which might help further (since mode="tf" scales pixels between -1 and 1), I did not try it.
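For illustration, the generic imagenet_utils helper exposes the mode parameter directly; a sketch (whether resnet50.preprocess_input itself forwards mode depends on the Keras version):

import numpy as np
from keras.applications.imagenet_utils import preprocess_input

x = np.random.randint(0, 256, size=(1, 224, 224, 3)).astype('float32')
x_tf = preprocess_input(x, mode='tf')  # scales pixels into the [-1, 1] range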

To summarize, I changed two things when compiling the model to be trained:

The changed line is as follows:

Before the change:

siamese_network.compile(optimizer=Adam(lr=.0001),
                        loss=identity_loss)

After the change:

siamese_network.compile(optimizer=Adam(lr=.00004, clipnorm=1.),
                        loss=identity_loss)

1. Using a smaller learning rate to make the gradient updates smaller.
2. Using the clipnorm parameter to clip the computed gradients by their L2 norm.

I then trained my network again for 10 epochs. The loss decreases as desired, though more slowly now. And I no longer experience any problems when saving and restoring my model. (At least not after 10 epochs; that takes time on my computer.)

Note that I set the value of clipnorm to 1. This means that the L2 norm of the gradients is computed first, and if it exceeds 1, the gradient is clipped accordingly. I see this as a hyperparameter that can be optimized: it affects the time needed to train the model while helping to avoid the exploding gradients problem.
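Concretely, clipping by norm just rescales the gradient vector. A tiny numpy sketch of the idea (not Keras internals):

import numpy as np

g = np.array([3.0, 4.0])       # gradient with L2 norm 5.0
clipnorm = 1.0
norm = np.linalg.norm(g)
if norm > clipnorm:
    g = g * clipnorm / norm    # rescaled to [0.6, 0.8]; norm is now exactly 1.0
print(g)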

Answered 2018-07-12T04:50:48.413