My problem is very similar to the one raised here: https://github.com/tensorflow/hub/issues/269. That issue is still unanswered, so I will ask it here. Steps to reproduce:

TensorFlow 1.14.0, tensorflow_hub 0.5.0, Python 3.7.4, Windows 10

Here is a sample notebook that reproduces the problem: https://colab.research.google.com/drive/1PKUyoQRP3othu6cu7v7N7yn8K2pjkuKP

  1. Load a trainable tensorflow_hub Inception V3 module:

    module_spec = hub.load_module_spec('https://tfhub.dev/google/imagenet/inception_v3/feature_vector/3')
    height, width = hub.get_expected_image_size(module_spec)
    with tf.Graph().as_default() as graph:
        resized_input_tensor = tf.compat.v1.placeholder(tf.float32, [None, height, width, 3])
        module = hub.Module(module_spec, trainable=True, tags={"train"})
        bottleneck_tensor = module(inputs=dict(images=resized_input_tensor, batch_norm_momentum=0.997), signature="image_feature_vector_with_bn_hparams")
  2. Save all trainable/model/global variables created at this point into three separate "base model" lists. Example:

        base_model trainable_variables vars: 188, ['module/InceptionV3/Conv2d_1a_3x3/weights:0', 'module/InceptionV3/Conv2d_1a_3x3/BatchNorm/beta:0', ...]
        base_model model_variables vars: 188, ['module/InceptionV3/Conv2d_1a_3x3/BatchNorm/moving_mean:0', 'module/InceptionV3/Conv2d_1a_3x3/BatchNorm/moving_variance:0', ...]
        base_model variables vars: 0, []  # empty list
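
The collection code itself is not shown in the post; below is a minimal sketch of one way to snapshot and diff the three lists. The helper names are mine, and the assumption that the third list holds global variables not already counted as trainable or model variables is my reconstruction from the dumps, not confirmed by the original post:

    import tensorflow as tf

    def snapshot_variable_lists():
        # Snapshot the three collections referenced in the dumps (assumed mapping).
        trainable = list(tf.compat.v1.trainable_variables())
        model = list(tf.compat.v1.model_variables())
        counted = {v.name for v in trainable} | {v.name for v in model}
        # "variables": everything in GLOBAL_VARIABLES not counted above.
        others = [v for v in tf.compat.v1.global_variables() if v.name not in counted]
        return trainable, model, others

    def new_variables(before, after):
        # Variables that appeared in a collection since a previous snapshot.
        seen = {v.name for v in before}
        return [v for v in after if v.name not in seen]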

  3. Add a custom classification layer on top of the model:


    # class_count and final_tensor_name are defined elsewhere in the script
    batch_size, previous_tensor_size = bottleneck_tensor.get_shape().as_list()
    ground_truth_input = tf.compat.v1.placeholder(tf.int64, [batch_size], name='GroundTruthInput')
    initial_value = tf.random.truncated_normal([previous_tensor_size, class_count], stddev=0.001)
    layer_weights = tf.Variable(initial_value, name='final_weights')
    layer_biases = tf.Variable(tf.zeros([class_count]), name='final_biases')
    logits = tf.matmul(bottleneck_tensor, layer_weights) + layer_biases
    final_tensor = tf.nn.softmax(logits, name=final_tensor_name)

  4. Similarly, collect the names of all newly added variables into three new, separate "custom" lists:

    custom trainable_variables vars: 2, ['final_weights:0', 'final_biases:0']
    custom model_variables vars: 0, []
    custom variables vars: 0, []

  5. Add the train op. Since the base model contains batch normalization, we have to take care of the update ops. That is why I use tf.contrib.training.create_train_op:

    cross_entropy_mean = tf.compat.v1.losses.sparse_softmax_cross_entropy(labels=ground_truth_input, logits=logits)
    optimizer = tf.compat.v1.train.AdamOptimizer()

    # The update ops are set to the contents of the tf.GraphKeys.UPDATE_OPS collection.
    # The variables to train default to all tf.compat.v1.trainable_variables().
    train_step = tf.contrib.training.create_train_op(cross_entropy_mean, optimizer)
  6. Again, collect the names of all newly added variables into three new, separate "optimizer" lists:

        optimizer trainable_variables vars: 0, []
        optimizer model_variables vars: 0, []
        optimizer variables vars: 383, ['global_step:0', 'beta1_power:0', 'beta2_power:0', 'module/InceptionV3/Conv2d_1a_3x3/weights/Adam:0', 'module/InceptionV3/Conv2d_1a_3x3/weights/Adam_1:0', 'module/InceptionV3/Conv2d_1a_3x3/BatchNorm/beta/Adam:0', 'module/InceptionV3/Conv2d_1a_3x3/BatchNorm/beta/Adam_1:0', 'module/InceptionV3/Conv2d_2a_3x3/weights/Adam:0', 'module/InceptionV3/Conv2d_2a_3x3/weights/Adam_1:0', 'module/InceptionV3/Conv2d_2a_3x3/BatchNorm/beta/Adam:0', ...]
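
For context, create_train_op wires the UPDATE_OPS collection in as a control dependency of the training step. A hand-rolled equivalent would look roughly like the following (a sketch of what the library does internally, not code from the original post):

    update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_step = optimizer.minimize(
            cross_entropy_mean,
            global_step=tf.compat.v1.train.get_or_create_global_step())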

Now run the regular training loop:


    with tf.compat.v1.Session(graph=graph) as sess:
        # Initialize all weights: for the module to their pretrained values,
        # and for the newly added retraining layer to random initial values.
        init = tf.compat.v1.global_variables_initializer()
        sess.run(init)

        #dump the checksum for all the variables lists collected during graph building

        for i in range(1000):
            # Get a batch of input resized images values, calculated fresh
            (train_data, train_ground_truth) = get_random_batch_data(sess, image_lists....)

            #dump the checksum for all the variables lists collected during graph building


            # Feed the input placeholder and ground truth into the graph, and run a training
            # step.
            sess.run([train_step], feed_dict = {
                resized_input_tensor: train_data,
                ground_truth_input: train_ground_truth})

            #dump the checksums once more for all the variables lists collected during graph building
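
The checksum-dump code is not shown in the post. A plausible reconstruction, consistent with the output format below (name, sum, md5; note that an empty list hashes to d41d8cd98f00b204e9800998ecf8427e), might be:

    import hashlib
    import numpy as np

    def dump_checksum(sess, name, variables):
        # Note: sess.run(variables) reads the variables directly; as the answer
        # below explains, this can return stale values for dtype=resource vars.
        values = sess.run(variables) if variables else []
        md5 = hashlib.md5()
        total = 0.0
        for value in values:
            array = np.asarray(value)
            total += float(array.sum())
            md5.update(array.tobytes())
        print(name, total, md5.hexdigest())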

After each training step, the checksums change for only two of the variable lists, the custom trainable variables and the optimizer (global) variables:


    base_model trainable_variables, 2697202.0, cf4682249fc1f48e9a346149f84e503d unchanged
    base_model model_variables, 2936996.0, 6f995f5f0f032604a49a96ceec576cf7 unchanged
    base_model variables, 0, d41d8cd98f00b204e9800998ecf8427e unchanged
    custom trainable_variables, -0.7915199408307672, 889c333a56b9496d412eacdcbeb3bef1 **changed**
    custom model_variables, 0, d41d8cd98f00b204e9800998ecf8427e unchanged
    custom variables, 0, d41d8cd98f00b204e9800998ecf8427e unchanged
    optimizer trainable_variables, 0, d41d8cd98f00b204e9800998ecf8427e unchanged
    optimizer model_variables, 0, d41d8cd98f00b204e9800998ecf8427e unchanged
    optimizer variables, 5580902.81437762, d2cb2d4b253a1c12452f560eea35ac42 **changed**

So the question is: why do the base model variables not change? They include BatchNorm/moving_mean, BatchNorm/moving_variance and Conv2d_1a_3x3/weights, which should certainly be updated during training. Moreover, moving_variance should change in any case, because UPDATE_OPS is included as a dependency of the training step by the tf.contrib.training.create_train_op call. I checked the UPDATE_OPS list, and it contains valid entries, e.g.:

    update ops: <tf.Operation 'module_apply_image_feature_vector_with_bn_hparams/InceptionV3/InceptionV3/Conv2d_1a_3x3/BatchNorm/AssignMovingAvg/AssignSubVariableOp' type=AssignSubVariableOp>, ...


1 Answer


OK, after digging deeper into the problem, I found the following: just taking a variable from the global variables list and calling eval() on it is not enough to get its value. It does return some value, but not the current one (at least this is what happens with variables of a module imported with dtype=resource).

To get the current value, we first have to obtain the value tensor via variable.value() or variable.read_value(), and then run eval() on that returned value tensor.
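
In code, the fix amounts to reading through the value tensor rather than evaluating the variable itself (a sketch; var and sess stand for any variable and session from the setup above):

    # Can be stale for imported dtype=resource variables:
    stale = var.eval(session=sess)
    # Fresh: eval the tensor returned by read_value() (or value()):
    current = var.read_value().eval(session=sess)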

This solved the problem.

answered 2019-08-01T09:56:03.463