Part of my model is inception_v3:

logits, end_points = inception.inception_v3(input, num_classes=num_classes, is_training=trainable)

predictions = end_points['Multi_predictions_pretrained_model'] = tf.nn.sigmoid(
        logits, name='Multi_predictions_pretrained_model')

I train it with is_training=True and then save my model. When I evaluate the model in a new run, I set is_training=False.

The problem is that the prediction output is almost entirely NaN:

There is a nan : True                                                                              
Number of nan : 5378                                                                              
Pre-logits: [[[  1.90298520e+36   0.00000000e+00   7.08422267e+33 ...,  4.63560017e+34 
  3.25943330e+36   6.92397968e+35]]]                                           
Logits : [ nan  nan  nan ...,  nan  nan  nan]                                              
Prediction : [ nan  nan  nan ...,  nan  nan  nan]   

If I set is_training=True, the model works fine; the number of NaNs in the prediction is zero:

There is a nan: False                                                                               
Number of nan: 0                                                                                   
Pre-logits: [[[ 0.05161751  0.          0.         ...,  0.10696397  0.09036615  0.        ]]]  
Logits : [ -9.96004391 -10.36448002 -10.86166286 ..., -13.0117816 -9.29876232 -8.85321808]                                                                      
Prediction : [  4.72484280e-05   3.15318794e-05   1.91792424e-05 ...,   2.23384995e-06  9.15290802e-05   1.42900652e-04]    

What is the difference between False and True? I found that this flag affects dropout and batch_norm.

For dropout:

is_training: A bool `Tensor` indicating whether or not the model
  is in training mode. If so, dropout is applied and values scaled.
  Otherwise, inputs is returned.
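The dropout docstring can be illustrated with a minimal pure-Python sketch (not TF's actual implementation; the function name and values are only illustrative). In training mode, units are randomly dropped and the survivors are scaled by 1/keep_prob so the expected activation is unchanged; in evaluation mode, the inputs pass through untouched:

```python
import random

def dropout(inputs, keep_prob, is_training):
    """Inverted dropout: drop units at train time and scale the rest;
    at eval time, return the inputs unchanged."""
    if not is_training:
        return list(inputs)  # eval: identity, as the docstring says
    out = []
    for x in inputs:
        if random.random() < keep_prob:
            out.append(x / keep_prob)  # scale so the expectation matches
        else:
            out.append(0.0)            # dropped unit
    return out

random.seed(0)
train_out = dropout([1.0, 2.0, 3.0, 4.0], keep_prob=0.5, is_training=True)
eval_out = dropout([1.0, 2.0, 3.0, 4.0], keep_prob=0.5, is_training=False)
```

With keep_prob=0.5, every surviving unit in train_out is doubled, while eval_out is exactly the input.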

For batch norm:

is_training: Whether or not the layer is in training mode. In training mode
  it would accumulate the statistics of the moments into `moving_mean` and
  `moving_variance` using an exponential moving average with the given
  `decay`. When it is not in training mode then it would use the values of
  the `moving_mean` and the `moving_variance`.
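The mechanism the batch_norm docstring describes can be sketched in plain Python (a simplified illustration, not TF's real kernels; the batch statistics below are made up). In training mode each step folds the current batch's statistics into exponential moving averages; in evaluation mode those accumulated moving statistics are used to normalize. Note how, with the default-style decay of 0.99, the moving values barely move away from their initial values (0 and 1) after only a few updates:

```python
def update_moving(moving, batch_value, decay=0.99):
    """The exponential-moving-average update batch_norm performs for
    moving_mean / moving_variance at each training step."""
    return decay * moving + (1 - decay) * batch_value

def batch_norm(x, mean, variance, eps=1e-3):
    """Normalization as done at eval time, using the moving statistics."""
    return [(v - mean) / (variance + eps) ** 0.5 for v in x]

# Training: each batch updates the moving statistics.
moving_mean, moving_var = 0.0, 1.0  # typical initial values
for batch_mean, batch_var in [(5.0, 4.0), (5.2, 3.8), (4.9, 4.1)]:
    moving_mean = update_moving(moving_mean, batch_mean)
    moving_var = update_moving(moving_var, batch_var)

# Evaluation (is_training=False): normalize with the moving statistics.
normalized = batch_norm([5.0, 6.0], moving_mean, moving_var)
```

After three steps moving_mean is still only about 0.15 even though every batch mean was near 5, so evaluating with these stale statistics shifts the activations far from what the network saw in training.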

How can I fix this problem?

Thanks.

1 Answer

I found the solution.

I followed this guide for batch normalization in TensorFlow: http://ruishu.io/2016/12/27/batchnorm/

In particular, this note:

'''Note: When is_training is True the moving_mean and moving_variance 
need to be updated, by default the update_ops are placed in 
tf.GraphKeys.UPDATE_OPS so they need to be added as a dependency to 
the train_op, example:'''

update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    # Ensures that we execute the update_ops before performing the train_step
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
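Why forgetting to wire in the update ops produces the symptom above can be seen in a plain-Python sketch (illustrative numbers only, not the real graph). Without the updates, moving_mean keeps its initial value forever, so evaluation with is_training=False normalizes with statistics the network never actually produced:

```python
def ema(moving, value, decay=0.99):
    # The kind of update that the ops in tf.GraphKeys.UPDATE_OPS run
    # once per training step when made a dependency of train_op.
    return decay * moving + (1 - decay) * value

batch_means = [5.0] * 1000  # pretend every training batch has mean 5.0

# Run A: update ops run on every step, so moving_mean converges.
mean_with_updates = 0.0
for m in batch_means:
    mean_with_updates = ema(mean_with_updates, m)

# Run B: update ops were never added as a dependency; the moving
# statistic silently keeps its initial value.
mean_without_updates = 0.0

print(mean_with_updates)     # close to 5.0
print(mean_without_updates)  # still 0.0: eval normalizes with the
                             # wrong statistics and activations explode
```

With the control_dependencies block above, every train_step forces these updates to run first, so the restored moving statistics are valid when is_training=False.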
answered 2017-10-28T18:15:41.760