
I'm looking at this answer for running evaluation metrics during training:

How to use evaluation_loop with train_loop in tf-slim

It seems that overriding train_step_fn=train_step_fn is the sensible approach. But I want to run a validation loop, not an evaluation. My graph looks like this:

with tf.Graph().as_default():

    train_dataset = slim.dataset.Dataset(data_sources= "train_*.tfrecord")
    train_images, _, train_labels = load_batch(train_dataset, 
                batch_size=mini_batch_size,
                is_training=True)

    val_dataset = slim.dataset.Dataset(data_sources= "validation_*.tfrecord")
    val_images, _, val_labels = load_batch(val_dataset, 
                batch_size=mini_batch_size,
                is_training=False)


    with slim.arg_scope(vgg.vgg_arg_scope(weight_decay=0.0005)):
        net, end_points = vgg.vgg_16(train_images, 
                                      num_classes=10,
                                      is_training=is_training)
    predictions = tf.nn.softmax(net)
    labels = train_labels

    ...

    init_fn = slim.assign_from_checkpoint_fn(
        checkpoint_path,
        slim.get_variables_to_restore(exclude=['vgg_16/fc8']),
        ignore_missing_vars=True
        )     

    final_loss = slim.learning.train(train_op, TRAIN_LOG, 
                        train_step_fn=train_step_fn,
                        init_fn=init_fn,
                        global_step=global_step,
                        number_of_steps=steps,
                        save_summaries_secs=60,
                        save_interval_secs=600,
                        session_config=sess_config,
                      )

I'd like to add something like this, to run a mini-batch validation loop against the network's current weights:

    def validate_on_checkpoint(sess, *args, **kwargs ):
        loss,mean,stddev = sess.run([val_loss, val_rms_mean, val_rms_stddev], 
                        feed_dict={images: val_images, 
                                   labels: val_labels, 
                                   is_training: is_training })
        validation_writer = tf.train.SummaryWriter(LOG_DIR + '/validation')                                              
        validation_writer.add_summary(loss, global_step)
        validation_writer.add_summary(mean, global_step)
        validation_writer.add_summary(stddev, global_step)


    def train_step_fn(sess, *args, **kwargs):
        total_loss, should_stop = train_step(sess, *args, **kwargs)

        if train_step_fn.step % FLAGS.validation_every_n_step == 0:
            validate_on_checkpoint(sess, *args, **kwargs )

        train_step_fn.step += 1
        return [total_loss, should_stop]   

But I get the error: Graph is finalized and cannot be modified.

Conceptually, I'm not sure how this should be added. The training loop needs the gradients, losses, and weight updates for the network, but the validation loop skips all of those. If I try to modify the graph I keep getting Graph is finalized and cannot be modified, and if I try an if is_training: else: approach I get XXX is not defined.


1 Answer


I found a way to make this work from several other stackoverflow answers. Here are the basics:

1) Get inputs and labels for the train dataset and the validation dataset:

x_train, y_train = produce_batch(320)
x_validation, y_validation = produce_batch(320)
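
For context, produce_batch() is not defined in this snippet; it comes from the linked Colaboratory notebook. A minimal sketch of what such a helper could look like, assuming a toy regression dataset generated with numpy (the names and shapes here are illustrative, not the notebook's exact code):

    import numpy as np

    def produce_batch(batch_size, noise=0.3):
        # Hypothetical stand-in: inputs x and noisy targets around y = 0.5*x + 2
        xs = np.random.uniform(-10.0, 10.0, size=(batch_size, 1)).astype(np.float32)
        ys = (0.5 * xs + 2.0 + noise * np.random.randn(batch_size, 1)).astype(np.float32)
        return xs, ys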

2) Use reuse=True to reuse the model weights between the train and validation passes. Here is one way to do that:

  with tf.variable_scope("model") as scope:
    # Make the model, reuse weights for validation batches
    predictions, nodes = regression_model(inputs, is_training=True)
    scope.reuse_variables()
    val_predictions, _ = regression_model(val_inputs, is_training=False)
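
On TF 1.4+, the same weight sharing can also be expressed with reuse=tf.AUTO_REUSE instead of calling scope.reuse_variables(); just a variation on the snippet above, not what the notebook itself does:

    # Sketch of the AUTO_REUSE variant (TF >= 1.4); equivalent weight sharing.
    with tf.variable_scope("model", reuse=tf.AUTO_REUSE):
        predictions, nodes = regression_model(inputs, is_training=True)
    with tf.variable_scope("model", reuse=tf.AUTO_REUSE):
        val_predictions, _ = regression_model(val_inputs, is_training=False)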

3) Define your losses, and put your validation loss in a separate collection so it does not get added to the train losses by tf.losses.get_losses():

  loss = tf.losses.mean_squared_error(labels=targets, predictions=predictions)
  total_loss = tf.losses.get_total_loss()

  val_loss = tf.losses.mean_squared_error(labels=val_targets, predictions=val_predictions,
                                          loss_collection="validation"
                                         )
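
If you later want to read those validation losses back, tf.losses.get_losses() also accepts a loss_collection argument, so the custom collection stays easy to query:

    # total_loss above only sums the default tf.GraphKeys.LOSSES collection,
    # so the validation loss stays out of the training objective.
    validation_losses = tf.losses.get_losses(loss_collection="validation")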

4) Define a train_step_fn() that triggers the validation loop as often as you need:

VALIDATION_INTERVAL = 1000  # validate every 1000 steps
# slim.learning.train(train_step_fn=train_step_fn)
def train_step_fn(sess, train_op, global_step, train_step_kwargs):
  """
  slim.learning.train_step():
    train_step_kwargs = {summary_writer:, should_log:, should_stop:}
  """
  train_step_fn.step += 1  # or use global_step.eval(session=sess)

  # calc training losses
  total_loss, should_stop = slim.learning.train_step(sess, train_op, global_step, train_step_kwargs)


  # validate on interval
  if train_step_fn.step % VALIDATION_INTERVAL == 0:
    validation_loss, validation_delta = sess.run([val_loss, summary_validation_delta])
    print(">> global step {}:    train={}   validation={}  delta={}".format(train_step_fn.step,
                        total_loss, validation_loss, validation_loss - total_loss))


  return [total_loss, should_stop]
train_step_fn.step = 0
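
If you also want the validation numbers in TensorBoard (as the question asked), one option is to write a simple-value summary to a separate writer from inside train_step_fn. A rough sketch, assuming LOG_DIR is defined elsewhere and the writer is created once, outside the function:

    # Hypothetical helper: logs the validation loss under its own run directory
    # so it appears alongside the training curves in TensorBoard.
    validation_writer = tf.summary.FileWriter(LOG_DIR + '/validation')

    def log_validation_loss(loss_value, step):
        summary = tf.Summary(value=[tf.Summary.Value(tag='losses/val_loss',
                                                     simple_value=float(loss_value))])
        validation_writer.add_summary(summary, step)
        validation_writer.flush()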

5) Add train_step_fn() to your training loop:

  # Run the training inside a session.
  final_loss = slim.learning.train(
      train_op,
      train_step_fn=train_step_fn,
      ...
      )

See the full working results in this Colaboratory notebook.

Answered 2018-02-28T01:19:29.563