
I am using TensorFlow version 1.5 on Windows 10. I am using the TensorFlow-Slim model of the Inception V4 network taken from their website, loading their pretrained weights and adding my own layers at the end to classify 120 different objects. Here is the full code, except for the lines containing the module imports and the dataset paths.

image_size = 299
tf.logging.set_verbosity(tf.logging.INFO)
with slim.arg_scope(inception_blocks_v4.inception_v4_arg_scope()):
    X_input = tf.placeholder(tf.float32, shape = (None, image_size, image_size, 3))
    Y_label = tf.placeholder(tf.float32, shape = (None, num_classes))        
    targets = convert_to_onehot(labels_dir, no_of_features = num_classes)
    targets = tf.constant(targets, dtype = tf.float32)

    Images = [] 
    images = glob.glob(images_file_path)
    i = 0
    for my_img in images:
        image = mpimg.imread(my_img)[:, :, :3]
        image = tf.constant(image, dtype = tf.float32)
        Images.append(image)

    logits, end_points = inception_blocks_v4.inception_v4(inputs = X_input, num_classes = pre_num_classes, is_training = True, create_aux_logits= False)
    pretrained_weights = slim.assign_from_checkpoint_fn(ckpt_dir, slim.get_model_variables('InceptionV4'))
    with tf.Session() as sess:
        pretrained_weights(sess)

    my_layer = slim.fully_connected(
        logits, 560, activation_fn = tf.nn.relu, scope = 'myLayer1',
        weights_initializer = tf.truncated_normal_initializer(stddev = 0.001),
        weights_regularizer = slim.l2_regularizer(0.00005),
        biases_initializer = tf.truncated_normal_initializer(stddev = 0.001),
        biases_regularizer = slim.l2_regularizer(0.00005))
    my_layer = slim.dropout(my_layer, keep_prob = 0.6, scope = 'myLayer2')
    my_layer = slim.fully_connected(
        my_layer, num_classes, activation_fn = tf.nn.relu, scope = 'myLayer3',
        weights_initializer = tf.truncated_normal_initializer(stddev = 0.001),
        weights_regularizer = slim.l2_regularizer(0.00005),
        biases_initializer = tf.truncated_normal_initializer(stddev = 0.001),
        biases_regularizer = slim.l2_regularizer(0.00005))
    my_layer_logits = slim.fully_connected(my_layer, num_classes, activation_fn=None,scope='myLayer4')  
    loss = tf.losses.softmax_cross_entropy(onehot_labels = Y_label, logits = my_layer_logits)  
    optimizer = tf.train.AdamOptimizer(learning_rate=0.0001)
    train_op = slim.learning.create_train_op(loss, optimizer) 
    images, labels = tf.train.batch([Images, targets], batch_size = 8, num_threads = 1, capacity = batch_size, enqueue_many=True)
    tensor_images = tf.convert_to_tensor(images, dtype = tf.float32)
    tensor_labels = tf.convert_to_tensor(labels, dtype = tf.float32)
    with tf.Session() as sess:
        print (tensor_images)
        print (tensor_labels)
    final_loss = slim.learning.train(train_op,logdir = new_ckpt_dir, number_of_steps = 1000, save_summaries_secs=5,log_every_n_steps=50)(feed_dict = {X_input:tensor_images ,Y_label: tensor_labels})  #{X_input:images ,Y_label: labels}

I am trying to pass the correct data tensors into the graph's feed_dict during the training-op step, and printing them gives me the following output.

Tensor("batch:0", shape=(8, 299, 299, 3), dtype=float32, device=/device:CPU:0)
Tensor("batch:1", shape=(8, 120), dtype=float32, device=/device:CPU:0)

But it also produces the following error:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,120]
 [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[?,120], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

1 Answer


The correct way to feed the data produced by tf.train.batch is the following:

Build your model like this:

logits, end_points = inception_blocks_v4.inception_v4(
    inputs = tensor_images, num_classes = pre_num_classes, 
    is_training = True, create_aux_logits= False)
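
Since the network is now built directly on the tensors coming out of tf.train.batch, the X_input and Y_label placeholders (and the feed_dict argument in the training call) can be dropped entirely; slim.learning.train starts the queue runners itself and pulls a fresh batch on every step.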

For your loss you should use:

loss = tf.losses.softmax_cross_entropy(
    onehot_labels = tf.one_hot(tensor_labels, depth = num_classes), logits = my_layer_logits)

Feeding a Tensor into a tf.placeholder is currently not supported.

Note: I am assuming that your tensor_labels are just the label indices.
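
Putting these pieces together, a minimal sketch of the corrected pipeline could look like the one below. It keeps the names from the question (images_file_path, labels_dir, ckpt_dir, new_ckpt_dir, num_classes, pre_num_classes, convert_to_onehot), assumes the import path of the slim models repo and the size of the pretrained head, loads the whole dataset into memory as constants for simplicity, and, since convert_to_onehot already produces one-hot targets, passes them to onehot_labels directly instead of calling tf.one_hot:

import glob
import numpy as np
import tensorflow as tf
import matplotlib.image as mpimg
from nets import inception_v4 as inception_blocks_v4  # assumed import path from the slim models repo

slim = tf.contrib.slim
num_classes = 120
pre_num_classes = 1001  # assumed: head size the checkpoint was trained with

# images_file_path, labels_dir, ckpt_dir, new_ckpt_dir and convert_to_onehot
# come from the setup lines omitted in the question.
image_files = sorted(glob.glob(images_file_path))
image_data = np.stack([mpimg.imread(f)[:, :, :3] for f in image_files]).astype(np.float32)
label_data = convert_to_onehot(labels_dir, no_of_features = num_classes).astype(np.float32)

with slim.arg_scope(inception_blocks_v4.inception_v4_arg_scope()):
    # Queue-based input pipeline: no placeholders, no feed_dict.
    tensor_images, tensor_labels = tf.train.batch(
        [tf.constant(image_data), tf.constant(label_data)],
        batch_size = 8, num_threads = 1, capacity = 32, enqueue_many = True)

    # Build the backbone directly on the batch tensor.
    logits, end_points = inception_blocks_v4.inception_v4(
        inputs = tensor_images, num_classes = pre_num_classes,
        is_training = True, create_aux_logits = False)

    # Custom head, shortened here; the initializers and regularizers from the
    # question can be added back unchanged.
    my_layer = slim.fully_connected(logits, 560, activation_fn = tf.nn.relu, scope = 'myLayer1')
    my_layer = slim.dropout(my_layer, keep_prob = 0.6, scope = 'myLayer2')
    my_layer_logits = slim.fully_connected(my_layer, num_classes, activation_fn = None, scope = 'myLayer4')

    # The targets are already one-hot, so they go straight into the loss.
    loss = tf.losses.softmax_cross_entropy(onehot_labels = tensor_labels, logits = my_layer_logits)
    optimizer = tf.train.AdamOptimizer(learning_rate = 0.0001)
    train_op = slim.learning.create_train_op(loss, optimizer)

    # Restore only the pretrained InceptionV4 variables; slim.learning.train runs
    # init_fn after it creates the session and also starts the queue runners.
    init_fn = slim.assign_from_checkpoint_fn(ckpt_dir, slim.get_model_variables('InceptionV4'))

    final_loss = slim.learning.train(
        train_op, logdir = new_ckpt_dir, init_fn = init_fn,
        number_of_steps = 1000, save_summaries_secs = 5, log_every_n_steps = 50)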

answered 2018-03-14T15:36:29.753