
I am trying to implement the "Dilated Residual Networks" described in this paper (the reference implementation there is in PyTorch) in TensorFlow, in order to train it on the CityScapes dataset and use it for semantic image segmentation. Unfortunately, I run into an error when trying to train, and I can't seem to find a way around it.

Since this type of network can be seen as an extension of ResNet, I started from the official TensorFlow ResNet model (link) and modified the architecture by changing the strides, adding dilations (as a parameter to the tf.layers.conv2d function), and removing the residual connections.

To train this network, I want to use the same approach as in the TensorFlow ResNet model: tf.estimator together with an input_fn (which can be found at the end of this post).

Now, when I try to train this network on the CityScapes dataset, I get the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-19-263240bbee7e> in <module>()
----> 1 main()

<ipython-input-16-b57cd9b52bc7> in main()
     27         print('Starting a training cycle.')
     28         drn_classifier.train(
---> 29             input_fn=lambda: input_fn(True, _BATCH_SIZE, _EPOCHS_PER_EVAL),hooks=[logging_hook])
     30 
     31         print(2)

~\Anaconda3\envs\master-thesis\lib\site-packages\tensorflow\python\estimator\estimator.py in train(self, input_fn, hooks, steps, max_steps, saving_listeners)
    300 
    301     saving_listeners = _check_listeners_type(saving_listeners)
--> 302     loss = self._train_model(input_fn, hooks, saving_listeners)
    303     logging.info('Loss for final step: %s.', loss)
    304     return self

~\Anaconda3\envs\master-thesis\lib\site-packages\tensorflow\python\estimator\estimator.py in _train_model(self, input_fn, hooks, saving_listeners)
    709       with ops.control_dependencies([global_step_read_tensor]):
    710         estimator_spec = self._call_model_fn(
--> 711             features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
    712       # Check if the user created a loss summary, and add one if they didn't.
    713       # We assume here that the summary is called 'loss'. If it is not, we will

~\Anaconda3\envs\master-thesis\lib\site-packages\tensorflow\python\estimator\estimator.py in _call_model_fn(self, features, labels, mode, config)
    692     if 'config' in model_fn_args:
    693       kwargs['config'] = config
--> 694     model_fn_results = self._model_fn(features=features, **kwargs)
    695 
    696     if not isinstance(model_fn_results, model_fn_lib.EstimatorSpec):

<ipython-input-15-797249462151> in drn_model_fn(features, labels, mode, params)
      7         params['arch'], params['size'], _LABEL_CLASSES, params['data_format'])
      8     print(4)
----> 9     logits = network(inputs=features, is_training=(mode == tf.estimator.ModeKeys.TRAIN))
     10     print(12)
     11     predictions = {

\Code\Semantic Image Segmentation\drn.py in model(inputs, is_training)
    255             print(16)
    256         inputs = conv2d_fixed_padding(
--> 257             inputs=inputs, filters=16, kernel_size=7, strides=2,
    258             data_format=data_format,dilation_rate=1)
    259                 print(17)

\Code\Semantic Image Segmentation\drn.py in conv2d_fixed_padding(inputs, filters, kernel_size, strides, data_format, dilation_rate)
     90       kernel_initializer=tf.variance_scaling_initializer(),
     91       data_format=data_format,
---> 92       dilation_rate=dilation_rate)
     93 
     94 

~\Anaconda3\envs\master-thesis\lib\site-packages\tensorflow\python\layers\convolutional.py in conv2d(inputs, filters, kernel_size, strides, padding, data_format, dilation_rate, activation, use_bias, kernel_initializer, bias_initializer, kernel_regularizer, bias_regularizer, activity_regularizer, kernel_constraint, bias_constraint, trainable, name, reuse)
    606       _reuse=reuse,
    607       _scope=name)
--> 608   return layer.apply(inputs)
    609 
    610 

~\Anaconda3\envs\master-thesis\lib\site-packages\tensorflow\python\layers\base.py in apply(self, inputs, *args, **kwargs)
    669       Output tensor(s).
    670     """
--> 671     return self.__call__(inputs, *args, **kwargs)
    672 
    673   def _add_inbound_node(self,

~\Anaconda3\envs\master-thesis\lib\site-packages\tensorflow\python\layers\base.py in __call__(self, inputs, *args, **kwargs)
    557           input_shapes = [x.get_shape() for x in input_list]
    558           if len(input_shapes) == 1:
--> 559             self.build(input_shapes[0])
    560           else:
    561             self.build(input_shapes)

~\Anaconda3\envs\master-thesis\lib\site-packages\tensorflow\python\layers\convolutional.py in build(self, input_shape)
    130       channel_axis = -1
    131     if input_shape[channel_axis].value is None:
--> 132       raise ValueError('The channel dimension of the inputs '
    133                        'should be defined. Found `None`.')
    134     input_dim = input_shape[channel_axis].value

ValueError: The channel dimension of the inputs should be defined. Found `None`.

I have already searched the web for this error, but only found posts related to Keras, where the backend was not initialized correctly (see this).

I would be glad if someone could point me in the right direction for finding the error.

Here is my input_fn:

def input_fn(is_training, batch_size, num_epochs=1):
    """Input function which provides batches for train or eval."""
    # Get list of paths belonging to training images and corresponding label images
    filename_list = filenames(is_training)
    filenames_train = []
    filenames_labels = []
    for i in range(len(filename_list)):
        filenames_train.append(train_dataset_dir+filename_list[i])
        filenames_labels.append(gt_dataset_dir+filename_list[i])


    filenames_train = tf.convert_to_tensor(tf.constant(filenames_train, dtype=tf.string))
    filenames_labels = tf.convert_to_tensor(tf.constant(filenames_labels, dtype=tf.string))

    dataset = tf.data.Dataset.from_tensor_slices((filenames_train,filenames_labels))

    if is_training:
        dataset = dataset.shuffle(buffer_size=_FILE_SHUFFLE_BUFFER)

    dataset = dataset.map(image_parser)
    dataset = dataset.prefetch(batch_size)

    if is_training:
        # When choosing shuffle buffer sizes, larger sizes result in better
        # randomness, while smaller sizes have better performance.
        dataset = dataset.shuffle(buffer_size=_SHUFFLE_BUFFER)

    # We call repeat after shuffling, rather than before, to prevent separate
    # epochs from blending together.
    dataset = dataset.repeat(num_epochs)
    dataset = dataset.batch(batch_size)

    iterator = dataset.make_one_shot_iterator()
    images, labels = iterator.get_next()
    return images, labels

Here is the image_parser function used in the input_fn:

def image_parser(filename, label): 
    image_string = tf.read_file(filename)
    image_decoded = tf.image.decode_image(image_string,_NUM_CHANNELS)  
    image_decoded = tf.image.convert_image_dtype(image_decoded, dtype=tf.float32)
    label_string = tf.read_file(label)
    label_decoded = tf.image.decode_image(label)
    return image_decoded, tf.one_hot(label_decoded, _LABEL_CLASSES)

2 Answers


Try this after tf.read_file:

image_decoded = tf.image.decode_image(image_string, channels=3)
image_decoded.set_shape([None, None, 3])
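
To see why this helps: tf.image.decode_image can also produce a 4-D tensor (for animated GIFs), so in graph mode TensorFlow leaves the static shape completely unknown, and conv2d then fails to build because the channel dimension is None. Calling set_shape asserts the missing static shape. A minimal sketch of the difference, written against the TF 2.x tf.function API purely for illustration (the question itself uses TF 1.x graph mode):

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec([], tf.string)])
def parse_plain(contents):
    # decode_image may also decode animated GIFs into a 4-D tensor,
    # so TensorFlow cannot infer a static rank or channel dimension here.
    return tf.image.decode_image(contents, channels=3)

@tf.function(input_signature=[tf.TensorSpec([], tf.string)])
def parse_fixed(contents):
    image = tf.image.decode_image(contents, channels=3)
    # Assert the static shape that TensorFlow could not infer on its own.
    image.set_shape([None, None, 3])
    return image

print(parse_plain.get_concrete_function().output_shapes)  # <unknown>
print(parse_fixed.get_concrete_function().output_shapes)  # (None, None, 3)
```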
Answered 2018-04-27T13:26:58.983

The problem lies in tf.image.decode_image. Somehow it does not set the number of channels, even though you pass it in.

If you know what type of images your dataset contains, replace tf.image.decode_image with the appropriate decoder, such as tf.image.decode_png.
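
For instance, since CityScapes ships its images as PNGs, the parser could decode them explicitly. A minimal sketch (again using the TF 2.x tf.function API for illustration, not the TF 1.x graph mode of the question) showing that decode_png yields a statically known channel dimension:

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec([], tf.string)])
def parse_png(contents):
    # decode_png always produces a 3-D tensor, so with channels=3 the
    # channel dimension is statically known and conv2d can build.
    return tf.image.decode_png(contents, channels=3)

print(parse_png.get_concrete_function().output_shapes)  # (None, None, 3)
```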

Answered 2018-02-05T08:03:57.790