
I'm trying to train on Google Colab's free TPU, and I've built a dataset with tf.data. My y_label is label-encoded data with 7 labels. I get this error:

InvalidArgumentError: Can not squeeze dim[1], expected a dimension of 1, got 7 for 'tpu_140081340653808/metrics/metrics/sparse_categorical_accuracy/remove_squeezable_dimensions/Squeeze' (op: 'Squeeze') with input shapes: [1024,7].

How I load my data:

def preprocess_image(image):
    image = tf.image.decode_jpeg(image,channels = 3)
    image = tf.image.convert_image_dtype(image, tf.float32)
    image = tf.image.resize_images(image,[135,180])
    image /= 255.0
    return image
def load_and_preprocess_image(path,label):
    image = tf.read_file(path)
    return preprocess_image(image),label

def label_encode(dataset):
  le = LabelEncoder()
  dataset['encoded'] = le.fit_transform(dataset['dx'])
  return dataset

def load_dataset(image_paths,image_labels):
  label_dataset = tf.cast(image_labels, tf.int32)
  path_ds = tf.data.Dataset.from_tensor_slices((image_paths,label_dataset))
  ds = path_ds.map(load_and_preprocess_image,tf.data.experimental.AUTOTUNE)

  return ds

def get_training_dataset(image_file, label_file, batch_size):


    dataset = load_dataset(image_file, label_file)
    #dataset = dataset.cache()  # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
    #dataset = dataset.shuffle(buffer_size=image_count)

    dataset = dataset.repeat() # Mandatory for Keras for now
    dataset = dataset.batch(batch_size,drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed
    dataset = dataset.prefetch(buffer_size=AUTOTUNE)  # fetch next batches while training on the current one
    return dataset

training_dataset = get_training_dataset(train_image_paths, train_image_labels, BATCH_SIZE)


# For TPU, we will need a function that returns the dataset with batches
training_input_fn = lambda: get_training_dataset(train_image_paths, train_image_labels, BATCH_SIZE)

My model:

def create_res(input_sp):
  resnet = ResNet50(input_shape=input_sp,include_top=False,weights='imagenet')
  resnet.trainable=False
  return resnet

def create_seq_model(input_shape):
  tf.keras.backend.clear_session()

  resnet = create_res(input_shape)
  model = Sequential()
  model.add(resnet)
  model.add(GlobalAveragePooling2D())
  model.add(Dense(1024,activation= 'relu'))
  model.add(Dense(7,activation='softmax')) 

  return model

This is where I create my TPU model and compile it for training; after I run it, I get the error above as soon as epoch 1 starts:

strategy = tf.contrib.tpu.TPUDistributionStrategy(tpu)
trained_model = tf.contrib.tpu.keras_to_tpu_model(model, strategy=strategy)
trained_model.compile(optimizer=tf.train.AdagradOptimizer(learning_rate=0.1),
                      loss='sparse_categorical_crossentropy',
                      metrics=['sparse_categorical_accuracy'])
# Work in progress: reading directly from dataset object not yet implemented
# for Keras/TPU. Keras/TPU needs a function that returns a dataset.
history = trained_model.fit(training_input_fn, steps_per_epoch=10, epochs=EPOCHS)
INFO:tensorflow:Querying Tensorflow master (grpc://10.34.91.42:8470) for TPU system metadata.
INFO:tensorflow:Found TPU system:
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, -1, 5096825871840033721)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 4168719798427690218)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 12924042521108751459)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 2745039220817617241)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, 3340897553582653661)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 5742351359072887449)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 8474216619759453218)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, 10296052414400763019)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 5559949278042991869)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 13163336187739408258)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 17179869184, 4869688774298217560)
WARNING:tensorflow:tpu_model (from tensorflow.contrib.tpu.python.tpu.keras_support) is experimental and may change or be removed at any time, and without warning.
Epoch 1/5
INFO:tensorflow:New input shapes; (re-)compiling: mode=train (# of cores 8), [TensorSpec(shape=(128,), dtype=tf.int32, name=None), TensorSpec(shape=(128, 135, 180, 3), dtype=tf.float32, name=None), TensorSpec(shape=(128,), dtype=tf.int32, name=None)]
INFO:tensorflow:Overriding default placeholder.
INFO:tensorflow:Remapping placeholder for resnet50_input
INFO:tensorflow:Remapping placeholder for input_1
INFO:tensorflow:Default: input_1
ERROR:tensorflow:Operation of type Placeholder (tpu_140081340653808/input_1) is not supported on the TPU. Execution will fail if this op is used in the graph. 
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
   1658   try:
-> 1659     c_op = c_api.TF_FinishOperation(op_desc)
   1660   except errors.InvalidArgumentError as e:

InvalidArgumentError: Can not squeeze dim[1], expected a dimension of 1, got 7 for 'tpu_140081340653808/metrics/metrics/sparse_categorical_accuracy/remove_squeezable_dimensions/Squeeze' (op: 'Squeeze') with input shapes: [1024,7].


1 Answer


For sparse_categorical_accuracy, your labels must be integer labels, i.e. of shape (batch_size, 1) (or just (batch_size,)). From your error message, it looks like the labels reaching sparse_categorical_accuracy are one-hot encoded, i.e. of shape (batch_size, 7).

You can see this from the implementation:

# If the shape of y_true is (num_samples, 1), squeeze to (num_samples,)
if (len(K.int_shape(y_true)) == len(K.int_shape(y_pred))):
    y_true = array_ops.squeeze(y_true, [-1])
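
To make that concrete, here is a minimal sketch of the two label layouts (my own illustration, not taken from your code; 1024 and 7 are just the batch size and class count from the error message):

import numpy as np

batch_size, num_classes = 1024, 7

# What the sparse_* loss/metric expects: one integer class id per example.
sparse_labels = np.random.randint(0, num_classes, size=(batch_size,))   # shape (1024,)

# What the error message suggests is actually arriving: one-hot rows.
one_hot_labels = np.eye(num_classes)[sparse_labels]                     # shape (1024, 7)

# The metric squeezes the trailing axis of y_true, which only works when that
# axis is 1 (or absent); with 7 it fails exactly as in your traceback.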

It is hard to tell from your code exactly how the dataset reaches the training step, but it seems the label-encoded version stored in dataset['encoded'] is not what ends up being used when the accuracy is computed.
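
If that is what is happening, two fixes come to mind (a sketch only, reusing the model and optimizer from your question; adapt it to your pipeline):

import tensorflow as tf

# Option 1: feed integer labels (shape (batch_size,), e.g. the values from
# dataset['encoded']) and keep the sparse_* pair you already use:
model.compile(optimizer=tf.train.AdagradOptimizer(learning_rate=0.1),
              loss='sparse_categorical_crossentropy',
              metrics=['sparse_categorical_accuracy'])

# Option 2: if the labels really are one-hot (shape (batch_size, 7)),
# switch to the non-sparse pair instead:
model.compile(optimizer=tf.train.AdagradOptimizer(learning_rate=0.1),
              loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])

# Either way, the label tensor shape and the loss/metric pair have to agree.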

answered 2019-04-11T14:31:42.160