Here is the URL of the original Colab notebook:

https://colab.research.google.com/drive/17u-pRZJnKN0gO5XZmq8n5A2bKGrfKEUg#scrollTo=xEuWqzjlPobA

Scroll down to the last cell, under "Now for a quick research example: hypernetworks":
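
(For context, the `Linear` class used in that cell is defined earlier in the notebook. A minimal sketch of such a layer is included here so the cell below is self-contained; it roughly follows the Keras subclassing guide, and details such as initializers may differ slightly from the notebook.)

import tensorflow as tf

class Linear(tf.keras.layers.Layer):
  """A simple dense layer: y = x @ w + b, with weights created in build()."""

  def __init__(self, units=32):
    super(Linear, self).__init__()
    self.units = units

  def build(self, input_shape):
    # Create the kernel and bias once the input dimensionality is known.
    self.w = self.add_weight(shape=(input_shape[-1], self.units),
                             initializer='random_normal', trainable=True)
    self.b = self.add_weight(shape=(self.units,),
                             initializer='zeros', trainable=True)

  def call(self, inputs):
    return tf.matmul(inputs, self.w) + self.b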

input_dim = 784
classes = 10

# The model we'll actually use (the hypernetwork).
outer_model = Linear(classes)

# It doesn't need to create its own weights, so let's mark it as already built.
# That way, calling `outer_model` won't create new variables.
outer_model.built = True

# The model that generates the weights of the model above.
inner_model = Linear(input_dim * classes + classes)

# Loss and optimizer.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)

# Prepare a dataset.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
    (x_train.reshape(60000, 784).astype('float32') / 255, y_train))

# We'll use a batch size of 1 for this experiment.
dataset = dataset.shuffle(buffer_size=1024).batch(1)

losses = []  # Keep track of the losses over time.
for step, (x, y) in enumerate(dataset):
  with tf.GradientTape() as tape:

    # Predict weights for the outer model.
    weights_pred = inner_model(x)

    # Reshape them to the expected shapes for w and b for the outer model.
    w_pred = tf.reshape(weights_pred[:, :-classes], (input_dim, classes))
    b_pred = tf.reshape(weights_pred[:, -classes:], (classes,))

    # Set the weight predictions as the weight variables on the outer model.
    outer_model.w = w_pred
    outer_model.b = b_pred

    # Inference on the outer model.
    preds = outer_model(x)
    loss = loss_fn(y, preds)

  # Train only inner model.
  grads = tape.gradient(loss, inner_model.trainable_weights)
  optimizer.apply_gradients(zip(grads, inner_model.trainable_weights))

  # Logging.
  losses.append(float(loss))
  if step % 100 == 0:
    print(step, sum(losses) / len(losses))

  # Stop after 1000 steps.
  if step >= 1000:
    break

In the training loop, note that:

grads = tape.gradient(loss, inner_model.trainable_weights)

is outside of:

with tf.GradientTape() as tape:

I thought this should be inside? It would be great if someone could confirm that this is correct, and at the same time explain what is going on with the gradient tape.

If you run the notebook, the code does work regardless, since you can see the loss going down as training progresses.

1 Answer

All the examples I've seen have it outside the `with` statement. Note that the tape does not go away outside the `with` statement; exiting the block just calls its `exit` function, which stops recording further operations.
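
As a minimal sketch (a toy example, not taken from the notebook): exiting the `with` block only stops recording; the tape object itself is still alive, so calling `tape.gradient()` afterwards is the normal pattern.

import tensorflow as tf

x = tf.Variable(3.0)

with tf.GradientTape() as tape:
  # Operations on watched variables are recorded while inside the block.
  y = x * x

# The block has exited (its __exit__ ran and recording stopped),
# but the tape still exists and can compute gradients from what it recorded.
dy_dx = tape.gradient(y, x)
print(float(dy_dx))  # 6.0, since dy/dx = 2x at x = 3

A non-persistent tape can only be used for one `gradient()` call; pass `persistent=True` to `tf.GradientTape` if you need several. Calling `gradient()` inside the block also works, but then the gradient computation itself gets recorded, which is only useful when you want higher-order gradients.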

Answered 2019-03-13T21:28:45.423