My config has a few parameters, and I run into problems in particular when I change max_len, hidden_size, or embedding_size.
config = {
"max_len": 64,
"hidden_size": 64,
"vocab_size": vocab_size,
"embedding_size": 128,
"n_class": 15,
"learning_rate": 1e-3,
"batch_size": 32,
"train_epoch": 20
}
I get an error:

"ValueError: Cannot feed value of shape (32, 32) for Tensor 'Placeholder:0', which has shape '(?, 64)'"
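If I read the error right, whatever I feed into 'Placeholder:0' has to have its second dimension equal to max_len (64), but my batches apparently come in as (32, 32). A sketch of how I think the batches would need to be padded/truncated to max_len (pad_sequences is just one way to do it; raw_id_sequences is a placeholder name for my list of token-id lists):

from tensorflow.keras.preprocessing.sequence import pad_sequences

# Force every sequence to exactly max_len so a batch is (batch_size, max_len) = (32, 64), not (32, 32)
x_batch = pad_sequences(raw_id_sequences, maxlen=config["max_len"], padding="post", truncating="post")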
The TensorFlow graph below is the part I don't really understand. Is there a way to see how max_len, hidden_size, and embedding_size relate to each other, or which of them I need to set, so that I can avoid the error above?
embeddings_var = tf.Variable(tf.random_uniform([self.vocab_size, self.embedding_size], -1.0, 1.0),
                             trainable=True)
batch_embedded = tf.nn.embedding_lookup(embeddings_var, self.x)

# multi-head attention
ma = multihead_attention(queries=batch_embedded, keys=batch_embedded)
# FFN(x) = LN(x + point-wisely NN(x))
outputs = feedforward(ma, [self.hidden_size, self.embedding_size])
outputs = tf.reshape(outputs, [-1, self.max_len * self.embedding_size])
logits = tf.layers.dense(outputs, units=self.n_class)

self.loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=self.label))
self.prediction = tf.argmax(tf.nn.softmax(logits), 1)

# optimization
loss_to_minimize = self.loss
tvars = tf.trainable_variables()
gradients = tf.gradients(loss_to_minimize, tvars,
                         aggregation_method=tf.AggregationMethod.EXPERIMENTAL_TREE)
grads, global_norm = tf.clip_by_global_norm(gradients, 1.0)

self.global_step = tf.Variable(0, name="global_step", trainable=False)
self.optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate)
self.train_op = self.optimizer.apply_gradients(zip(grads, tvars), global_step=self.global_step,
                                               name='train_step')

print("graph built successfully!")