
I have a rank-3 tensor and another rank-2 tensor, and I want to contract them with tf.tensordot, but it gives me this error..??

I am using TensorFlow 0.12.0 and am importing math_ops.

Can anyone help, please?
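One likely cause (an assumption, since the error text itself isn't shown): tf.tensordot only appeared around TensorFlow 1.0, so on 0.12.0 it is either missing from math_ops or incomplete. The same rank-3 × rank-2 contraction can be written with a reshape plus a plain matmul, both of which exist in 0.12. A minimal NumPy sketch of the equivalence (NumPy used only so the math is easy to check; it maps one-to-one onto tf.reshape / tf.matmul):

```python
import numpy as np

B, T, D, A = 2, 5, 4, 3
inputs = np.random.rand(B, T, D)
w = np.random.rand(D, A)

# tensordot with axes=1 contracts the last axis of `inputs`
# with the first axis of `w`: (B,T,D) x (D,A) -> (B,T,A)
expected = np.tensordot(inputs, w, axes=1)

# Equivalent reshape + matmul, which translates directly to
# tf.reshape / tf.matmul on TensorFlow 0.12:
flat = inputs.reshape(B * T, D)       # (B*T, D)
result = (flat @ w).reshape(B, T, A)  # (B, T, A)

assert np.allclose(expected, result)
```

In the attention code below, this replaces `tf.tensordot(inputs, w_omega, axes=1)`.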

def self_attention(inputs, attention_size):
    hidden_size = 1200
    w_omega = tf.Variable(tf.random_normal([hidden_size, attention_size], stddev=0.1))
    b_omega = tf.Variable(tf.random_normal([attention_size], stddev=0.1))
    u_omega = tf.Variable(tf.random_normal([attention_size], stddev=0.1))

    with tf.name_scope('v'):
        # Applying fully connected layer with non-linear activation to each of the B*T timestamps;
        # the shape of `v` is (B,T,D)*(D,A)=(B,T,A), where A=attention_size
        v = tf.tanh(tf.tensordot(inputs, w_omega, axes=1) + b_omega)

    # For each of the timestamps, its vector of size A from `v` is reduced with the `u` vector
    vu = tf.tensordot(v, u_omega, axes=1, name='vu')  # (B,T) shape

    alphas = tf.nn.softmax(vu, name='alphas')         # (B,T) shape

    # Output of (Bi-)RNN is reduced with the attention vector; the result has (B,D) shape
    m = inputs * tf.expand_dims(alphas, -1)

    # m has shape [-1, 100, 1200]; we then sum-pool (or avg-pool) it to create a
    # [-1, 1200] vector that represents the sentence
    output = tf.reduce_sum(m, 1)
    return output, alphas
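The second tensordot call, `tf.tensordot(v, u_omega, axes=1)`, contracts a rank-3 tensor with a rank-1 vector, and it too has a simple 0.12-compatible form: broadcast-multiply and sum over the last axis, i.e. `tf.reduce_sum(v * u_omega, 2)`. A NumPy sketch checking that these agree (again, NumPy stands in for the TF ops purely for verification):

```python
import numpy as np

B, T, A = 2, 5, 3
v = np.random.rand(B, T, A)
u = np.random.rand(A)

# tensordot with axes=1: (B,T,A) x (A,) -> (B,T)
expected = np.tensordot(v, u, axes=1)

# Equivalent without tensordot: broadcast-multiply, then sum over
# the A axis -- tf.reduce_sum(v * u_omega, 2) on TensorFlow 0.12
result = (v * u).sum(axis=2)

assert np.allclose(expected, result)
```

With both tensordot calls rewritten this way, the function should run on 0.12.0 without needing tf.tensordot at all.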
