Suppose I have a rate matrix R that I want to factorize into two matrices U and V using TensorFlow. Without batching it is a simple problem and can be solved with the following code:
import tensorflow as tf
import numpy as np

# define variables (R, R_dim_1, R_dim_2, output_dim, learning_rate and no_epochs are defined earlier)
u = tf.Variable(np.random.rand(R_dim_1, output_dim), dtype=tf.float32, name='u')
v = tf.Variable(np.random.rand(output_dim, R_dim_2), dtype=tf.float32, name='v')
# predict rates by multiplication
predicted_R = tf.matmul(tf.cast(u, tf.float32), tf.cast(v, tf.float32))
# cost function and train step (a single reduce_sum already yields a scalar)
cost = tf.reduce_sum(tf.abs(tf.sub(predicted_R, R)))
train_step = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with tf.Session() as sess:
    init = tf.initialize_all_variables()
    sess.run(init)
    for i in range(no_epochs):
        _, this_cost = sess.run([train_step, cost])
        print 'cost: ', this_cost
I decided to solve it with batch updates: my solution is to feed in the indices of the rows of U and the columns of V that I want to use for predicting the rate matrix R, and to update only those selected parts. Here is my code (please read the comments if it is hard to follow):
# define variables
u = tf.Variable(np.random.rand(R_dim_1, output_dim), dtype=tf.float32, name='u')
v = tf.Variable(np.random.rand(output_dim, R_dim_2), dtype=tf.float32, name='v')
idx1 = tf.placeholder(tf.int32, shape=batch_size1, name='idx1')
idx2 = tf.placeholder(tf.int32, shape=batch_size2, name='idx2')
# get the current U and current V by slicing U and V
cur_u = tf.Variable(tf.gather(u, idx1), dtype=tf.float32, name='cur_u')
cur_v = tf.transpose(v)
cur_v = tf.gather(cur_v, idx2)
cur_v = tf.Variable(tf.transpose(cur_v), dtype=tf.float32, name='cur_v')
# predict rates by multiplication
predicted_R = tf.matmul(tf.cast(cur_u, tf.float32), tf.cast(cur_v, tf.float32))
# slice the needed rates out of the rate matrix
cur_rate = tf.gather(R, idx1)
cur_rate = tf.transpose(cur_rate)
cur_rate = tf.gather(cur_rate, idx2)
cur_rate = tf.transpose(cur_rate)
# cost function and train step
cost = tf.reduce_sum(tf.abs(tf.sub(predicted_R, cur_rate)))
train_step = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with tf.Session() as sess:
    # initialize u and v first, because cur_u and cur_v are initialized from them
    init_new_vars_op = tf.initialize_variables([v, u])
    sess.run(init_new_vars_op)
    init = tf.initialize_all_variables()
    rand_idx1 = np.sort(np.random.randint(0, R_dim_1, batch_size1))
    rand_idx2 = np.sort(np.random.randint(0, R_dim_2, batch_size2))
    sess.run(init, feed_dict={idx1: rand_idx1, idx2: rand_idx2})
    for i in range(no_epochs):
        with tf.Graph().as_default():
            rand_idx1 = np.random.randint(0, R_dim_1, batch_size1)
            rand_idx2 = np.random.randint(0, R_dim_2, batch_size2)
            _, this_cost, tmp_u, tmp_v, tmp_cur_u, tmp_cur_v = sess.run(
                [train_step, cost, u, v, cur_u, cur_v],
                feed_dict={idx1: rand_idx1, idx2: rand_idx2})
            print this_cost
            # update U and V with the computed current U and current V
            tmp_u = np.array(tmp_u)
            tmp_u[rand_idx1] = tmp_cur_u
            u = tf.assign(u, tmp_u)  # a new assign op is created here on every iteration
            tmp_v = np.array(tmp_v)
            tmp_v[:, rand_idx2] = tmp_cur_v
            v = tf.assign(v, tmp_v)  # and another one here
But I have a memory leak at u = tf.assign(u, tmp_u) and v = tf.assign(v, tmp_v): each call adds new ops to the graph, so memory grows with every epoch. I applied this but got nothing.
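If I understood the suggested fix correctly, the idea is to build the assign ops once, outside the loop, and only run them inside it. A minimal sketch of what I tried (new_u_holder, new_v_holder, update_u and update_v are names I made up for illustration):

new_u_holder = tf.placeholder(tf.float32, shape=[R_dim_1, output_dim], name='new_u_holder')
new_v_holder = tf.placeholder(tf.float32, shape=[output_dim, R_dim_2], name='new_v_holder')
update_u = tf.assign(u, new_u_holder)  # created once, before the training loop
update_v = tf.assign(v, new_v_holder)
...
# inside the loop: run the pre-built ops, so nothing new is added to the graph
sess.run([update_u, update_v],
         feed_dict={new_u_holder: tmp_u, new_v_holder: tmp_v})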
There is also another solution that applies the updates only to the selected subset of U and V, something like this, but I ran into many other errors with it, so please stay with my memory leak problem; a rough sketch of that subset-update idea follows.
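For completeness, here is roughly what I mean by the subset update, using tf.scatter_update, which writes only the selected rows of a variable (row_updates, col_updates and v_t are names I invented; since scatter_update works along the first dimension, v would have to be stored transposed for the column case):

row_updates = tf.placeholder(tf.float32, shape=[batch_size1, output_dim], name='row_updates')
update_u_rows = tf.scatter_update(u, idx1, row_updates)

# for the columns of v: store v transposed (shape [R_dim_2, output_dim]) so its columns become rows
v_t = tf.Variable(np.random.rand(R_dim_2, output_dim), dtype=tf.float32, name='v_t')
col_updates = tf.placeholder(tf.float32, shape=[batch_size2, output_dim], name='col_updates')
update_v_rows = tf.scatter_update(v_t, idx2, col_updates)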
Sorry for the long question, and thanks for reading.