
While trying to train a word2vec model, I got stuck loading values into the feed_dict. The error message is:

ValueError                                Traceback (most recent call last)
<ipython-input-31-eba8f8f5ab96> in <module>()
----> 1 model.train_word2vec()

<ipython-input-28-d20feabd3b23> in train_word2vec(self)
     47                 target_word = batch[:,0]
     48                 loss_get,_ = sess.run([loss,optimizer],feed_dict={center_words:center_word,
---> 49                                                               target_words:target_word})
     50                 average_loss+=loss_get
     51 

/Users/mac/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    765     try:
    766       result = self._run(None, fetches, feed_dict, options_ptr,
--> 767                          run_metadata_ptr)
    768       if run_metadata:
    769         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/Users/mac/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    936                 ' to a larger type (e.g. int64).')
    937 
--> 938           np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
    939 
    940           if not subfeed_t.get_shape().is_compatible_with(np_val.shape):

/Users/mac/anaconda3/lib/python3.6/site-packages/numpy/core/numeric.py in asarray(a, dtype, order)
    529 
    530     """
--> 531     return array(a, dtype, copy=False, order=order)
    532 
    533 

ValueError: setting an array element with a sequence.

Here is my model code:

center_words = tf.placeholder(dtype=tf.int32,shape=[self.batch_size],name="center_words")
target_words = tf.placeholder(dtype=tf.int32,shape=[self.batch_size,1],name="target_words")

...

with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            for i in range(self.training_steps):
                batch = next(batch_gen)
                center_word = batch[:,1]
                target_word = batch[:,0]
                loss_get,_ = sess.run([loss,optimizer],feed_dict={center_words:center_word,
                                                              target_words:target_word})
                average_loss+=loss_get

Here is a batch I generated with batch_size=8, just for demonstration purposes:

gen=gen_batch(batchesX,batchesY,batch_size=8)


batch=next(gen)


batch[:,0]


#target words

array([array([-1, -1, -1,  1,  2,  3], dtype=int32),
       array([-1, -1, -1,  2,  3,  4], dtype=int32),
       array([-1, -1, -1,  3,  4,  5], dtype=int32),
       array([0, 1, 2, 4, 5, 6], dtype=int32),
       array([1, 2, 3, 5, 6, 7], dtype=int32),
       array([2, 3, 4, 6, 7, 0], dtype=int32),
       array([3, 4, 5, 7, 0, 8], dtype=int32),
       array([4, 5, 6, 0, 8, 9], dtype=int32)], dtype=object)

batch[:,1]


#center words:
array([0, 1, 2, 3, 4, 5, 6, 7], dtype=object)

From what I can gather from the array shapes, center_words and target_words both have a consistent shape of (batch_size,). My guess is that it has something to do with the dtype=object part, but I'm not sure. Any suggestions would be appreciated.
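For what it's worth, the error can be reproduced outside of TensorFlow. Below is a minimal sketch with made-up values shaped like batch[:,0] above: session.run eventually calls np.asarray(subfeed_val, dtype=subfeed_dtype) (see the traceback), and that call fails on an object-dtype array whose elements are themselves arrays.

import numpy as np

# hypothetical data, same layout as batch[:,0] above
ragged = np.empty(2, dtype=object)
ragged[0] = np.array([-1, -1, -1, 1, 2, 3], dtype=np.int32)
ragged[1] = np.array([-1, -1, -1, 2, 3, 4], dtype=np.int32)

# this is essentially what the feed_dict machinery does internally
np.asarray(ragged, dtype=np.int32)
# ValueError: setting an array element with a sequence.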

The gen_batch code:

def gen_batch(batchesX,batchesY,batch_size=256):

    '''Batch generator in order to save some computation time'''

    batches=generate_empty_2D_batch_array()
    for batch in zip(batchesX,batchesY):
        for i in range(len(batch[0])):
            X_sample = batch[0][i] 
            Y_sample = batch[1][i]
            one_batch = np.array([[X_sample,Y_sample]])
            batches=np.append(batches,one_batch,axis=0)
            if len(batches)==batch_size:
                yield batches
                batches=generate_empty_2D_batch_array()

The code for generate_empty_2D_batch_array:

def generate_empty_2D_batch_array():
    ''' Name of function is self-explanatory'''

    arr = np.array([],dtype=np.int32)
    arr = arr.reshape(-1,2)
    return arr

1 Answer


In any case, I realized I should use a different batch layout, so I changed it to (input, output) pairs where both are one-dimensional arrays. That is what ended up working for me.
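For reference, here is a minimal sketch of what such a generator might look like (hypothetical names, not the original code), assuming each training example is a single (center, target) index pair so that the batches match placeholders of shape [batch_size] and [batch_size, 1]:

import numpy as np

def gen_pair_batch(centers, targets, batch_size=256):
    '''Yield (center_batch, target_batch) where both are plain int32 arrays:
    center_batch has shape (batch_size,), target_batch has shape (batch_size, 1).'''
    centers = np.asarray(centers, dtype=np.int32)
    targets = np.asarray(targets, dtype=np.int32)
    for start in range(0, len(centers) - batch_size + 1, batch_size):
        center_batch = centers[start:start + batch_size]
        target_batch = targets[start:start + batch_size].reshape(-1, 1)
        yield center_batch, target_batch

# feed_dict then receives homogeneous int32 arrays:
# center_batch, target_batch = next(gen)
# sess.run([loss, optimizer], feed_dict={center_words: center_batch,
#                                        target_words: target_batch})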

Answered on 2017-08-30T14:02:45.400