
I am trying to reproduce the results of the Siamese LSTM for comparing the semantic similarity of two sentences from here: https://github.com/dhwajraj/deep-siamese-text-similarity

I am using TensorFlow 1.4 and Python 2.7.

train.py works fine. To evaluate the model, I created a match_valid.tsv file, which is a subset of the "train_snli.txt" available there. I modified the getTsvTestData function in input_helpers.py:

def getTsvTestData(self, filepath):
        print("Loading testing/labelled data from "+filepath+"\n")
        x1=[]
        x2=[]
        y=[]
        # positive samples from file
        for line in open(filepath):
            l=line.strip().split("\t")
            if len(l)<3:
                continue
            x1.append(l[1].lower()) # second column: sentence 2
            x2.append(l[0].lower()) # first column: sentence 1
            y.append(int(l[2])) # third column: similarity label, 0 or 1
        return np.asarray(x1),np.asarray(x2),np.asarray(y)

I get the error from this part of the code in eval.py:

for db in batches:
            x1_dev_b,x2_dev_b,y_dev_b = zip(*db)
            #x1_dev_b = tf.convert_to_tensor(x1_dev_b,)
            print("type x1_dev_b {}".format(type(x1_dev_b))) # tuple
            print("type x2_dev_b {}".format(type(x2_dev_b))) # tuple
            print("type y_dev_b {}\n".format(type(y_dev_b))) # tuple

            feed = {input_x1: x1_dev_b, 
                    input_x2: x2_dev_b, 
                    input_y:y_dev_b, 
                    dropout_keep_prob: 1.0}

            batch_predictions, batch_acc, sim = sess.run([predictions,accuracy,sim], feed_dict=feed)

            print("type batch_predictions {}".format(type(batch_predictions))) # numpy.ndarray
            print("type batch_acc {}".format(type(batch_acc))) # numpy.float32
            print("type sim {}".format(type(sim))) # numpy.ndarray

            all_predictions = np.concatenate([all_predictions, batch_predictions])

            print("\n printing batch predictions {} \n".format(batch_predictions))

            all_d = np.concatenate([all_d, sim])

            print("DEV acc {} \n".format(batch_acc))

This is the error I get. I tried using print statements around sess.run() to find the types, but that did not help.

Traceback (most recent call last):
  File "eval.py", line 92, in <module>
    batch_predictions, batch_acc, sim = sess.run([predictions,accuracy,sim], feed_dict=feed)
  File "/home/joe/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 889, in run
    run_metadata_ptr)
  File "/home/joe/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1105, in _run
    self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
  File "/home/joe/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 414, in __init__
    self._fetch_mapper = _FetchMapper.for_fetch(fetches)
  File "/home/joe/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 234, in for_fetch
    return _ListFetchMapper(fetch)
  File "/home/joe/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 341, in __init__
    self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
  File "/home/joe/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 242, in for_fetch
    return _ElementFetchMapper(fetches, contraction_fn)
  File "/home/joe/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 275, in __init__
    % (fetch, type(fetch), str(e)))
TypeError: Fetch argument array([ 1.,  1.,  0.,  0.,  0.,  1.,  1.,  0.,  1.,  0.,  0.,  1.,  0.,
        0.,  0.,  1.,  1.,  0.,  0.,  1.,  0.,  0.,  0.,  1.,  0.,  0.,
        0.,  1.,  0.,  1.,  1.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  1.,
        0.,  0.,  1.,  1.,  1.,  0.,  1.,  1.,  0.,  1.,  1.,  1.,  1.,
        1.,  0.,  0.,  0.,  0.,  1.,  0.,  1.,  1.,  0.,  0.,  1.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  1.,  1.,  1.,  1.,  1.,  1.,  0.,
        0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,
        0.,  0.,  1.,  1.,  0.,  0.,  0.,  1.,  1.,  1.,  0.,  0.,  0.,
        0.,  0.,  0.,  1.,  1.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,
        0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        1.,  0.,  0.,  1.,  0.,  0.,  1.,  0.,  1.,  1.,  0.,  1.,  0.,
        0.,  0.,  0.,  0.,  0.,  1.,  1.,  0.,  0.,  1.,  0.,  0.,  0.,
        1.,  1.,  1.,  1.,  0.,  1.,  1.,  0.,  0.,  1.,  0.,  0.,  1.,
        1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  0.,  0.,  0.,  1.,  0.,
        0.,  1.,  0.,  0.,  1.,  0.,  0.,  1.,  1.,  0.,  0.,  1.,  0.,
        0.,  0.,  1.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  1.,  0.,  0.,  1.,  0.,  1.,  1.,  0.,  1.,  0.,  1.,  0.,
        0.,  0.,  0.,  1.,  0.,  0.,  0.,  1.,  0.,  1.,  0.,  0.,  1.,
        1.,  0.,  0.,  1.,  0.,  1.,  0.,  0.,  0.], dtype=float32) has invalid type <type 'numpy.ndarray'>, must be a string or Tensor. (Can not convert a ndarray into a Tensor or Operation.)

What I am actually trying to do is query similarity: compare a query vector against all the document vectors in my corpus and rank the sentences by similarity score. I know that at the moment the LSTM just compares two sentences to each other and outputs the similarity as 0 or 1. How can I do this?
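
What I have in mind for the ranking step is roughly the sketch below (rank_by_similarity, query_vec and doc_vecs are illustrative names only, not from the repository; getting a graded score or sentence vector out of the Siamese LSTM instead of a 0/1 label is exactly the open part):

import numpy as np

def rank_by_similarity(query_vec, doc_vecs):
    # query_vec: vector of shape (d,) for the query sentence.
    # doc_vecs: matrix of shape (n_docs, d), one row per corpus sentence.
    doc_norms = np.linalg.norm(doc_vecs, axis=1)
    query_norm = np.linalg.norm(query_vec)
    scores = doc_vecs.dot(query_vec) / (doc_norms * query_norm + 1e-8)  # cosine similarity
    order = np.argsort(-scores)  # document indices, most similar first
    return order, scores[order]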


1 Answer


The problem is that you are overwriting the value of sim, which (I assume) initially holds a reference to a TensorFlow tensor or operation, with the result of evaluating it (a NumPy array). The second iteration therefore fails, because sim is no longer a TensorFlow tensor or operation.
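
To see why, here is a standalone illustration of the same pattern (a minimal TensorFlow 1.x sketch, not code taken from eval.py):

import tensorflow as tf

sim = tf.constant([0.0, 1.0])  # `sim` starts out as a reference to a graph tensor
with tf.Session() as sess:
    sim = sess.run(sim)  # works once, but `sim` is now a NumPy array
    sim = sess.run(sim)  # TypeError: Fetch argument ... has invalid type <type 'numpy.ndarray'>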

You could try something like this, keeping the fetched result in a separate variable (batch_sim) so sim keeps pointing at the graph tensor:

for db in batches:
            x1_dev_b,x2_dev_b,y_dev_b = zip(*db)
            #x1_dev_b = tf.convert_to_tensor(x1_dev_b,)
            print("type x1_dev_b {}".format(type(x1_dev_b))) # tuple
            print("type x2_dev_b {}".format(type(x2_dev_b))) # tuple
            print("type y_dev_b {}\n".format(type(y_dev_b))) # tuple

            feed = {input_x1: x1_dev_b, 
                    input_x2: x2_dev_b, 
                    input_y:y_dev_b, 
                    dropout_keep_prob: 1.0}

            batch_predictions, batch_acc, batch_sim = sess.run([predictions,accuracy,sim], feed_dict=feed)

            print("type batch_predictions {}".format(type(batch_predictions))) # numpy.ndarray
            print("type batch_acc {}".format(type(batch_acc))) # numpy.float32
            print("type batch_sim {}".format(type(batch_sim))) # numpy.ndarray

            all_predictions = np.concatenate([all_predictions, batch_predictions])

            print("\n printing batch predictions {} \n".format(batch_predictions))

            all_d = np.concatenate([all_d, batch_sim])

            print("DEV acc {} \n".format(batch_acc))