
I am trying to implement Word2Vec CBOW with negative sampling in Keras, following the code found here:

import numpy as np
from keras import backend as K
from keras.layers import Input, Embedding, Lambda, merge
from keras.models import Model

# SentencesIterator and VocabGenerator come from the code linked above
EMBEDDING_DIM = 100

sentences = SentencesIterator('test_file.txt')
v_gen = VocabGenerator(sentences=sentences, min_count=5, window_size=3,
                       sample_threshold=-1, negative=5)

v_gen.scan_vocab()
v_gen.filter_vocabulary()
reverse_vocab = v_gen.generate_inverse_vocabulary_lookup('test_lookup')

# Generate embedding matrix with all values between -1/2d, 1/2d
embedding = np.random.uniform(-1.0 / (2 * EMBEDDING_DIM),
                              1.0 / (2 * EMBEDDING_DIM),
                              (v_gen.vocab_size + 3, EMBEDDING_DIM))

# Creating CBOW model
# Model has 3 inputs
# current word index, context word indexes, and negative-sample word indexes
word_index = Input(shape=(1,))
context = Input(shape=(2*v_gen.window_size,))
negative_samples = Input(shape=(v_gen.negative,))

# All inputs are processed through a common embedding layer
shared_embedding_layer = (Embedding(input_dim=(v_gen.vocab_size + 3),
                                    output_dim=EMBEDDING_DIM,
                                    weights=[embedding]))

word_embedding = shared_embedding_layer(word_index)
context_embeddings = shared_embedding_layer(context)
negative_words_embedding = shared_embedding_layer(negative_samples)

# Now the context words are averaged to get the CBOW vector
cbow = Lambda(lambda x: K.mean(x, axis=1),
              output_shape=(EMBEDDING_DIM,))(context_embeddings)

# Context is multiplied (dot product) with current word and negative
# sampled words
word_context_product = merge([word_embedding, cbow], mode='dot')
negative_context_product = merge([negative_words_embedding, cbow],
                                 mode='dot',
                                 concat_axis=-1)

# The dot products are the model outputs
model = Model(input=[word_index, context, negative_samples],
              output=[word_context_product, negative_context_product])

# Binary crossentropy is applied on the output
model.compile(optimizer='rmsprop', loss='binary_crossentropy')
print(model.summary())

model.fit_generator(v_gen.pretraining_batch_generator(reverse_vocab),
                    samples_per_epoch=10,
                    nb_epoch=1)

However, I get an error at the merge step, because the embedding layer's output is a 3D tensor while cbow is only 2D. I assume I need to reshape the embedding (which is [?, 1, 100]) to [1, 100], but I can't find how to do that with the functional API. I am using the TensorFlow backend.

Also, if anyone can point me to another implementation of CBOW with Keras (Gensim-free), I'd love to take a look at it!

Thanks!

Edit: here is the error

Traceback (most recent call last):
  File "cbow.py", line 48, in <module>
    word_context_product = merge([word_embedding, cbow], mode='dot')
    .
    .
    .
ValueError: Shape must be rank 2 but is rank 3 for 'MatMul' (op: 'MatMul') with input shapes: [?,1,100], [?,100].

1 Answer

ValueError: Shape must be rank 2 but is rank 3 for 'MatMul' (op: 'MatMul') with input shapes: [?,1,100], [?,100].

Indeed, you need to reshape the word_embedding tensor. Two ways to do it:

  • You can use the Reshape() layer, imported from keras.layers.core, like this:

    word_embedding = Reshape((100,))(word_embedding)
    

    The argument to Reshape() is a tuple with the target shape.

  • Or you can use the Flatten() layer, also imported from keras.layers.core, like this:

    word_embedding = Flatten()(word_embedding)
    

    It takes nothing as an argument; it just removes the "empty" dimensions. (Both approaches are compared in the sketch below.)
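
For instance, a minimal shape check of both approaches (a standalone sketch, not from the original post; the vocabulary size of 1000 is made up, and the Keras 1.x functional API with the TensorFlow backend is assumed):

from keras import backend as K
from keras.layers import Input, Embedding, Reshape, Flatten

x = Input(shape=(1,))
emb = Embedding(input_dim=1000, output_dim=100)(x)  # hypothetical vocab size
print(K.int_shape(emb))                              # (None, 1, 100)
print(K.int_shape(Reshape((100,))(emb)))             # (None, 100)
print(K.int_shape(Flatten()(emb)))                   # (None, 100)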

Does that help you?

Edit:

Indeed, the second merge() is a bit trickier. A dot merge in Keras only accepts tensors of the same rank, i.e. the same len(shape). So what you want to do is use a Reshape() layer to add back that one empty dimension, and then use the dot_axes argument instead of concat_axis, which is irrelevant to a dot merge. This is the solution I suggest:

word_embedding = shared_embedding_layer(word_index)
# Shape output = (None,1,emb_size)
context_embeddings = shared_embedding_layer(context)
# Shape output = (None, 2*window_size, emb_size)
negative_words_embedding = shared_embedding_layer(negative_samples)
# Shape output = (None, negative, emb_size)

# Now the context words are averaged to get the CBOW vector
cbow = Lambda(lambda x: K.mean(x, axis=1),
              output_shape=(EMBEDDING_DIM,))(context_embeddings)
# Shape output = (None, emb_size)
cbow = Reshape((1, EMBEDDING_DIM))(cbow)
# Shape output = (None, 1, emb_size)

# Context is multiplied (dot product) with current word and negative
# sampled words
word_context_product = merge([word_embedding, cbow], mode='dot')
# Shape output = (None, 1, 1)
word_context_product = Flatten()(word_context_product)
# Shape output = (None,1)
negative_context_product = merge([negative_words_embedding, cbow], mode='dot',dot_axes=[2,2])
# Shape output = (None, negative, 1)
negative_context_product = Flatten()(negative_context_product)
# Shape output = (None, negative)
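
As a side note on training: with this setup, the generator is expected to yield a target of 1 for the true word's dot product and 0 for each negative sample, which is what the two binary crossentropy losses are applied to. A hypothetical single-example batch (all indexes are made up, with window_size=3 and negative=5 as above):

import numpy as np

# Hypothetical batch for illustration only
inputs = [np.array([[42]]),                  # word_index: shape (1, 1)
          np.array([[3, 7, 1, 9, 12, 5]]),   # context: shape (1, 2*window_size)
          np.array([[88, 4, 61, 23, 70]])]   # negative_samples: shape (1, negative)
targets = [np.array([[1.]]),                 # word_context_product -> 1 (true pair)
           np.array([[0., 0., 0., 0., 0.]])] # negative_context_product -> all 0s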

Is it working? :)

The problem comes from TF's rigidity regarding matrix multiplication. A merge with the "dot" mode calls the backend batch_dot() function and, unlike Theano, TensorFlow requires the matrices to have the same rank: read here.
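
To make the rank constraint concrete, here is a minimal backend-level sketch (my own illustration, assuming the TensorFlow backend; the shapes mirror the model above):

from keras import backend as K

a = K.placeholder(shape=(None, 1, 100))   # rank 3, like word_embedding
b = K.placeholder(shape=(None, 100))      # rank 2, like cbow before Reshape
# K.batch_dot(a, b)                       # fails on TF: ranks differ
b3 = K.reshape(b, (-1, 1, 100))           # add the empty dimension back
out = K.batch_dot(a, b3, axes=[2, 2])     # OK: shape (None, 1, 1)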

Answered 2017-02-28T20:38:59.100