
I am getting the following warning:

94: UserWarning: Converting sparse IndexedSlices to a dense Tensor with 1200012120 elements. This may consume a large amount of memory.

for the following code:

from wordbatch.extractors import WordSeq
import wordbatch
from keras.layers import Input,Embedding
...
wb = wordbatch.WordBatch(normalize_text, extractor=(WordSeq, {"seq_maxlen": MAX_NAME_SEQ}), procs=NUM_PROCESSOR)
wb.dictionary_freeze = True
full_df["ws_name"] = wb.fit_transform(full_df["name"])
...
name = Input(shape=[full_df["name"].shape[1]], name="name")
emb_name = Embedding(MAX_TEXT, 50)(name)
...

That is the WordSeq output (from WordBatch) that I feed into the embedding layer of a GRU network. How should I modify the code so that it works without converting to a dense tensor?
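For scale, the dense tensor mentioned in the warning (1,200,012,120 elements, assuming float32) would need roughly 4.5 GiB of memory, which is why the conversion matters. A quick back-of-the-envelope check:

```python
# Memory cost of densifying the sparse gradient reported in the warning:
# 1,200,012,120 float32 elements at 4 bytes each.
num_elements = 1_200_012_120
bytes_per_float32 = 4
gib = num_elements * bytes_per_float32 / 2**30
print(f"{gib:.1f} GiB")  # prints "4.5 GiB"
```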


1 Answer


I ran into the same problem with an embedding layer in Keras. The solution is to explicitly use a TensorFlow optimizer, which handles sparse `IndexedSlices` gradients without densifying them, like this:

import tensorflow as tf
from keras.optimizers import TFOptimizer

model.compile(loss='mse', optimizer=TFOptimizer(tf.train.GradientDescentOptimizer(0.1)))

answered 2018-12-21T17:45:25.443