I need to save and load a Keras model in Java, so I thought I could use DL4J. The problem is that when I save my model, the embedding layer no longer comes with its own weights. I hit the same problem when reloading the model in Keras, but in that case I can recreate the same architecture and load only the model's weights.
In particular, I start from an architecture like this:
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 300, 300) 219184200
_________________________________________________________________
lstm_1 (LSTM) (None, 300, 256) 570368
_________________________________________________________________
dropout_1 (Dropout) (None, 300, 256) 0
_________________________________________________________________
lstm_2 (LSTM) (None, 128) 197120
_________________________________________________________________
dropout_2 (Dropout) (None, 128) 0
_________________________________________________________________
dense_1 (Dense) (None, 2) 258
=================================================================
After saving and loading, I get this instead (in both Keras and DL4J):
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, None, 300) 219184200
_________________________________________________________________
lstm_1 (LSTM) (None, None, 256) 570368
_________________________________________________________________
dropout_1 (Dropout) (None, None, 256) 0
_________________________________________________________________
lstm_2 (LSTM) (None, 128) 197120
_________________________________________________________________
dropout_2 (Dropout) (None, 128) 0
_________________________________________________________________
dense_1 (Dense) (None, 2) 258
=================================================================
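In Keras, the weights-only workaround I mentioned above looks roughly like this (a minimal sketch; build_model is a hypothetical helper that recreates the architecture shown at the bottom of this question):

# Save only the weights, not the architecture.
model.save_weights('model_weights.h5')

# Later: rebuild the exact same architecture from code, then load the weights.
model = build_model()  # hypothetical helper recreating the layers below
model.load_weights('model_weights.h5')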
Is there a solution or a workaround in Java?
1) Is it possible to correctly save and load both the architecture and the weights in Keras?
2) Is it possible to build this kind of model in Java with DL4J or some other library?
3) Is it possible to implement the word-to-embedding conversion in a separate function, and then feed the network inputs that have already been converted into embeddings? (See the sketch after this list.)
4) Can I load weights into the embedding layer in Java with DL4J?
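To make question 3 concrete, this is roughly what I have in mind (an untested sketch; emb_matrix is an assumed (vocab_size, 300) NumPy array built from word_to_vec_map and word_to_index):

import numpy as np
from keras.layers import Input, LSTM, Dropout, Dense
from keras.models import Model

def indices_to_embeddings(indices, emb_matrix):
    # indices: (batch, 300) int array -> (batch, 300, 300) float array
    return emb_matrix[indices]

# The network starts from pre-computed embeddings, so no Embedding layer
# (and no embedding weights) needs to survive the save/load round trip.
embedded_input = Input(shape=(300, 300), dtype='float32')
X = LSTM(256, return_sequences=True)(embedded_input)
X = Dropout(0.15)(X)
X = LSTM(128)(X)
X = Dropout(0.15)(X)
X = Dense(num_activation, activation='softmax')(X)
model = Model(embedded_input, X)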
Here is the code for my network:
import numpy as np
from keras.layers import Input, LSTM, Dropout, Dense
from keras.models import Model, Sequential

sentence_indices = Input(shape=input_shape, dtype=np.int32)  # input_shape is (300,) here
emb_dim = 300  # 300-dimensional embeddings for Italian words
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index, emb_dim)
embeddings = embedding_layer(sentence_indices)
X = LSTM(256, return_sequences=True)(embeddings)
X = Dropout(0.15)(X)
X = LSTM(128)(X)
X = Dropout(0.15)(X)
X = Dense(num_activation, activation='softmax')(X)
model = Model(sentence_indices, X)
sequentialModel = Sequential(model.layers)  # same model wrapped as a Sequential
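For what it's worth: if pretrained_embedding_layer does not set input_length on the Embedding layer, that could explain why the fixed sequence length (300) turns into None after saving and loading. An equivalent layer with the length pinned explicitly would look like this (a sketch; vocab_size and emb_matrix are assumptions derived from word_to_index and word_to_vec_map):

from keras.layers import Embedding

embedding_layer = Embedding(input_dim=vocab_size,   # assumed: len(word_to_index) + 1
                            output_dim=emb_dim,     # 300
                            weights=[emb_matrix],   # assumed pretrained vector matrix
                            input_length=300,       # pin the sequence length
                            trainable=False)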
Thanks in advance.