I load an autoencoder from a saved file like so; its structure is shown below:
autoencoder = load_model("autoencoder_mse1.h5")
autoencoder.summary()
>>> ____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
input_8 (InputLayer)             (None, 19)            0
____________________________________________________________________________________________________
dense_43 (Dense)                 (None, 16)            320         input_8[0][0]
____________________________________________________________________________________________________
dense_44 (Dense)                 (None, 16)            272         dense_43[0][0]
____________________________________________________________________________________________________
dense_45 (Dense)                 (None, 2)             34          dense_44[0][0]
____________________________________________________________________________________________________
dense_46 (Dense)                 (None, 16)            48          dense_45[0][0]
____________________________________________________________________________________________________
dense_47 (Dense)                 (None, 16)            272         dense_46[0][0]
____________________________________________________________________________________________________
dense_48 (Dense)                 (None, 19)            323         dense_47[0][0]
====================================================================================================
Total params: 1269
____________________________________________________________________________________________________
The first four layers, InputLayer included, make up the encoder part. I'd like to know whether there is a quick way to grab those four layers. So far, the only possible solution I have come up with is:
encoder = Sequential()
encoder.add(Dense(16, input_dim=19, weights=autoencoder.layers[1].get_weights()))
^ and then repeat this manually for the other two layers. I'm hoping there is a more efficient way to extract the first four layers, especially since the .summary() method already prints a per-layer summary.
Edit 1 (possible solution): I have found the following solution, but I'd still like something more efficient (less code).
encoder = Sequential()
for i, l in enumerate(autoencoder.layers[1:]):
    if i == 0:
        # The first layer needs the input dimension; reuse the trained weights.
        encoder.add(Dense(input_dim=data.shape[1], output_dim=l.output_dim,
                          activation="relu", weights=l.get_weights()))
    else:
        encoder.add(Dense(output_dim=l.output_dim, activation="relu",
                          weights=l.get_weights()))
    if l.output_dim == 2:
        # Stop after the 2-unit bottleneck layer.
        break