Epoch 1/8
222/222 [==============================] - 18s 67ms/step - loss: 1.4523 - accuracy: 0.9709 - val_loss: 1.3310 - val_accuracy: 0.9865
Epoch 2/8
222/222 [==============================] - 14s 63ms/step - loss: 1.3345 - accuracy: 0.9747 - val_loss: 1.2312 - val_accuracy: 0.9865
Epoch 3/8
222/222 [==============================] - 14s 64ms/step - loss: 1.1911 - accuracy: 0.9868 - val_loss: 1.1245 - val_accuracy: 0.9887
Epoch 4/8
222/222 [==============================] - 14s 63ms/step - loss: 1.0926 - accuracy: 0.9873 - val_loss: 1.0798 - val_accuracy: 0.9769
Epoch 5/8
222/222 [==============================] - 14s 63ms/step - loss: 1.0622 - accuracy: 0.9760 - val_loss: 1.0887 - val_accuracy: 0.9555
Epoch 6/8
222/222 [==============================] - 14s 63ms/step - loss: 0.9589 - accuracy: 0.9841 - val_loss: 0.9216 - val_accuracy: 0.9814
Epoch 7/8
222/222 [==============================] - 14s 64ms/step - loss: 0.8648 - accuracy: 0.9885 - val_loss: 0.8241 - val_accuracy: 0.9896
Epoch 8/8
222/222 [==============================] - 14s 63ms/step - loss: 0.7993 - accuracy: 0.9908 - val_loss: 0.7694 - val_accuracy: 0.9893
Model: "model_5"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_6 (InputLayer)         [(None, 32, 32, 3)]       0         
_________________________________________________________________
model_1 (Functional)         (None, 10)                3250058   
=================================================================
Total params: 3,250,058
Trainable params: 3,228,170
Non-trainable params: 21,888
_________________________________________________________________
Epoch 1/8
222/222 [==============================] - 18s 66ms/step - loss: 1.4423 - accuracy: 0.9741 - val_loss: 1.3361 - val_accuracy: 0.9839
Epoch 2/8
222/222 [==============================] - 14s 64ms/step - loss: 1.3457 - accuracy: 0.9734 - val_loss: 1.2327 - val_accuracy: 0.9845
Epoch 3/8
222/222 [==============================] - 14s 63ms/step - loss: 1.1927 - accuracy: 0.9893 - val_loss: 1.1287 - val_accuracy: 0.9870

This is my output. As you can see, when I load the model after training, the loss is still the same as it was before training. I'm really confused.

Here is my code. I want to use two models ('After combining' and 'Final combining'), and I use load_model and model.save, because I want to simulate a federated learning process.

I hope someone can give me some ideas.

# imports assumed for this snippet (TensorFlow 2.x Keras)
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model, load_model
from tensorflow.keras import optimizers

def train2():
  img_input = Input(shape=(32, 32, 3))
  # load the previously saved combined model and wrap it behind a fresh input
  Mobilenet2 = load_model('Final combining.h5')
  output = Mobilenet2(img_input)
  model = Model(img_input, output)
  model.summary()

  # set optimizer
  sgd = optimizers.SGD(lr=.1, momentum=0.9, nesterov=True)
  model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

  # start training
  h2 = model.fit(X_train2, y_2_train, batch_size=batch_size,
                  steps_per_epoch=len(X_train2) // batch_size,
                  epochs=epochs1,
                  # callbacks=cbks,
                  validation_data=(X_test, y_test))
                  # callbacks=callbacks

  model.save('After combining.h5')

def train3():
  img_input = Input(shape=(32, 32, 3))
  Mobilenet1 = load_model('After combining.h5')
  output = Mobilenet1(img_input)
  model = Model(img_input, output)
  model.summary()

  # set optimizer
  sgd = optimizers.SGD(lr=.1, momentum=0.9, nesterov=True)
  model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

  # start training
  h3 = model.fit(X_train1, y_1_train, batch_size=batch_size,
                  steps_per_epoch=len(X_train1) // batch_size,
                  epochs=epochs1,
                  # callbacks=cbks,
                  validation_data=(X_test, y_test))
                  # callbacks=callbacks   
  
  model.save('Final combining.h5')

I control the training process with a for loop. The output above is from the last iteration..., and the accuracy and loss values are about the same as in the first iteration.

for _ in range(5):
  num = 0
  if num % 2==0:
    train2()
    num+=1
  else:
    train3()
    num+=1
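
A minimal diagnostic sketch (illustrative, not part of the question's code; it assumes the `X_test`/`y_test` arrays above and a freshly saved 'After combining.h5') that reloads the saved file and evaluates it, to check whether the stored weights really match the end of training:

from tensorflow.keras.models import load_model

# Reload the file written by train2() and evaluate it on the held-out data.
# If the reported loss is close to the last val_loss printed during training,
# the save/load round trip is fine; if it jumps back up, the trained weights
# did not make it into the saved file.
reloaded = load_model('After combining.h5')
loss, acc = reloaded.evaluate(X_test, y_test, verbose=0)
print('reloaded: loss=%.4f, accuracy=%.4f' % (loss, acc))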

1 Answer


I solved it after changing the models that shared the same name.
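
The exact change isn't shown, but a minimal sketch of the idea (illustrative only: `rebuild_with_unique_name` and `round_idx` are made-up names, and setting `_name` is a common workaround rather than public Keras API) is to give the loaded sub-model and the new wrapper names that are unique per round, so repeated save/load cycles never produce two models with the same name:

from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model, load_model

def rebuild_with_unique_name(path, round_idx):
    """Load a saved model and wrap it with names unique to this round."""
    img_input = Input(shape=(32, 32, 3))
    base = load_model(path)
    base._name = 'mobilenet_round_%d' % round_idx   # workaround: rename the loaded sub-model
    output = base(img_input)
    # naming the wrapper avoids clashing with earlier auto-generated 'model_N' names
    return Model(img_input, output, name='combined_round_%d' % round_idx)

With the names kept distinct, the log below continues roughly from where the previous round stopped (loss around 0.29 and falling) instead of resetting to ~1.45 as in the question.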

_________________________________________________________________
Epoch 1/8
222/222 [==============================] - 25s 100ms/step - loss: 0.2912 - accuracy: 0.9854 - val_loss: 0.3016 - val_accuracy: 0.9800
Epoch 2/8
222/222 [==============================] - 22s 98ms/step - loss: 0.2637 - accuracy: 0.9906 - val_loss: 0.3110 - val_accuracy: 0.9800
Epoch 3/8
222/222 [==============================] - 22s 97ms/step - loss: 0.2420 - accuracy: 0.9922 - val_loss: 0.2764 - val_accuracy: 0.9865
Epoch 4/8
222/222 [==============================] - 22s 97ms/step - loss: 0.2960 - accuracy: 0.9743 - val_loss: 0.2632 - val_accuracy: 0.9842
Epoch 5/8
222/222 [==============================] - 22s 98ms/step - loss: 0.2291 - accuracy: 0.9928 - val_loss: 0.2757 - val_accuracy: 0.9789
Epoch 6/8
222/222 [==============================] - 22s 97ms/step - loss: 0.2286 - accuracy: 0.9921 - val_loss: 0.2806 - val_accuracy: 0.9744
Epoch 7/8
222/222 [==============================] - 22s 98ms/step - loss: 0.2161 - accuracy: 0.9920 - val_loss: 0.2381 - val_accuracy: 0.9828
Epoch 8/8
222/222 [==============================] - 22s 98ms/step - loss: 0.1936 - accuracy: 0.9953 - val_loss: 0.2192 - val_accuracy: 0.9887
Model: "model_20"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_22 (InputLayer)        [(None, 32, 32, 3)]       0         
_________________________________________________________________
model_19 (Functional)        (None, 10)                3250058   
=================================================================
Total params: 3,250,058
Trainable params: 3,228,170
Non-trainable params: 21,888
_________________________________________________________________
Epoch 1/8
222/222 [==============================] - 25s 101ms/step - loss: 0.1774 - accuracy: 0.9972 - val_loss: 0.2197 - val_accuracy: 0.9876
Epoch 2/8
222/222 [==============================] - 22s 98ms/step - loss: 0.1805 - accuracy: 0.9928 - val_loss: 0.2880 - val_accuracy: 0.9713
Epoch 3/8
222/222 [==============================] - 22s 98ms/step - loss: 0.2062 - accuracy: 0.9852 - val_loss: 0.2234 - val_accuracy: 0.9814
Epoch 4/8
222/222 [==============================] - 22s 97ms/step - loss: 0.1765 - accuracy: 0.9938 - val_loss: 0.2218 - val_accuracy: 0.9769
Epoch 5/8
222/222 [==============================] - 22s 98ms/step - loss: 0.1792 - accuracy: 0.9905 - val_loss: 0.2180 - val_accuracy: 0.9803
Epoch 6/8
222/222 [==============================] - 22s 98ms/step - loss: 0.1608 - accuracy: 0.9942 - val_loss: 0.2602 - val_accuracy: 0.9679
Epoch 7/8
222/222 [==============================] - 22s 98ms/step - loss: 0.1581 - accuracy: 0.9925 - val_loss: 0.1826 - val_accuracy: 0.9873
Epoch 8/8
222/222 [==============================] - 22s 98ms/step - loss: 0.2309 - accuracy: 0.9734 - val_loss: 0.2034 - val_accuracy: 0.9831
answered 2021-02-22T11:58:50.410