
I am trying to write a model in TFLearn that fits 16 parameters.

I previously ran the same experiment in Matlab, using the "fitnet" function with two hidden layers of 2000 and 1500 nodes.

Before exploring other architectures, descent algorithms, and hyperparameter tuning, I am trying to replicate those results in TensorFlow. From some research, I determined that Matlab's fitnet function uses tanh nodes in the hidden layers and linear nodes at the output. Also, its descent algorithm defaults to Levenberg-Marquardt, but it works with other (SGD-style) algorithms as well.
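Since TensorFlow has no built-in Levenberg-Marquardt solver, a first-order optimizer has to stand in for it. For reference, a minimal sketch of an SGD-with-momentum alternative in TFLearn (the hyperparameter values here are illustrative assumptions, not tuned for this problem):

import tflearn

# Hypothetical stand-in for Matlab's Levenberg-Marquardt solver:
# plain momentum SGD with exponential learning-rate decay.
momentum = tflearn.optimizers.Momentum(learning_rate=0.01, momentum=0.9,
                                       lr_decay=0.96, decay_step=200)
# This would be passed to tflearn.regression(...) the same way as
# the Adam instance in the code below.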

The accuracy seems to peak around 0.2 and then oscillates below that value over successive epochs. I do not see this behavior in Matlab.
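One caveat worth noting before the code: the "acc" column TFLearn prints is a classification-style accuracy op, which says little about a 16-value continuous regression target, so the 0.2 ceiling may partly be an artifact of the metric itself. A minimal sketch of requesting TFLearn's R2 (standard error) metric instead, where output and adam refer to the tensors defined in the code below:

import tflearn

# Report R2 instead of classification accuracy, which is what the
# default 'acc' readout shows for a regression layer.
r2 = tflearn.metrics.R2()
network = tflearn.regression(output, optimizer=adam,
                             loss='mean_square', metric=r2)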

My TFLearn code looks like this:

import numpy as np
import tflearn

tnorm = tflearn.initializations.uniform_scaling()

adam = tflearn.optimizers.Adam(learning_rate=0.1, beta1=0.9, beta2=0.999,
                               epsilon=1e-08, use_locking=False, name='Adam')

# Network building
input_data = tflearn.input_data(shape=[None, np.shape(prepared_x)[1]])
fc1 = tflearn.fully_connected(input_data, 2000, activation='tanh', weights_init=tnorm)
fc2 = tflearn.fully_connected(fc1, 1500, activation='tanh', weights_init=tnorm)
output = tflearn.fully_connected(fc2, 16, activation='linear', weights_init=tnorm)
network = tflearn.regression(output, optimizer=adam, loss='mean_square')

# Define model with checkpoints
model = tflearn.DNN(network, tensorboard_dir='output/', tensorboard_verbose=3,
                    checkpoint_path='output')

# Train model
model.fit(prepared_x, prepared_t, n_epoch=5, batch_size=100, shuffle=True,
          show_metric=True, snapshot_epoch=False, validation_set=0.1)

# Save the trained model
model.save('TFLEARN_FC_final.tfl')
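As an aside, restoring the checkpoint later mirrors the save call (model.load is TFLearn's counterpart to model.save); a small sketch:

# Rebuild the identical graph, then restore the saved weights.
model = tflearn.DNN(network)
model.load('TFLEARN_FC_final.tfl')
preds = model.predict(prepared_x[:10])  # sanity-check a few predictions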

The output of the training session looks like this:


Run id: UTSD6N
Log directory: output/
---------------------------------
Training samples: 43200
Validation samples: 4800
--
Training Step: 1
| Adam | epoch: 000 | loss: 0.00000 - acc: 0.0000 -- iter: 00100/43200
Training Step: 2  | total loss: 0.67871
| Adam | epoch: 000 | loss: 0.67871 - acc: 0.0455 -- iter: 00200/43200
Training Step: 3  | total loss: 33.14599
| Adam | epoch: 000 | loss: 33.14599 - acc: 0.0082 -- iter: 00300/43200
Training Step: 4  | total loss: 28.01067
| Adam | epoch: 000 | loss: 28.01067 - acc: 0.0021 -- iter: 00400/43200
Training Step: 5  | total loss: 17.35706
| Adam | epoch: 000 | loss: 17.35706 - acc: 0.0006 -- iter: 00500/43200
Training Step: 6  | total loss: 9.73368
| Adam | epoch: 000 | loss: 9.73368 - acc: 0.0002 -- iter: 00600/43200
Training Step: 7  | total loss: 5.19867
| Adam | epoch: 000 | loss: 5.19867 - acc: 0.0001 -- iter: 00700/43200
Training Step: 8  | total loss: 3.54779
| Adam | epoch: 000 | loss: 3.54779 - acc: 0.0113 -- iter: 00800/43200
Training Step: 9  | total loss: 3.80998
| Adam | epoch: 000 | loss: 3.80998 - acc: 0.0106 -- iter: 00900/43200
Training Step: 10  | total loss: 4.33370
| Adam | epoch: 000 | loss: 4.33370 - acc: 0.0053 -- iter: 01000/43200
Training Step: 11  | total loss: 4.24100

...

| Adam | epoch: 004 | loss: 0.02448 - acc: 0.1817 -- iter: 42800/43200
Training Step: 2157  | total loss: 0.02633
| Adam | epoch: 004 | loss: 0.02633 - acc: 0.1875 -- iter: 42900/43200
Training Step: 2158  | total loss: 0.02509
| Adam | epoch: 004 | loss: 0.02509 - acc: 0.1688 -- iter: 43000/43200
Training Step: 2159  | total loss: 0.02525
| Adam | epoch: 004 | loss: 0.02525 - acc: 0.1529 -- iter: 43100/43200
Training Step: 2160  | total loss: 0.02695
| Adam | epoch: 005 | loss: 0.02695 - acc: 0.1456 -- iter: 43200/43200
[TensorBoard accuracy/loss plots]

Any suggestions would be greatly appreciated.


1 Answer


For any future lurkers: I solved my own problem by fixing the descent algorithm.

The Adam optimizer's default learning rate is 0.001; the 0.1 I was using was far too high, and I had to switch to 0.005 to get convergence.
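Concretely, the fix was just re-instantiating Adam with the lower rate before wiring it into the regression layer; a minimal sketch against the network definition in the question:

# Re-create the optimizer with the lower learning rate that converged.
adam = tflearn.optimizers.Adam(learning_rate=0.005, beta1=0.9,
                               beta2=0.999, epsilon=1e-08)
network = tflearn.regression(output, optimizer=adam, loss='mean_square')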

Answered 2017-05-01T20:59:26.340