
My neural network is solving a nonlinear problem, but the test loss is very high. When I use a network with no hidden layers, the test loss is lower than with the hidden layers, but it is still high. Does anyone know why, and how I can improve the loss?

#data

    train_X = data_in[0:9001, :]
    train_Y = data_out[0:9001, :]
    test_X = data_in[9000:10001, :]
    test_Y = data_out[9000:10001, :]
    n = train_X.shape[1] 
    m = train_X.shape[0]
    d = train_Y.shape[1]  
    l = test_X.shape[0]

#parameters

    trainX = tf.placeholder(tf.float32, [batch_size, n])
    trainY = tf.placeholder(tf.float32, [batch_size, d])
    testX = tf.placeholder(tf.float32, [l, n])
    testY = tf.placeholder(tf.float32, [l, d])
    def multilayer(trainX, h1, h2, hout, b1, b2, bout):
        layer_1 = tf.matmul(trainX, h1) + b1
        layer_1 = tf.nn.sigmoid(layer_1)
        layer_2 = tf.matmul(layer_1, h2) + b2
        layer_2 = tf.nn.sigmoid(layer_2)
        out_layer = tf.matmul(layer_2, hout) + bout
        return out_layer
    h1 = tf.Variable(tf.zeros([n, n_hidden_1]))
    h2 = tf.Variable(tf.zeros([n_hidden_1, n_hidden_2]))
    hout = tf.Variable(tf.zeros([n_hidden_2, d]))
    b1 = tf.Variable(tf.zeros([n_hidden_1]))
    b2 = tf.Variable(tf.zeros([n_hidden_2]))
    bout = tf.Variable(tf.zeros([d]))
    pred = multilayer(trainX, h1, h2, hout, b1, b2, bout)
    predtest = multilayer(testX, h1, h2, hout, b1, b2, bout)
    loss = tf.reduce_sum(tf.pow(pred - trainY, 2)) / (batch_size)
    losstest = tf.reduce_sum(tf.pow(predtest - testY, 2)) / (l)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)

# Initializing the variables

    init = tf.global_variables_initializer()
    a = np.linspace(0, m - batch_size, m / batch_size, dtype=np.int32)
    with tf.Session() as sess:
        sess.run(init)
        for i in (a):
            x = train_X[i:i + batch_size, :]
            y = train_Y[i:i + batch_size, :]
            for epoch in range(training_epochs):
                sess.run(optimizer, feed_dict={trainX: np.asarray(x),
                                               trainY: np.asarray(y)})
                c = sess.run(loss, feed_dict={trainX: np.asarray(x),
                                              trainY: np.asarray(y)})
                print("Batch:", '%04d' % (i / batch_size + 1),
                      "Epoch:", '%04d' % (epoch + 1),
                      "loss=", "{:.9f}".format(c))
# Testing
    print("Testing... (Mean square loss Comparison)")
    testing_loss = sess.run(losstest, feed_dict={testX: np.asarray(test_X),
                                                 testY: np.asarray(test_Y)})
    pred_y_vals = sess.run(predtest, feed_dict={testX: test_X})
    print("Testing loss=", testing_loss)

1 Answer


From what I can see in your training loop, you iterate over epochs before you iterate over batches. This means your network trains on the same batch several times in a row (training_epochs times) before moving on to the next batch, and it never comes back to batches it has already seen.

Intuitively, I would say your network is heavily overfitting the last batch it sees during training, which explains the high loss at test time.

Swap the two loops in your training and you should be fine.
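
For illustration, here is a minimal sketch of what the swapped loops could look like, reusing the placeholders, ops, and batch start indices (`a`) already defined in the question and assuming the same graph and session setup:

    with tf.Session() as sess:
        sess.run(init)
        # Outer loop over epochs, inner loop over batches: each epoch now
        # walks through every batch start index in `a`, so every batch is
        # revisited once per epoch instead of being fitted training_epochs
        # times in a row and then never seen again.
        for epoch in range(training_epochs):
            for i in a:
                x = train_X[i:i + batch_size, :]
                y = train_Y[i:i + batch_size, :]
                sess.run(optimizer, feed_dict={trainX: x, trainY: y})
                c = sess.run(loss, feed_dict={trainX: x, trainY: y})
                print("Epoch:", '%04d' % (epoch + 1),
                      "Batch:", '%04d' % (i // batch_size + 1),
                      "loss=", "{:.9f}".format(c))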

Answered 2017-11-02T13:48:43.053