
I am trying to train an MLP on sparse data to make predictions. However, the predictions on the test data come out as the same value for every observation. Once I omit the activation function from each layer, the results start to differ. My code is below:

# imports
import numpy as np
import tensorflow as tf
import random
import json
from scipy.sparse import rand


# Parameters
learning_rate = 0.1
training_epochs = 50
batch_size = 100

# Network Parameters
m = 1000  # number of features
n = 5000  # number of observations

hidden_layers = [5,2,4,1,6]
n_layers = len(hidden_layers)
n_input = m
n_classes = 1 # it's a regression problem

X_train = rand(n, m, density=0.2, format='csr').todense().astype(np.float32)
Y_train = np.random.randint(4, size=n)


X_test = rand(200, m, density=0.2, format='csr').todense().astype(np.float32)
Y_test = np.random.randint(4, size=200)

# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None])


# Store layers weight & bias
weights = {}
biases = {}
weights['h1'] = tf.Variable(tf.random_normal([n_input, hidden_layers[0]]))  # first weight matrix
biases['b1'] = tf.Variable(tf.random_normal([hidden_layers[0]]))

for i in xrange(2,n_layers+1):
    weights['h'+str(i)] = tf.Variable(tf.random_normal([hidden_layers[i-2], hidden_layers[i-1]]))
    biases['b'+str(i)] = tf.Variable(tf.random_normal([hidden_layers[i-1]]))

weights['out'] = tf.Variable(tf.random_normal([hidden_layers[-1], 1]))  # matrix between the last hidden layer and the output
biases['out'] = tf.Variable(tf.random_normal([1]))


# Create model
def multilayer_perceptron(_X, _weights, _biases):
    layer_begin = tf.nn.relu(tf.add(tf.matmul(_X, _weights['h1'], a_is_sparse=True), _biases['b1']))

    for layer in xrange(2,n_layers+1):
        layer_begin = tf.nn.relu(tf.add(tf.matmul(layer_begin, _weights['h'+str(layer)]), _biases['b'+str(layer)]))
        #layer_end = tf.nn.dropout(layer_begin, 0.3)

    return tf.matmul(layer_begin, _weights['out']) + _biases['out']


# Construct model
pred = multilayer_perceptron(x, weights, biases)



# Define loss and optimizer
rmse = tf.reduce_sum(tf.abs(y - pred)) / tf.reduce_sum(tf.abs(y))  # rmse loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(rmse) # Adam Optimizer

# Initializing the variables
init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)

    #training
    for step in xrange(training_epochs):

        # Generate a minibatch.
        start = random.randrange(1, n - batch_size)
        #print start
        batch_xs = X_train[start:start+batch_size, :]
        batch_ys = Y_train[start:start+batch_size]

        # run one training step and report the loss
        _, rmseRes = sess.run([optimizer, rmse], feed_dict={x: batch_xs, y: batch_ys})
        if step % 20 == 0:
            print "rmse [%s] = %s" % (step, rmseRes)


    #testing
    pred_test = multilayer_perceptron(X_test, weights, biases)
    print "prediction", pred_test.eval()[:20] 
    print  "actual = ", Y_test[:20]

PS: I generate my data randomly just to reproduce the error; my real data is sparse and quite similar to the randomly generated data. The problem I want to solve is that the MLP gives the same prediction for every observation in the test data.


1 Answer


This suggests that your training failed. When training GoogLeNet on ImageNet, I have seen it label everything as "nematode" when started with a bad choice of hyperparameters. Things to check: does your training loss decrease? If it does not decrease, try a different learning rate or architecture. If it decreases to zero, your loss is probably wrong, as is the case here.
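
To make that last point concrete: in the posted code, y is fed with shape [batch] while pred, the output of the final matmul, has shape [batch, 1], so y - pred silently broadcasts to a [batch, batch] matrix, and the "rmse" expression is summed over all batch-squared pairs instead of elementwise. Below is a minimal sketch of the shape problem and one possible fix (illustrative only, written against the same TF 0.x-era API the question uses; the placeholder pred stands in for the network output):

# sketch of the broadcasting bug -- the shapes are the point, not the values
import numpy as np
import tensorflow as tf

y = tf.placeholder("float", [None])        # labels, shape [batch]
pred = tf.placeholder("float", [None, 1])  # stand-in for the network output, shape [batch, 1]

diff = y - pred  # silently broadcasts to shape [batch, batch], not [batch]

# one possible fix: flatten the prediction so the shapes match, then use a
# standard elementwise loss such as MSE
pred_flat = tf.reshape(pred, [-1])              # shape [batch]
mse = tf.reduce_mean(tf.square(y - pred_flat))  # ordinary mean squared error

with tf.Session() as sess:
    feed = {y: np.zeros(4, np.float32), pred: np.zeros((4, 1), np.float32)}
    print sess.run(tf.shape(diff), feed_dict=feed)           # -> [4 4]
    print sess.run(tf.shape(y - pred_flat), feed_dict=feed)  # -> [4]

Separately, a learning rate of 0.1 is on the high side for Adam; something in the 1e-4 to 1e-3 range is a more common starting point.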

Answered 2016-04-11T20:21:14.203