I'm trying to do a simple multiple linear regression with Lasagne. This is my input:
import numpy as np

# One training sample with 20 features -> shape (1, 20)
x_train = np.array([[37.93, 139.5, 329., 16.64, 16.81, 16.57, 1., 707.,
                     39.72, 149.25, 352.25, 16.61, 16.91, 16.60, 40.11, 151.5,
                     361.75, 16.95, 16.98, 16.79]]).astype(np.float32)
# Six target values for that sample -> shape (6,)
y_train = np.array([37.92, 138.25, 324.66, 16.28, 16.27, 16.28]).astype(np.float32)
For these two data points the network should be able to learn y perfectly.
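(A quick sanity check, not part of the model: a plain NumPy least-squares solve on x_train finds weights that reproduce y_train exactly, so a perfect fit does exist.)
# Sanity check only: an exact linear fit exists for this data.
W = np.linalg.lstsq(x_train, y_train.reshape(1, -1))[0]   # W has shape (20, 6)
print(x_train.dot(W))   # reproduces y_train up to float precision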
This is the model:
import theano
import theano.tensor as T
import lasagne

i1 = T.matrix()
y = T.vector()
lay1 = lasagne.layers.InputLayer(shape=(None, 20), input_var=i1)
out1 = lasagne.layers.get_output(lay1)
# One dense layer with 6 units and a linear nonlinearity -> plain linear regression
lay2 = lasagne.layers.DenseLayer(lay1, 6, nonlinearity=lasagne.nonlinearities.linear)
out2 = lasagne.layers.get_output(lay2)
params = lasagne.layers.get_all_params(lay2, trainable=True)
# Sum of squared errors, minimised with vanilla SGD
cost = T.sum(lasagne.objectives.squared_error(out2, y))
grad = T.grad(cost, params)
updates = lasagne.updates.sgd(grad, params, learning_rate=0.1)
f_train = theano.function([i1, y], [out1, out2, cost], updates=updates)
After executing
f_train(x_train, y_train)
a number of times (roughly as in the loop sketched below), the cost explodes to infinity. Any idea what is going wrong here?
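A minimal sketch of how I run it, assuming the iteration count (100 here) is arbitrary:
# Call the training function repeatedly and watch the cost.
for step in range(100):
    _, pred, c = f_train(x_train, y_train)
    print(step, c)   # the cost grows each step and soon becomes inf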
Thanks!