
I am trying to learn neural networks / Lasagne by working through the code in Robert Layton's Learning Data Mining with Python. I think I am following the code as written, but I get the error message below. Any hints or intuition about what I am doing wrong would be greatly appreciated:

Traceback (most recent call last):

  File "<ipython-input-78-3ff2950373de>", line 3, in <module>
    updates=lasagne.updates.sgd(loss,all_params,learning_rate=0.01)

  File "C:\Users\WouterD\Anaconda\lib\site-packages\lasagne\updates.py", line 134, in sgd
    grads = get_or_compute_grads(loss_or_grads, params)

  File "C:\Users\WouterD\Anaconda\lib\site-packages\lasagne\updates.py", line 110, in get_or_compute_grads
    return theano.grad(loss_or_grads, params)

  File "C:\Users\WouterD\Anaconda\lib\site-packages\theano-0.7.0-py2.7.egg\theano\gradient.py", line 551, in grad
    handle_disconnected(elem)

  File "C:\Users\WouterD\Anaconda\lib\site-packages\theano-0.7.0-py2.7.egg\theano\gradient.py", line 538, in handle_disconnected
    raise DisconnectedInputError(message)

DisconnectedInputError: grad method was asked to compute the gradient with respect to a variable that is not part of the computational graph of the cost, or is used only by a non-differentiable operator: W
Backtrace when the node is created:
  File "C:\Users\WouterD\Anaconda\lib\site-packages\theano-0.7.0-py2.7.egg\theano\compile\sharedvalue.py", line 248, in shared
    utils.add_tag_trace(var)

The code is below:

import numpy as np
from sklearn.datasets import load_iris
iris=load_iris()
X=iris.data.astype(np.float32)
y_true=iris.target.astype(np.int32)

from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y_true,random_state=14)

import lasagne
input_layer=lasagne.layers.InputLayer(shape=(10,X.shape[1]))

hidden_layer=lasagne.layers.DenseLayer(input_layer,num_units=12,nonlinearity=lasagne.nonlinearities.sigmoid)

output_layer=lasagne.layers.DenseLayer(hidden_layer,num_units=3,nonlinearity=lasagne.nonlinearities.softmax)

import theano.tensor as T
net_input=T.matrix('net_input')
net_output=output_layer.get_output_for(net_input)
true_output=T.ivector("true_output")

loss=T.mean(T.nnet.categorical_crossentropy(net_output,true_output))
all_params=lasagne.layers.get_all_params(output_layer)
updates=lasagne.updates.sgd(loss,all_params,learning_rate=0.01)

2 Answers


The problem is that you are not computing the loss with respect to the network's actual input variable. net_input=T.matrix('net_input') is your own symbolic input to the network, but Lasagne already created an input variable for you when you created the InputLayer. You also do not need to get the output for a specific input; just get the output of the network starting from its input layer.

So, replace these two lines

net_input=T.matrix('net_input')
net_output=output_layer.get_output_for(net_input)

with the single line

net_output=lasagne.layers.get_output(output_layer)

Anticipating the next problem you will run into: you can access the input variable that Lasagne created for you as input_layer.input_var, so you can compile your training function like this:

import theano
f = theano.function([input_layer.input_var, true_output], outputs=loss, updates=updates)
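As a follow-up usage sketch (not part of the original answer): once f is compiled, it can be called on mini-batches of the training data. The batch size of 10 below is only chosen to match the (10, X.shape[1]) shape given to the InputLayer, and the epoch count is arbitrary:

# Hypothetical training loop around the compiled function f.
# Batches of 10 rows match the InputLayer shape (10, X.shape[1]);
# the number of epochs is an arbitrary choice.
for epoch in range(100):
    for start in range(0, len(X_train) - 9, 10):
        batch_X = X_train[start:start + 10]   # float32 features
        batch_y = y_train[start:start + 10]   # int32 class labels
        batch_loss = f(batch_X, batch_y)      # one SGD step; returns the loss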
answered 2015-11-14T10:36:27.523

input_layer=lasagne.layers.InputLayer(shape=(10,X.shape[1]),input_var=input)

where input is the tensor you defined earlier.
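For completeness, a minimal sketch of that approach, rebuilding the layer stack from the question on top of an explicitly defined input variable (the name net_input is reused from the question; this is my illustration, not code from the answer):

import theano.tensor as T
import lasagne

# Define the symbolic input yourself and hand it to the InputLayer.
net_input = T.matrix('net_input')
input_layer = lasagne.layers.InputLayer(shape=(10, X.shape[1]),
                                        input_var=net_input)
hidden_layer = lasagne.layers.DenseLayer(input_layer, num_units=12,
                                         nonlinearity=lasagne.nonlinearities.sigmoid)
output_layer = lasagne.layers.DenseLayer(hidden_layer, num_units=3,
                                         nonlinearity=lasagne.nonlinearities.softmax)

# get_output now builds the expression in terms of net_input,
# so the gradient of the loss is connected to that variable.
net_output = lasagne.layers.get_output(output_layer)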

answered 2017-03-09T06:32:44.040