
I used the framework that Daniel Nouri provides on his website of the same name. This is the code I used. It looks fine to me; the only changes I made were setting output_nonlinearity=lasagne.nonlinearities.softmax and changing regression to False. Otherwise it seems straightforward:

from lasagne import layers
import theano
from lasagne.updates import sgd,nesterov_momentum
from nolearn.lasagne import NeuralNet
from sklearn.metrics import classification_report
import lasagne
import cv2
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn.datasets import fetch_mldata
import sys

# float32 helper from Daniel Nouri's tutorial; needed below for the
# shared learning-rate and momentum variables
def float32(k):
    return np.cast['float32'](k)

mnist = fetch_mldata('MNIST original')
X = np.asarray(mnist.data, dtype='float32')
y = np.asarray(mnist.target, dtype='int32')

(trainX, testX, trainY, testY) = train_test_split(X, y, test_size=0.3, random_state=42)
trainX = trainX.reshape(-1, 1, 28, 28)
testX = testX.reshape(-1, 1, 28, 28)

clf = NeuralNet(
    layers=[
        ('input', layers.InputLayer),
        ('conv1', layers.Conv2DLayer),
        ('pool1', layers.MaxPool2DLayer),
        ('dropout1', layers.DropoutLayer),  # !
        ('conv2', layers.Conv2DLayer),
        ('pool2', layers.MaxPool2DLayer),
        ('dropout2', layers.DropoutLayer),  # !
        ('hidden4', layers.DenseLayer),
        ('dropout4', layers.DropoutLayer),  # !
        ('hidden5', layers.DenseLayer),
        ('output', layers.DenseLayer),
    ],
    input_shape=(None, 1, 28, 28),
    conv1_num_filters=20, conv1_filter_size=(3, 3), pool1_pool_size=(2, 2),
    dropout1_p=0.1,  # !
    conv2_num_filters=50, conv2_filter_size=(3, 3), pool2_pool_size=(2, 2),
    dropout2_p=0.2,  # !
    hidden4_num_units=500,
    dropout4_p=0.5,  # !
    hidden5_num_units=500,
    output_num_units=10,
    output_nonlinearity=lasagne.nonlinearities.softmax,
    update=nesterov_momentum,
    update_learning_rate=theano.shared(float32(0.03)),
    update_momentum=theano.shared(float32(0.9)),
    regression=False,
    max_epochs=3000,
    verbose=1,
)

clf.fit(trainX,trainY)

But when I run it, I get these NaNs:

input               (None, 1, 28, 28)       produces     784 outputs
conv1               (None, 20, 26, 26)      produces   13520 outputs
pool1               (None, 20, 13, 13)      produces    3380 outputs
dropout1            (None, 20, 13, 13)      produces    3380 outputs
conv2               (None, 50, 11, 11)      produces    6050 outputs
pool2               (None, 50, 6, 6)        produces    1800 outputs
dropout2            (None, 50, 6, 6)        produces    1800 outputs
hidden4             (None, 500)             produces     500 outputs
dropout4            (None, 500)             produces     500 outputs
hidden5             (None, 500)             produces     500 outputs
output              (None, 10)              produces      10 outputs
epoch    train loss    valid loss    train/val    valid acc  dur
-------  ------------  ------------  -----------  -----------  ------
  1           nan           nan          nan      0.09923  16.18s
  2           nan           nan          nan      0.09923  16.45s

Thanks in advance.


1 Answer


I'm very late to the game here, but hopefully someone finds this answer useful!

In my experience there are a number of things that can go wrong here. I'll write out the steps I use for debugging this kind of problem in nolearn/lasagne:

  1. Using Theano's fast_compile optimizer can cause underflow problems, which in turn produce nan outputs (this turned out to be the actual problem in my case).

  2. If the output starts out as nan, or nan values appear shortly after training begins, the learning rate is probably too high. If it's 0.01, try 0.001 instead.

  3. The input or output values may be too close to each other; try scaling them. A standard approach is to scale the inputs by subtracting the mean and dividing by the standard deviation.

  4. Make sure you use regression=True when working on a regression problem with nolearn.

  5. Try a linear output instead of softmax. Other nonlinearities sometimes help as well, but in my experience not often.

  6. If all of that fails, try to figure out whether the problem is in your network or in your data. If you feed in random values within the expected range and still get nan outputs, the problem is probably not specific to the dataset you're training on.
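
For step 3, here is a minimal NumPy sketch of the standard scaling mentioned above. The small `eps` guard is my own addition, to avoid dividing by zero on features with constant values (such as MNIST's all-black border pixels):

```python
import numpy as np

def standardize(X, eps=1e-8):
    # Zero-mean, unit-variance scaling, computed per feature (column).
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / (std + eps)

# Example: fake pixel data in the 0..255 range, as float32
X = np.random.RandomState(0).randint(0, 256, size=(100, 784)).astype('float32')
Xs = standardize(X)
print(Xs.mean(), Xs.std())  # roughly 0 and 1
```

You would apply the same transformation (with the training set's mean and std) to the test set before calling `clf.fit` / `clf.predict`.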

Hope that helps!

answered 2016-04-26T13:42:40.760