
I'm completely new to nolearn and Theano. When I tried the code from the nolearn tutorial, I got an extremely high error of 0.9!

Why am I getting such a high error when the tutorial reports an error of 0.005? Has anyone else been able to reproduce this?

Using Theano 0.7.0, Lasagne v0.1, and nolearn v0.5 on OS X Yosemite.

Output

[DBN] fitting X.shape=(46900, 784)
[DBN] layers [784, 300, 10]
[DBN] Fine-tune...
100%

Epoch 1:
100%

  loss 2.30829815265
  err  0.901340505464
  (0:00:30)
Epoch 2:
100%

  loss 2.30304712187
  err  0.902813353825
  (0:00:34)
Epoch 3:
100%

  loss 2.30303548692
  err  0.90072148224
  (0:00:34)
Epoch 4:
100%

  loss 2.30297605197
  err  0.902322404372
  (0:00:28)
Epoch 5:
100%

  loss 2.30295462556
  err  0.901191086066
  (0:00:26)
Epoch 6:
100%

  loss 2.30293222366
  err  0.898352117486
  (0:00:33)
Epoch 7:
100%

  loss 2.30283567033
  err  0.901425887978
  (0:00:34)
Epoch 8:
100%

  loss 2.30283342522
  err  0.90059340847
  (0:00:35)
Epoch 9:
100%

  loss 2.30283433199
  err  0.902813353825
  (0:00:33)
Epoch 10:
  loss 2.30279696997
  err  0.897861168033
  (0:00:33)
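One observation about the log above (my own note, not from the tutorial): the loss is stuck at roughly 2.303 across all ten epochs, which is exactly ln(10), the cross-entropy of a 10-class classifier that assigns uniform probability to every class. In other words, the network appears not to be learning anything beyond random guessing:

```python
import math

# Cross-entropy of a classifier that assigns uniform probability 1/10
# to each of the 10 digit classes: -ln(1/10) = ln(10)
uniform_loss = math.log(10)
print(uniform_loss)  # ~2.302585, matching the stuck loss in the log
```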

Code

# import the necessary packages
from sklearn.cross_validation import train_test_split
from sklearn.metrics import classification_report
from sklearn import datasets
from nolearn.dbn import DBN
import numpy as np

# grab the MNIST dataset (if this is the first time you are running
# this script, this may take a minute -- the 55mb MNIST digit dataset
# will be downloaded)
print "[X] downloading data..."
dataset = datasets.fetch_mldata("MNIST Original")

# scale the data to the range [0, 1] and then construct the training
# and testing splits
(trainX, testX, trainY, testY) = train_test_split(
    dataset.data / 255.0, dataset.target.astype("int0"), test_size = 0.33)

# train the Deep Belief Network with 784 input units (the flattened,
# 28x28 grayscale image), 300 hidden units, 10 output units (one for
# each possible output classification, which are the digits 0-9)
dbn = DBN(
    [trainX.shape[1], 300, 10],
    learn_rates = 0.3,
    learn_rate_decays = 0.9,
    epochs = 10,
    verbose = 1)
dbn.fit(trainX, trainY)
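The report below comes from sklearn's `classification_report`, which is imported above but whose call is not shown; presumably it was applied to predictions on the test split. A self-contained sketch of the call shape (with dummy labels standing in for `testY` and `dbn.predict(testX)`, since the fitted model isn't available here):

```python
from sklearn.metrics import classification_report

# Dummy true/predicted labels standing in for testY and dbn.predict(testX),
# just to illustrate the call; the real script would use the fitted model.
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 1, 2, 1]
print(classification_report(y_true, y_pred))
```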

Classification report

(screenshot of the classification report)
