I feel like this should be trivial, but I've struggled to find anything useful in the PyBrain documentation, on here, or elsewhere.

The problem is this:

I have a three layer (input, hidden, output) feedforward network built and trained in PyBrain. Each layer has three nodes. I want to activate the network with novel inputs and store the resultant activation values of the nodes at the hidden layer. As far as I can tell, net.activate() and net.activateOnDataset() will only return the activation values of output layer nodes and are the only ways to activate a network.

How do I get at the hidden layer activations of a PyBrain network?

I'm not sure example code will help that much in this case, but here's some anyway (with a cut-down training set):

from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

net = buildNetwork(3, 3, 3)

dataSet = SupervisedDataSet(3, 3)
dataSet.addSample((0, 0, 0), (0, 0, 0))
dataSet.addSample((1, 1, 1), (0, 0, 0))
dataSet.addSample((1, 0, 0), (1, 0, 0))
dataSet.addSample((0, 1, 0), (0, 1, 0))
dataSet.addSample((0, 0, 1), (0, 0, 1))

trainer = BackpropTrainer(net, dataSet)
trained = False
acceptableError = 0.001

# train until acceptable error reached
while not trained:
    error = trainer.train()
    if error < acceptableError:
        trained = True

result = net.activate([0.5, 0.4, 0.7])
print(result)

In this case, desired functionality is to print a list of the hidden layer's activation values.

1 Answer

It looks like this should work:

net['in'].outputbuffer[net['in'].offset]
net['hidden0'].outputbuffer[net['hidden0'].offset]

This is purely based on looking at the source code.
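For intuition about what that buffer holds: with buildNetwork's default sigmoid hidden layer, each hidden unit's output is just the sigmoid of its weighted input sum plus bias, which you could also compute by hand. Here's a minimal PyBrain-free sketch of that computation; the weight matrix and biases below are made up for illustration, not taken from a trained network:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_activations(inputs, weights, biases):
    """Compute one hidden layer's outputs as sigmoid(W x + b) --
    conceptually what ends up in the layer's output buffer."""
    return [sigmoid(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Illustrative 3x3 weights and biases (hypothetical values)
W = [[0.2, -0.5, 0.1],
     [0.7, 0.3, -0.2],
     [-0.4, 0.6, 0.9]]
b = [0.0, 0.1, -0.1]

print(hidden_activations([0.5, 0.4, 0.7], W, b))
```

Reading `net['hidden0'].outputbuffer` after calling `net.activate()` should give you the trained network's version of this list.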

answered 2012-09-15T09:58:32.423