
I am trying to train a 3-input, 1-output neural network (with one input layer, one hidden layer, and one output layer) that classifies quadratic equations in MATLAB. I am trying to implement the feed-forward phase, $x_i^{out}=f(s_i)$ with $s_i=\sum_j w_{ij}x_j^{in}$, the back-propagation $\delta_j^{in}=f'(s_j)\sum_i \delta_i^{out}w_{ij}$, and the update $w_{ij}^{new}=w_{ij}^{old}-\epsilon\,\delta_i^{out}x_j^{in}$, where $x$ is the input vector, $w$ are the weights, and $\epsilon$ is the learning rate.
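For concreteness, the three rules above can be sketched as a single training step in NumPy. This is only an illustration, not the poster's code: the 4-dimensional input with a trailing bias of 1 and the 7-unit hidden layer are assumptions borrowed from the MATLAB listing below.

```python
import numpy as np

# One training step of a 1-hidden-layer tanh network following the three
# rules above. The 4-dim input (bias folded in as x[3] = 1) and the
# 7-unit hidden layer are assumptions taken from the MATLAB code below.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=4)      # input vector x^in
x[3] = 1.0                          # dummy bias input
w1 = rng.uniform(size=(4, 7))       # input -> hidden weights
w2 = rng.uniform(size=(7, 1))       # hidden -> output weights
t = 1.0                             # teacher output
eps = 0.01                          # learning rate epsilon

# feed-forward: s_i = sum_j w_ij x_j^in, then x_i^out = f(s_i), f = tanh
s1 = w1.T @ x                       # hidden pre-activations
h = np.tanh(s1)                     # hidden activations
s2 = w2.T @ h                       # output pre-activation
y = np.tanh(s2)                     # network output

# back-propagation: delta at the output, then
# delta_j^in = f'(s_j) * sum_i delta_i^out w_ij
d_out = y - t
d_hidden = (1 - np.tanh(s1) ** 2) * (w2 @ d_out)

# update: w_ij <- w_ij - eps * delta_i^out * x_j^in
w2 -= eps * np.outer(h, d_out)
w1 -= eps * np.outer(x, d_hidden)
```

After one such step the output should have moved slightly toward the teacher value.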

I am having trouble coding the hidden layer and adding the activation function $f(s)=\tanh(s)$, because the error in the network's output does not seem to decrease. Can someone point out the mistake in my implementation?

The inputs are the real coefficients of the quadratic $ax^2 + bx + c = 0$, and the output should be positive if the quadratic has two real roots and negative if it does not.
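In other words, the teacher label is just the sign of the discriminant $b^2-4ac$. A minimal sketch of the labeling rule (plain Python, with a hypothetical helper name):

```python
def label(a, b, c):
    """+1 if ax^2 + bx + c = 0 has two real roots, -1 if none, 0 if repeated."""
    d = b * b - 4 * a * c        # discriminant
    return (d > 0) - (d < 0)     # sign of d

print(label(1, 5, 6))   # (x + 2)(x + 3): two real roots -> 1
print(label(1, 0, 1))   # x^2 + 1 = 0: no real roots -> -1
```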

nTrain = 100; % training set
nOutput = 1;
nSecondLayer = 7; % size of hidden layer (arbitrary)
trainExamples = rand(4,nTrain); % independent random set of examples
trainExamples(4,:) = ones(1,nTrain);  % set the dummy input to be 1

T = sign(trainExamples(2,:).^2-4*trainExamples(1,:).*trainExamples(3,:)); % The teacher provides this for every example
%The student neuron starts with random weights
w1 = rand(4,nSecondLayer);
w2 = rand(nSecondLayer,nOutput);
nepochs=0;
nwrong = 1;
S1(nSecondLayer,nTrain) = 0;
S2(nOutput,nTrain) = 0; 

while( nwrong>1e-2 )  % more than some small number close to zero
    for i=1:nTrain
        x = trainExamples(:,i);
        S2(:,i) = w2'*S1(:,i);
        deltak = tanh(S2(:,i)) - T(:,i); % back propagate
        deltaj = (1-tanh(S2(:,i)).^2).*(w2*deltak); % back propagate      
        w2 = w2 - tanh(S1(:,i))*deltak'; % updating
        w1 = w1- x*deltaj'; % updating  
    end
   output = tanh(w2'*tanh(w1'*trainExamples));
   dOutput = output-T;
   nwrong = sum(abs(dOutput));
   disp(nwrong)
   nepochs = nepochs+1          
end
nepochs

Thanks


1 Answer


After banging my head against the wall for a few days, I found a small typo. Here is a working solution:

clear
% Set up parameters
nInput = 4; % number of nodes in input
nOutput = 1; % number of nodes in output
nHiddenLayer = 7; % number of nodes in the hidden layer
nTrain = 1000; % size of training set
epsilon = 0.01; % learning rate


% Set up the inputs: random coefficients between -1 and 1
trainExamples = 2*rand(nInput,nTrain)-1;
trainExamples(nInput,:) = ones(1,nTrain);  %set the last input to be 1

% Set up the student neurons for both hidden and the output layers
S1(nHiddenLayer,nTrain) = 0;
S2(nOutput,nTrain) = 0;

% The student neuron starts with random weights from both input and the hidden layers
w1 = rand(nInput,nHiddenLayer);
w2 = rand(nHiddenLayer+1,nOutput);

% Calculate the teacher outputs according to the quadratic formula
T = sign(trainExamples(2,:).^2-4*trainExamples(1,:).*trainExamples(3,:));


% Initialise values for looping
nEpochs = 0;
nWrong = nTrain*0.01;
Wrong = [];
Epoch = [];

while(nWrong >= (nTrain*0.01)) % as long as more than 1% of outputs are wrong
    for i=1:nTrain
        x = trainExamples(:,i);
        S1(1:nHiddenLayer,i) = w1'*x;
        S2(:,i) = w2'*[tanh(S1(:,i));1];
        delta1 = tanh(S2(:,i)) - T(:,i); % back propagate
        delta2 = (1-tanh(S1(:,i)).^2).*(w2(1:nHiddenLayer,:)*delta1); % back propagate       
        w1 = w1 - epsilon*x*delta2'; % update
        w2 = w2 - epsilon*[tanh(S1(:,i));1]*delta1'; % update
    end

    outputNN = sign(tanh(S2));
    delta = outputNN - T; % difference between student and teacher
    nWrong = sum(abs(delta/2));
    nEpochs = nEpochs + 1;
    Wrong = [Wrong nWrong];
    Epoch = [Epoch nEpochs];
end
plot(Epoch,Wrong);
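For readers following along without MATLAB, here is a rough NumPy port of the solution above. The variable names and the 50-epoch cap are my own assumptions for this sketch; the original loops until fewer than 1% of the training examples are misclassified.

```python
import numpy as np

# Rough NumPy port of the MATLAB solution above. The 50-epoch cap is an
# assumption for this sketch; the original loops until < 1% are wrong.
rng = np.random.default_rng(1)
n_input, n_hidden, n_train, eps = 4, 7, 1000, 0.01

X = rng.uniform(-1, 1, size=(n_input, n_train))
X[-1, :] = 1.0                                  # dummy bias input
T = np.sign(X[1] ** 2 - 4 * X[0] * X[2])        # teacher: sign of discriminant

w1 = rng.uniform(size=(n_input, n_hidden))
w2 = rng.uniform(size=(n_hidden + 1, 1))        # extra row: hidden-layer bias

def n_wrong():
    """Count training examples whose predicted sign disagrees with T."""
    H = np.vstack([np.tanh(w1.T @ X), np.ones(n_train)])
    return int(np.sum(np.sign(np.tanh(w2.T @ H)).ravel() != T))

before = n_wrong()
for _ in range(50):
    for i in range(n_train):
        x = X[:, i]
        s1 = w1.T @ x                           # hidden pre-activations
        h = np.concatenate([np.tanh(s1), [1.0]])
        s2 = w2.T @ h                           # output pre-activation
        d1 = np.tanh(s2) - T[i]                 # output-layer delta
        d2 = (1 - np.tanh(s1) ** 2) * (w2[:n_hidden, 0] * d1)
        w1 -= eps * np.outer(x, d2)
        w2 -= eps * np.outer(h, d1)
after = n_wrong()
```

The misclassification count `after` should be well below the initial `before` once a few dozen epochs have run.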
Answered 2012-06-10T07:16:32.010