
I've created a neural network to model a certain (simple) input-output relationship. When I look at the time-series response plot in the nntraintool GUI, the predictions seem quite adequate; however, when I try to do out-of-sample prediction the results are nowhere close to the function being modelled.

I've googled this problem extensively and messed around with my code to no avail; I'd really appreciate a little insight into what I've been doing wrong.

I've included a minimal working example below.

 A = 1:1000;  B = 10000*sin(A);  C = A.^2 + B;
 Set = [A' B' C'];
 input = Set(:,1:end-1);
 target = Set(:,end);
 inputSeries = tonndata(input(1:700,:),false,false);
 targetSeries = tonndata(target(1:700,:),false,false);

 inputSeriesVal = tonndata(input(701:end,:),false,false);
 targetSeriesVal = tonndata(target(701:end,:),false,false);

 inputDelays = 1:2;
 feedbackDelays = 1:2;
 hiddenLayerSize = 5;
 net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);

[inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);
net.divideFcn = 'divideblock';  % Divide data in blocks
net.divideMode = 'time';  % Divide up every value

 % Train the Network
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
Y = net(inputs,inputStates,layerStates); 

 % Prediction Attempt
 % Prediction Attempt
delay = length(inputDelays);  N = 300;
inputSeriesPred  = [inputSeries(end-delay+1:end),inputSeriesVal];
targetSeriesPred = [targetSeries(end-delay+1:end),con2seq(nan(1,N))];
netc = closeloop(net);
[Xs,Xi,Ai,Ts] = preparets(netc,inputSeriesPred,{},targetSeriesPred);
yPred = netc(Xs,Xi,Ai);
perf = perform(netc,targetSeriesVal,yPred);  % perform expects (net,targets,outputs)

 figure;
plot([cell2mat(targetSeries),nan(1,N);
      nan(1,length(targetSeries)),cell2mat(yPred);
      nan(1,length(targetSeries)),cell2mat(targetSeriesVal)]')
legend('Original Targets','Network Predictions','Expected Outputs')

I realise a NARX net with time delays is probably overkill for this type of problem, but I intend to use this example as a base for a more complicated time-series problem in the future.

Kind regards, James


3 Answers


I'm not sure whether you've already solved this, but there is at least one other solution to your problem.

Since you are working with a time series, it is better (at least in this case) to set net.divideFcn = 'dividerand'. 'divideblock' will use only the first portion of the time series for training, which can lose information about long-term trends.
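If it helps, a minimal sketch of that change using the variable names from the question (the split ratios shown are the toolbox defaults, included only for clarity):

```matlab
net = narxnet(1:2,1:2,5);
net.divideFcn = 'dividerand';       % random division instead of 'divideblock'
net.divideParam.trainRatio = 0.70;  % default ratios, written out explicitly
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
[inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
```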

Answered 2013-03-13T07:38:53.727

The most likely causes of poor generalization from the training data to new data are that either (1) there was not enough training data to characterize the problem, or (2) the neural network has more neurons and delays than the problem needs, so it overfits the data (i.e., it can simply memorize the examples rather than having to work out how they are related).

The fix for (1) is usually more data. The fix for (2) is to reduce the number of tap delays and/or neurons.
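As a sketch, shrinking the network in the question's setup might look like this (the delay and neuron counts here are illustrative, not tuned values):

```matlab
inputDelays = 1:1;    % fewer tap delays than the original 1:2
feedbackDelays = 1:1;
hiddenLayerSize = 2;  % fewer hidden neurons than the original 5
net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);
```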

Hope this helps!

Answered 2013-02-23T23:59:31.943

Increase inputDelays, feedbackDelays, and hiddenLayerSize as follows:

 inputDelays = 1:30;
 feedbackDelays = 1:3;
 hiddenLayerSize = 30;

Also change the divide function to

net.divideFcn = 'dividerand';

This change worked for me, even though the network takes longer to train.

Answered 2020-04-07T13:25:33.273