
I am new to torch. Recently I have been trying to do multiple linear regression with torch, but the loss keeps coming out as inf and nan.

For the first two epochs the loss is clearly blowing up. Here is my code.

dataset= 
124.0000   81.6900   64.5000  118.0000
 150.0000  103.8400   73.3000  143.0000
   ...
 137.0000   94.9600   67.0000  191.0000
 110.0000   99.7900   75.5000  192.0000
   ...
  94.0000   89.4000   64.5000  139.0000
  74.0000   93.0000   74.0000  148.0000
  89.0000   93.5900   75.5000  179.0000
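For reference, a minimal sketch of how the data above can be held as a plain torch Tensor (using only a few of the rows listed; the actual loading code is omitted). Column 1 is the target, columns 2-4 are the inputs:

```lua
require 'torch'

-- first column = target, remaining three columns = inputs
dataset = torch.Tensor{
   {124.0000,  81.6900, 64.5000, 118.0000},
   {150.0000, 103.8400, 73.3000, 143.0000},
   { 94.0000,  89.4000, 64.5000, 139.0000},
}
```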
require 'nn'
require 'optim'

linLayer = nn.Linear(3, 1)
model = nn.Sequential()
model:add(linLayer)
criterion = nn.MSECriterion()

-- flatten the model's parameters and gradients into 1-D views
x, dl_dx = model:getParameters()

feval = function(x_new)
   if x ~= x_new then
      x:copy(x_new)
   end
   _nidx_ = (_nidx_ or 0) + 1
   if _nidx_ > (#dataset)[1] then _nidx_ = 1 end

   local sample = dataset[_nidx_]
   local inputs = sample[{ {2,4} }]
   local target = sample[{ {1} }]

   dl_dx:zero()

   local loss_x = criterion:forward(model:forward(inputs), target)
   model:backward(inputs, criterion:backward(model.output, target))

   -- return loss(x) and dloss/dx
   return loss_x, dl_dx
end


sgd_params = {
   learningRate = 1e-3,
   learningRateDecay = 1e-4,
   weightDecay = 0,
   momentum = 0
}
epochs = 100


for i = 1,epochs do
   current_loss = 0
   for j = 1,(#dataset)[1] do
      -- optim.sgd returns the updated x and a table of loss values
      _, fs = optim.sgd(feval, x, sgd_params)
      current_loss = current_loss + fs[1]
   end
   current_loss = current_loss / (#dataset)[1]
   print('epoch = ' .. i ..
      ' of ' .. epochs ..
      ' current loss = ' .. current_loss)
end

And the result:
epoch = 1 of 100 current loss = 8.1958765768632e+138    
epoch = 2 of 100 current loss = 5.0759297005752e+278    
epoch = 3 of 100 current loss = inf 
epoch = 4 of 100 current loss = inf 
epoch = 5 of 100 current loss = nan 
... ...
epoch = 97 of 100 current loss = nan    
epoch = 98 of 100 current loss = nan    
epoch = 99 of 100 current loss = nan    
epoch = 100 of 100 current loss = nan

What should I do about this problem? I used the same approach to train a logistic regression; the results seemed better than this, but still not good enough. What is wrong? Many thanks.
