
I have an issue using a learned model with torch.

I followed this howto http://code.cogbits.com/wiki/doku.php?id=tutorial_supervised to train a model. Everything is fine: my model was trained and I get correct results when I use it. But it's slow!

The testing part of the training script looks like this:

-- start the timer (referenced by the timing code below)
local time = sys.clock()

model:evaluate()

-- test over test data
print('==> testing on test set:')
for t = 1,testData:size() do
   -- disp progress
   xlua.progress(t, testData:size())

   -- get new sample
   local input = testData.data[t]
   if opt.type == 'double' then input = input:double()
   elseif opt.type == 'cuda' then input = input:cuda() end
   local target = testData.labels[t]

   -- test sample
   local pred = model:forward(input)
   confusion:add(pred, target)
end

-- timing
time = sys.clock() - time
time = time / testData:size()
print("\n==> time to test 1 sample = " .. (time*1000) .. 'ms')

I recorded the following speed during testing:

==> time to test 1 sample = 12.419194088996ms

(Of course it varies, but it's ~12 ms.)

I want to use the learned model on other images, so I wrote this in a new, simple script:

(... requires)

torch.setnumthreads(8)
torch.setdefaulttensortype('torch.FloatTensor')

model = torch.load('results/model.net')
model:evaluate()

(... Image loading, resizing and normalization)

local time = sys.clock()

local result_info = model:forward(cropped_image:double())

print("==> time to test 1 frame = " .. (sys.clock() - time) * 1000 .. "ms")

The time spent is much bigger; I get the following output:

==> time to test 1 frame = 212.7647127424ms

I tested with more than one image, always keeping the resizing and normalization outside the clock's measurements, and I always get > 200 ms per image.
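For the measurement itself, a single `forward` call can be dominated by one-off costs (first-call allocations, thread start-up). A minimal sketch that warms up once and then averages over repeated forwards, assuming `model` and `cropped_image` from the script above:

```lua
-- Warm up once so first-call allocations don't skew the measurement.
model:forward(cropped_image:double())

-- Average over several runs for a more stable per-frame time.
local nruns = 20
local time = sys.clock()
for i = 1, nruns do
   model:forward(cropped_image:double())
end
time = (sys.clock() - time) / nruns
print(string.format('==> average time per frame = %.3f ms', time * 1000))
```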

I don't understand what I'm doing wrong, or why my code is so much slower than it was during training/testing.

Thanks!
