
I'm trying torchinfo on my model, which takes two inputs: one 3D and one 1D. So I tried:
print(summary(model, input_size=([(10,1684,40),(10)])))
but I got:

TypeError: rand() argument after * must be an iterable, not int
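This TypeError is consistent with `(10)` being parsed as a plain int rather than a one-element tuple: torchinfo expands each entry of `input_size` as a shape, and a bare int cannot be unpacked. A quick check of Python's tuple syntax (independent of torchinfo):

```python
# (10) is just the integer 10 in parentheses; only a trailing comma
# makes a one-element tuple.
a = (10)
b = (10,)
print(type(a).__name__)  # int
print(type(b).__name__)  # tuple
```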

I then tried:

print(summary(model, input_size=([(10,1684,40),(10,20)])))

but got:
'lengths' argument should be a 1D CPU int64 tensor, but got 2D cuda:0 Long tensor

I think 'lengths' corresponds to the second argument: (10) in the first snippet and (10,20) in the second.
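For reference, `pack_padded_sequence` really does require a 1D, CPU, int64 `lengths` tensor. A minimal standalone sketch using the shapes from the question (batch 10, 1684 frames, 40 features), with all lengths valid:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

seq = torch.randn(10, 1684, 40)                       # (batch, time, features)
lengths = torch.full((10,), 1684, dtype=torch.int64)  # 1D, int64, on CPU

packed = pack_padded_sequence(seq, lengths, batch_first=True)
print(packed.data.shape)  # torch.Size([16840, 40])
```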

What should I do?

I fixed the second argument and added .cpu() to the lengths:

print(summary(model, input_size=([(10,1684,40),(10,)])))

But now I get:

RuntimeError                              Traceback (most recent call last)
~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
    267             if isinstance(x, (list, tuple)):
--> 268                 _ = model.to(device)(*x, **kwargs)
    269             elif isinstance(x, dict):

~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1101                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102             return forward_call(*input, **kwargs)
   1103         # Do not call functions when jit is used

~/06rnn_attentionf6/my_model.py in forward(self, input_sequence, input_lengths, label_sequence)
     85         # エンコーダに入力する
---> 86         enc_out, enc_lengths = self.encoder(input_sequence,
     87                                             input_lengths)

~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1119 
-> 1120         result = forward_call(*input, **kwargs)
   1121         if _global_forward_hooks or self._forward_hooks:

~/06rnn_attentionf6/encoder.py in forward(self, sequence, lengths)
    101             rnn_input \
--> 102                 = nn.utils.rnn.pack_padded_sequence(output, 
    103                                                   output_lengths.cpu(), #ここを修正

~/.local/lib/python3.8/site-packages/torch/nn/utils/rnn.py in pack_padded_sequence(input, lengths, batch_first, enforce_sorted)
    248     data, batch_sizes = \
--> 249         _VF._pack_padded_sequence(input, lengths, batch_first)
    250     return _packed_sequence_init(data, batch_sizes, sorted_indices, None)

RuntimeError: Length of all samples has to be greater than 0, but found an element in 'lengths' that is <= 0

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_715630/614744292.py in <module>
      1 from torchinfo import summary
----> 2 print(summary(model, input_size=([(10,1684,40),(10,)])))

~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in summary(model, input_size, input_data, batch_dim, cache_forward_pass, col_names, col_width, depth, device, dtypes, row_settings, verbose, **kwargs)
    199         input_data, input_size, batch_dim, device, dtypes
    200     )
--> 201     summary_list = forward_pass(
    202         model, x, batch_dim, cache_forward_pass, device, **kwargs
    203     )

~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
    275     except Exception as e:
    276         executed_layers = [layer for layer in summary_list if layer.executed]
--> 277         raise RuntimeError(
    278             "Failed to run torchinfo. See above stack traces for more details. "
    279             f"Executed layers up to: {executed_layers}"

RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: []

What should I do?
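The final RuntimeError likely happens because torchinfo builds the tensors for `input_size` from random values (note the `rand()` in the first traceback), so the generated 'lengths' can contain zeros, which `pack_padded_sequence` rejects. One way around this, a sketch assuming the model's forward signature matches the traceback, is to pass concrete tensors through torchinfo's `input_data` argument instead of `input_size`:

```python
import torch
# from torchinfo import summary  # as in the question

# Concrete inputs with valid lengths instead of randomly generated ones.
features = torch.randn(10, 1684, 40)
lengths = torch.full((10,), 1684, dtype=torch.int64)  # all > 0, 1D, on CPU

# print(summary(model, input_data=[features, lengths]))
```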
