
I am trying to specify a dynamic number of layers, and I seem to be doing it wrong. My issue is that when I define the 100 layers as shown here, I get an error in the forward step, but when I define each layer explicitly it works. Simplified example below:

import torch.nn as nn
from pytorch_lightning import LightningModule

class PredictFromEmbeddParaSmall(LightningModule):
    def __init__(self, hyperparams={'lr': 0.0001}):
        super(PredictFromEmbeddParaSmall, self).__init__()
        # Input is something like tensor.size=[768*100]
        self.TO_ILLUSTRATE = nn.Linear(768, 5)
        self.para_count = 100
        self.enc_red = []
        for i in range(self.para_count):
            self.enc_red.append(nn.Linear(768, 5))
        # gather the layers' outputs
        self.dense_simple1 = nn.Linear(5*100, 2)
        self.output = nn.Sigmoid()
    def forward(self, x):
        # first input to enc_red
        x_vecs = []
        for i in range(self.para_count):
            layer = self.enc_red[i]
            # The first dim is the batch size here, output is correct
            processed_slice = x[:, i * 768:(i + 1) * 768]
            # This works and gives an output of size 5
            rand = self.TO_ILLUSTRATE(processed_slice)
            # This will fail with the error below
            ret = layer(processed_slice)
            # more things happen afterwards, omitted since we fail earlier

I get this error when executing "ret = layer(processed_slice)":

RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_addmm

Is there a smarter way to program this, or a way to fix the error?


1 Answer


You should use ModuleList from PyTorch instead of a plain Python list: https://pytorch.org/docs/master/generated/torch.nn.ModuleList.html. This is because PyTorch has to register every module of your model. Layers kept in a plain list are not registered as submodules, so model.cuda() (or the .to(device) call that Lightning performs for you) never moves their parameters to the GPU, which is exactly the device mismatch error you are facing.
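
You can see the difference directly (a minimal sketch; the Holder module and sizes are just for illustration): parameters that live only in a plain list never show up in .parameters(), so .cuda() and the optimizer never see them:

    import torch.nn as nn

    class Holder(nn.Module):
        def __init__(self):
            super().__init__()
            self.plain = [nn.Linear(768, 5)]                 # NOT registered
            self.mlist = nn.ModuleList([nn.Linear(768, 5)])  # registered

    m = Holder()
    # Prints 2 (weight + bias of the ModuleList layer only); the plain-list
    # layer is invisible, so .cuda()/.to(device) never moves its weights.
    print(len(list(m.parameters())))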

Your code should then look something like this:

class PredictFromEmbeddParaSmall(LightningModule):
    def __init__(self, hyperparams={'lr': 0.0001}):
        super(PredictFromEmbeddParaSmall, self).__init__()
        # Input is something like tensor.size=[768*100]
        self.TO_ILLUSTRATE = nn.Linear(768, 5)
        self.para_count = 100
        self.enc_red = nn.ModuleList()                   # << MODIFIED LINE <<
        for i in range(self.para_count):
            self.enc_red.append(nn.Linear(768, 5))
        # gather the layers' outputs
        self.dense_simple1 = nn.Linear(5*100, 2)
        self.output = nn.Sigmoid()
    def forward(self, x):
        # first input to enc_red
        x_vecs = []
        for i in range(self.para_count):
            layer = self.enc_red[i]
            # The first dim is the batch size here, output is correct
            processed_slice = x[:, i * 768:(i + 1) * 768]
            # This works and gives an output of size 5
            rand = self.TO_ILLUSTRATE(processed_slice)
            # This now works too: the layer moved to the GPU with the model
            ret = layer(processed_slice)
            # more things happen afterwards, omitted here

Then it should work fine!
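
For example (a minimal sketch, assuming CUDA is available and an illustrative batch size of 8):

    model = PredictFromEmbeddParaSmall().cuda()    # ModuleList layers move with the model
    x = torch.randn(8, 768 * 100, device='cuda')
    model(x)                                       # per-layer calls no longer hit the device error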

Edit: an alternative way.

Instead of ModuleList you can also just use nn.Sequential; this lets you avoid the for loop in the forward pass. It also means you won't have access to the intermediate activations, so if you need those, this is not the solution for you. Note that nn.Sequential chains the layers, feeding each layer's output into the next, rather than applying each layer to its own input slice, so the layer dimensions have to be adapted to match.

class PredictFromEmbeddParaSmall(LightningModule):
    def __init__(self, hyperparams={'lr': 0.0001}):
        super(PredictFromEmbeddParaSmall, self).__init__()
        # Input is something like tensor.size=[768*100]
        self.TO_ILLUSTRATE = nn.Linear(768, 5)
        self.enc_ref = []
        for i in range(100):
            self.enc_ref.append(nn.Linear(768, 5))

        self.enc_red = nn.Sequential(*self.enc_ref)      # << MODIFIED LINE <<
        # gather the layers' outputs
        self.dense_simple1 = nn.Linear(5*100, 2)
        self.output = nn.Sigmoid()
    def forward(self, x):
        # all layers are applied in one call, no for loop
        out = self.enc_red(x)                            # << MODIFIED LINE <<
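
If you do want 100 independent per-slice projections without the Python loop, you could also stack all the weights into a single parameter tensor and apply them with one batched einsum. A minimal sketch (the ParallelLinear name and the simple random initialization are just illustrative):

    import torch
    import torch.nn as nn

    class ParallelLinear(nn.Module):
        # n independent d_in -> d_out projections, one per input slice
        def __init__(self, n=100, d_in=768, d_out=5):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(n, d_in, d_out) * d_in ** -0.5)
            self.bias = nn.Parameter(torch.zeros(n, d_out))

        def forward(self, x):                   # x: (batch, n * d_in)
            n, d_in, d_out = self.weight.shape
            x = x.view(x.size(0), n, d_in)      # (batch, n, d_in)
            # slice i is multiplied by weight[i] in one batched contraction
            out = torch.einsum('bni,nio->bno', x, self.weight) + self.bias
            return out.reshape(-1, n * d_out)   # (batch, n * d_out)

Since weight and bias are registered as nn.Parameter, they move to the GPU together with the model, just like the ModuleList version.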

answered Jul 16, 2020 at 14:53