
I am using a 3-dimensional convolution link (ConvolutionND) in my chain.

The forward computation runs fine (I checked the shapes of the intermediate results to make sure I understood the meaning of the convolution_nd parameters correctly), but during the backward computation a CuDNNError is raised with the message CUDNN_STATUS_NOT_SUPPORTED.

The cover_all parameter of ConvolutionND is left at its default value of False, so from the documentation I cannot tell what is causing the error.

This is how I define one of the convolution layers:

self.conv1 = chainer.links.ConvolutionND(3, 1, 4, (3, 3, 3)).to_gpu(self.GPU_1_ID)

The call stack is:

File "chainer/function_node.py", line 548, in backward_accumulate
    gxs = self.backward(target_input_indexes, grad_outputs)
File "chainer/functions/connection/convolution_nd.py", line 118, in backward
    gy, W, stride=self.stride, pad=self.pad, outsize=x_shape)
File "chainer/functions/connection/deconvolution_nd.py", line 310, in deconvolution_nd
    y, = func.apply(args)
File "chainer/function_node.py", line 258, in apply
    outputs = self.forward(in_data)
File "chainer/functions/connection/deconvolution_nd.py", line 128, in forward
    return self._forward_cudnn(x, W, b)
File "chainer/functions/connection/deconvolution_nd.py", line 105, in _forward_cudnn
    tensor_core=tensor_core)
File "cupy/cudnn.pyx", line 881, in cupy.cudnn.convolution_backward_data
File "cupy/cuda/cudnn.pyx", line 975, in cupy.cuda.cudnn.convolutionBackwardData_v3
File "cupy/cuda/cudnn.pyx", line 461, in cupy.cuda.cudnn.check_status
cupy.cuda.cudnn.CuDNNError: CUDNN_STATUS_NOT_SUPPORTED

So is there anything in particular to watch out for when using ConvolutionND?

For example, here is the code that fails:

import chainer
from chainer import functions as F
from chainer import links as L
from chainer.backends import cuda

import numpy as np
import cupy as cp

chainer.global_config.cudnn_deterministic = False

NB_MASKS = 60
NB_FCN = 3
NB_CLASS = 17

class MFEChain(chainer.Chain):
    """docstring for Wavelphasenet."""
    def __init__(self,
                 FCN_Dim,
                 gpu_ids=None):
        super(MFEChain, self).__init__()

        self.GPU_0_ID, self.GPU_1_ID = (0, 1) if gpu_ids is None else gpu_ids
        with self.init_scope():
            self.conv1 = chainer.links.ConvolutionND(3, 1, 4, (3, 3, 3)).to_gpu(
                self.GPU_1_ID
            )

    def __call__(self, inputs):
        ### Pad input ###
        processed_sequences = []
        for convolved in inputs:
            ## Transform to sequences
            copy = convolved if self.GPU_0_ID == self.GPU_1_ID else F.copy(convolved, self.GPU_1_ID)
            processed_sequences.append(copy)

        reprocessed_sequences = []
        with cuda.get_device(self.GPU_1_ID):
            for convolved in processed_sequences:
                convolved = F.expand_dims(convolved, 0)
                convolved = F.expand_dims(convolved, 0)
                convolved = self.conv1(convolved)

                reprocessed_sequences.append(convolved)

            states = F.vstack(reprocessed_sequences)

            logits = states

            ret_logits = logits if self.GPU_0_ID == self.GPU_1_ID else F.copy(logits, self.GPU_0_ID)
        return ret_logits

def mfe_test():
    mfe = MFEChain(150)
    inputs = list(
        chainer.Variable(
            cp.random.randn(
                NB_MASKS,
                11,
                in_len,
                dtype=cp.float32
            )
        ) for in_len in [53248]
    )
    val = mfe(inputs)
    grad = cp.ones(val.shape, dtype=cp.float32)
    val.grad = grad
    val.backward()
    for i in inputs:
        print(i.grad)

if __name__ == "__main__":
    mfe_test()

1 Answer


cupy.cuda.cudnn.convolutionBackwardData_v3 is incompatible with certain combinations of parameters, as described in an issue on the official GitHub.

Unfortunately, that issue only covers deconvolution_2d.py (not deconvolution_nd.py), so my guess is that in your case the decision about whether to use cuDNN goes wrong.

You can check your parameters by confirming the following:

  1. Check whether a dilation parameter (!= 1) or a groups parameter (!= 1) is passed to the convolution.
  2. Print chainer.config.cudnn_deterministic, configuration.config.autotune, and configuration.config.use_cudnn_tensor_core (see the snippet below).
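For point 2, a minimal sketch of printing those flags (assuming Chainer v4 or later, where these configuration entries exist):

import chainer
from chainer import configuration

# Print the cuDNN-related configuration flags that influence which
# cuDNN code path Chainer selects during forward/backward.
print(chainer.config.use_cudnn)
print(chainer.config.cudnn_deterministic)
print(configuration.config.autotune)
print(configuration.config.use_cudnn_tensor_core)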

Further support can be obtained by opening an issue on the official GitHub.
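As a workaround in the meantime (my suggestion, not something from the issue above): you can force Chainer to skip cuDNN entirely, so that deconvolution_nd falls back to the pure CuPy implementation. Slower, but it avoids the unsupported call. Using the names from your mfe_test:

import chainer

# Disable cuDNN for everything executed inside this scope; the
# backward pass must run inside it too, since the configuration is
# read when each function actually executes.
with chainer.using_config('use_cudnn', 'never'):
    val = mfe(inputs)
    val.grad = grad
    val.backward()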

The code you showed is quite complex.

To isolate the problem, the following code may help.

from chainer import Variable, Chain
from chainer import links as L
from chainer import functions as F

import numpy as np
from six import print_

batch_size = 1
in_channel = 1
out_channel = 1

class MyLink(Chain):
    def __init__(self):
        super(MyLink, self).__init__()
        with self.init_scope():
            # 3-dimensional convolution, 3x3x3 kernel, no bias; the
            # weights are fixed to ones so the gradient is easy to verify.
            # Note the weight shape is (out_channels, in_channels, 3, 3, 3).
            self.conv = L.ConvolutionND(
                3, in_channel, out_channel, (3, 3, 3), nobias=True,
                initialW=np.ones((out_channel, in_channel, 3, 3, 3),
                                 dtype=np.float32))

    def __call__(self, x):
        return F.sum(self.conv(x))

if __name__ == "__main__":
    my_link = MyLink()
    my_link.to_gpu(0)
    # The input must be float32: np.ones defaults to float64, which
    # would not match the link's float32 weights.
    batch = Variable(np.ones((batch_size, in_channel, 3, 3, 3),
                             dtype=np.float32))
    batch.to_gpu(0)
    loss = my_link(batch)
    loss.backward()
    print_(batch.grad)
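If this minimal example runs cleanly on your GPU, a variant worth trying (my suggestion, not part of the issue above) is to feed it an input shaped like the one in your question, to see whether the large spatial extent alone triggers the error:

import cupy as cp
from chainer import Variable

# Same minimal link, but with an input shaped like the question's:
# (batch, channel, 60, 11, 53248). If this raises
# CUDNN_STATUS_NOT_SUPPORTED, the shape itself is the trigger.
my_link = MyLink()
my_link.to_gpu(0)
batch = Variable(cp.random.randn(1, 1, 60, 11, 53248, dtype=cp.float32))
loss = my_link(batch)
loss.backward()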
answered 2018-07-20T09:05:14.377