Problem: segmentation fault (SIGSEGV, signal 11)
Brief program description:
- A high-performance GPU (CUDA) server that handles requests from remote clients
- Each incoming request spawns a thread that performs computations on several GPUs (serially, not in parallel) and sends the result back to the client; this usually takes 10-200 ms, since each request involves dozens or hundreds of kernel launches
- Request-handling threads have exclusive access to the GPUs, meaning that if one thread is running something on GPU1, all other threads must wait until it finishes
- Compiled with -arch=sm_35 -code=compute_35
- Using CUDA 5.0
- I am not explicitly using any CUDA atomics or in-kernel synchronization barriers, though I do use Thrust (various functions) and cudaDeviceSynchronize()
- Nvidia driver: NVIDIA dlloader X Driver 313.30 Wed Mar 27 15:33:21 PDT 2013
OS and hardware information:
- Linux lub1 3.5.0-23-generic #35~precise1-Ubuntu x86_64 x86_64 x86_64 GNU/Linux
- GPUs: 4x GeForce GTX TITAN
- 32 GB RAM
- Motherboard: ASUS MAXIMUS V EXTREME
- CPU: i7-3770K
Crash information:
The crash occurs "randomly" after several thousand requests have been processed (sometimes sooner, sometimes later). Stack traces from some of the crashes look like this:
#0 0x00007f8a5b18fd91 in __pthread_getspecific (key=4) at pthread_getspecific.c:62
#1 0x00007f8a5a0c0cf3 in ?? () from /usr/lib/libcuda.so.1
#2 0x00007f8a59ff7b30 in ?? () from /usr/lib/libcuda.so.1
#3 0x00007f8a59fcc34a in ?? () from /usr/lib/libcuda.so.1
#4 0x00007f8a5ab253e7 in ?? () from /usr/local/cuda-5.0/lib64/libcudart.so.5.0
#5 0x00007f8a5ab484fa in cudaGetDevice () from /usr/local/cuda-5.0/lib64/libcudart.so.5.0
#6 0x000000000046c2a6 in thrust::detail::backend::cuda::arch::device_properties() ()

#0 0x00007ff03ba35d91 in __pthread_getspecific (key=4) at pthread_getspecific.c:62
#1 0x00007ff03a966cf3 in ?? () from /usr/lib/libcuda.so.1
#2 0x00007ff03aa24f8b in ?? () from /usr/lib/libcuda.so.1
#3 0x00007ff03b3e411c in ?? () from /usr/local/cuda-5.0/lib64/libcudart.so.5.0
#4 0x00007ff03b3dd4b3 in ?? () from /usr/local/cuda-5.0/lib64/libcudart.so.5.0
#5 0x00007ff03b3d18e0 in ?? () from /usr/local/cuda-5.0/lib64/libcudart.so.5.0
#6 0x00007ff03b3fc4d9 in cudaMemset () from /usr/local/cuda-5.0/lib64/libcudart.so.5.0
#7 0x0000000000448177 in libgbase::cudaGenericDatabase::cudaCountIndividual(unsigned int, ...

#0 0x00007f01db6d6153 in ?? () from /usr/lib/libcuda.so.1
#1 0x00007f01db6db7e4 in ?? () from /usr/lib/libcuda.so.1
#2 0x00007f01db6dbc30 in ?? () from /usr/lib/libcuda.so.1
#3 0x00007f01db6dbec2 in ?? () from /usr/lib/libcuda.so.1
#4 0x00007f01db6c6c58 in ?? () from /usr/lib/libcuda.so.1
#5 0x00007f01db6c7b49 in ?? () from /usr/lib/libcuda.so.1
#6 0x00007f01db6bdc22 in ?? () from /usr/lib/libcuda.so.1
#7 0x00007f01db5f0df7 in ?? () from /usr/lib/libcuda.so.1
#8 0x00007f01db5f4e0d in ?? () from /usr/lib/libcuda.so.1
#9 0x00007f01db5dbcea in ?? () from /usr/lib/libcuda.so.1
#10 0x00007f01dc11e0aa in ?? () from /usr/local/cuda-5.0/lib64/libcudart.so.5.0
#11 0x00007f01dc1466dd in cudaMemcpy () from /usr/local/cuda-5.0/lib64/libcudart.so.5.0
#12 0x0000000000472373 in thrust::detail::backend::cuda::detail::b40c_thrust::BaseRadixSortingEnactor

#0 0x00007f397533dd91 in __pthread_getspecific (key=4) at pthread_getspecific.c:62
#1 0x00007f397426ecf3 in ?? () from /usr/lib/libcuda.so.1
#2 0x00007f397427baec in ?? () from /usr/lib/libcuda.so.1
#3 0x00007f39741a9840 in ?? () from /usr/lib/libcuda.so.1
#4 0x00007f39741add08 in ?? () from /usr/lib/libcuda.so.1
#5 0x00007f3974194cea in ?? () from /usr/lib/libcuda.so.1
#6 0x00007f3974cd70aa in ?? () from /usr/local/cuda-5.0/lib64/libcudart.so.5.0
#7 0x00007f3974cff6dd in cudaMemcpy () from /usr/local/cuda-5.0/lib64/libcudart.so.5.0
#8 0x000000000046bf26 in thrust::detail::backend::cuda::detail::checked_cudaMemcpy(void*
As you can see, it usually ends up in __pthread_getspecific, called from libcuda.so or from somewhere within the library itself. As far as I remember, there was only one case where it did not crash but instead hung in a strange way: the program was able to answer my requests as long as they did not involve any GPU computation (statistics etc.), but for anything else I never got a reply. Also, running nvidia-smi -L did not work; it just hung there until I rebooted the computer. To me this looks somewhat like a GPU deadlock, though it might be an entirely different problem.
Does anyone have an idea where the problem might be, or what could cause this?
Update:
Some additional analysis:
cuda-memcheck does not print any error messages.
valgrind with leak checking does print quite a lot of messages, like the ones below (there are hundreds like this):
==2464== 16 bytes in 1 blocks are definitely lost in loss record 6 of 725
==2464==    at 0x4C2B1C7: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==2464==    by 0x568C202: ??? (in /usr/local/cuda-5.0/lib64/libcudart.so.5.0.35)
==2464==    by 0x56B859D: ??? (in /usr/local/cuda-5.0/lib64/libcudart.so.5.0.35)
==2464==    by 0x5050C82: __nptl_deallocate_tsd (pthread_create.c:156)
==2464==    by 0x5050EA7: start_thread (pthread_create.c:315)
==2464==    by 0x6DDBCBC: clone (clone.S:112)
==2464==
==2464== 16 bytes in 1 blocks are definitely lost in loss record 7 of 725
==2464==    at 0x4C2B1C7: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==2464==    by 0x568C202: ??? (in /usr/local/cuda-5.0/lib64/libcudart.so.5.0.35)
==2464==    by 0x56B86D8: ??? (in /usr/local/cuda-5.0/lib64/libcudart.so.5.0.35)
==2464==    by 0x5677E0F: ??? (in /usr/local/cuda-5.0/lib64/libcudart.so.5.0.35)
==2464==    by 0x400F90D: _dl_fini (dl-fini.c:254)
==2464==    by 0x6D23900: __run_exit_handlers (exit.c:78)
==2464==    by 0x6D23984: exit (exit.c:100)
==2464==    by 0x6D09773: (below main) (libc-start.c:258)
==2464==
==2464== 408 bytes in 3 blocks are possibly lost in loss record 222 of 725
==2464==    at 0x4C29DB4: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==2464==    by 0x5A89B98: ??? (in /usr/lib/libcuda.so.313.30)
==2464==    by 0x5A8A1F2: ??? (in /usr/lib/libcuda.so.313.30)
==2464==    by 0x5A8A3FF: ??? (in /usr/lib/libcuda.so.313.30)
==2464==    by 0x5B02E34: ??? (in /usr/lib/libcuda.so.313.30)
==2464==    by 0x5AFFAA5: ??? (in /usr/lib/libcuda.so.313.30)
==2464==    by 0x5AAF009: ??? (in /usr/lib/libcuda.so.313.30)
==2464==    by 0x5A7A6D3: ??? (in /usr/lib/libcuda.so.313.30)
==2464==    by 0x59B205C: ??? (in /usr/lib/libcuda.so.313.30)
==2464==    by 0x5984544: cuInit (in /usr/lib/libcuda.so.313.30)
==2464==    by 0x568983B: ??? (in /usr/local/cuda-5.0/lib64/libcudart.so.5.0.35)
==2464==    by 0x5689967: ??? (in /usr/local/cuda-5.0/lib64/libcudart.so.5.0.35)
More information:
I tried running on fewer cards (3, which is the minimum the program needs), and the crash still occurred.
The above is not correct: I had misconfigured the application and it was actually still using all four cards. Re-running the experiment with only 3 cards appears to have solved the problem; it has now been running for several hours under heavy load without crashing. I will let it run longer, then try different subsets of 3 cards to verify this, and at the same time test whether the problem is tied to one particular card.
I monitored the GPU temperatures during the test runs and nothing seemed wrong there. The cards reach about 78-80 °C under peak load, with fans running at about 56%, and this lasts until the crash occurs (several minutes); that does not seem too high to me.
One thing I have been thinking about is the way the requests are handled: there are a lot of cudaSetDevice calls, since each request spawns a new thread (I am using the mongoose library), and this thread then switches between cards by calling cudaSetDevice(id) with the appropriate device ID. The switching can happen several times during a single request, and I am not using any streams (so everything goes to the default (0) stream, IIRC). Could this be related to the crashes happening in pthread_getspecific?
I also tried upgrading to the latest driver (the 319.12 beta), but that did not help.