I am trying to run a simple Theano example on Ubuntu 16.04 with CUDA 8.0 and an NVIDIA GTX 1060 GPU, inside a Python virtual environment created by Anaconda. This is my .theanorc file:

[global]
floatX = float32
device = cuda
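
For reference, as far as I can tell from the Theano configuration documentation, the same file can also point Theano explicitly at the CUDA toolkit and cuDNN through [cuda] and [dnn] sections. The sketch below is only a guess based on a default system-level CUDA 8.0 install and is not part of my current config:

[cuda]
root = /usr/local/cuda-8.0

[dnn]
include_path = /usr/local/cuda-8.0/include
library_path = /usr/local/cuda-8.0/lib64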

The code I am trying to run is the short test example from the Theano website:

from theano import function, config, shared, tensor
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], tensor.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, tensor.Elemwise) and
              ('Gpu' not in type(x.op).__name__)
              for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')

When I run the code, I get a bunch of warnings and the following error:

ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu: ('nvcc return status', 1, 'for cmd', 'nvcc -shared -O3 -m64 -Xcompiler -DCUDA_NDARRAY_CUH=c72d035fdf91890f3b36710688069b2e,-DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION,-fPIC,-fvisibility=hidden -Xlinker -rpath,/home/eb/.theano/compiledir_Linux-4.8--generic-x86_64-with-debian-stretch-sid-x86_64-2.7.13-64/cuda_ndarray -I/home/eb/anaconda2/envs/deep/lib/python2.7/site-packages/theano/sandbox/cuda -I/home/eb/anaconda2/envs/deep/lib/python2.7/site-packages/numpy/core/include -I/home/eb/anaconda2/envs/deep/include/python2.7 -I/home/eb/anaconda2/envs/deep/lib/python2.7/site-packages/theano/gof -L/home/eb/anaconda2/envs/deep/lib -o /home/eb/.theano/compiledir_Linux-4.8--generic-x86_64-with-debian-stretch-sid-x86_64-2.7.13-64/cuda_ndarray/cuda_ndarray.so mod.cu -lcublas -lpython2.7 -lcudart')
Can not use cuDNN on context None: cannot compile with cuDNN. We got this error:
/tmp/try_flags_M8OZOh.c:4:19: fatal error: cudnn.h: No such file or directory
compilation terminated.

Mapped name None to device cuda: GeForce GTX 1060 6GB (0000:01:00.0)
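
The fatal error points at cudnn.h not being found by the compiler. To check whether the cuDNN header is actually present in the directories nvcc would normally search, a small script like the one below can be run; the candidate paths are guesses for a typical CUDA 8.0 install on Ubuntu, not paths reported by Theano itself:

import os

# Directories where cudnn.h would typically live after a default
# CUDA 8.0 / cuDNN install on Ubuntu; these paths are assumptions.
candidates = [
    "/usr/local/cuda/include",
    "/usr/local/cuda-8.0/include",
    "/usr/include",
]

for d in candidates:
    header = os.path.join(d, "cudnn.h")
    status = "found" if os.path.isfile(header) else "missing"
    print("%-40s %s" % (header, status))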

Surprisingly, the code still runs and prints the expected output:

[GpuElemwise{exp,no_inplace}(<GpuArrayType<None>(float32, (False,))>), HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 0.365814 seconds
Result is [ 1.23178029  1.61879349  1.52278066 ...,  2.20771813  2.29967761
  1.62323296]
Used the gpu
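
If I understand the new gpuarray backend correctly, plain GPU ops such as GpuElemwise can run through libgpuarray even when cuDNN is unusable, which would explain why the script still reports "Used the gpu". A minimal sketch to confirm whether cuDNN itself is usable, assuming theano.gpuarray.dnn exposes dnn_present() (as it does in Theano 0.9+, as far as I know):

import theano
from theano.gpuarray import dnn

# Report the configured device and whether Theano can actually use cuDNN;
# dnn_present() returning False would match the compile error above.
print("device from config: %s" % theano.config.device)
print("cuDNN usable: %s" % dnn.dnn_present())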

I am wondering whether I am missing something in the Theano configuration, or something else. Any idea what is going wrong?

P.S. All libraries are installed in my Python virtual environment, except for the CUDA libraries, which are installed at the system level. Thanks.
