
I want to use the GPU with CUDA when installing TensorFlow on my Ubuntu machine.

But I got stuck at this step of the official tutorial:

(screenshot of the tutorial step, which says to run ./configure from the root of the source tree)

Where exactly is this ./configure? And where is the root of my source tree?

My TensorFlow is installed at /usr/local/lib/python2.7/dist-packages/tensorflow, but I still cannot find ./configure there.

EDIT

I found ./configure thanks to Salvador Dali's answer below. But when I ran the example code, I got the following error:

>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 8
E tensorflow/stream_executor/cuda/cuda_driver.cc:466] failed call to cuInit: CUDA_ERROR_NO_DEVICE
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:86] kernel driver does not appear to be running on this host (cliu-ubuntu): /proc/driver/nvidia/version does not exist
I tensorflow/core/common_runtime/gpu/gpu_init.cc:112] DMA: 
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 8

It cannot find a CUDA device.
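The log points at the kernel driver. A quick way to confirm what it is complaining about, assuming a standard NVIDIA driver install, is to check whether the driver is actually loaded:

# If the driver is loaded, this file exists and reports the driver version.
cat /proc/driver/nvidia/version

# Empty output here means no NVIDIA kernel module is currently loaded.
lsmod | grep nvidia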

ANSWER

See my answer here on how I enabled GPU support.


4 Answers


It is a bash script that sits at the root of the source tree once you clone the repo. Here it is: https://github.com/tensorflow/tensorflow/blob/master/configure
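A minimal sketch of that workflow, assuming you build from a cloned checkout rather than from the pip-installed package under dist-packages:

# Clone the sources; the configure script lives at the repo root.
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow      # this directory is the "root of the source tree"
./configure        # walks you through the CUDA/cuDNN prompts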

Answered 2015-11-11T20:21:23.277
  • Answer to the first question: ./configure has already been found, based on the answer here. It is located under the tensorflow source folder.

  • Answer to the second question:

I do in fact have a GPU: NVIDIA Corporation GK208GLM [Quadro K610M], and I have CUDA + cuDNN installed. (So the rest of this answer assumes you have already installed CUDA 7.0 + cuDNN correctly, with the right versions.) The problem was that the driver was installed but the GPU still did not work. I got it working with the following steps:

At first, I ran lspci and got:

01:00.0 VGA compatible controller: NVIDIA Corporation GK208GLM [Quadro K610M] (rev ff)

The status here is rev ff. Then I ran sudo update-pciids, checked lspci again, and got:

01:00.0 VGA compatible controller: NVIDIA Corporation GK208GLM [Quadro K610M] (rev a1)

Now the status of the Nvidia GPU is correct: rev a1. But at this point tensorflow still did not use the GPU. The next steps were (the Nvidia driver I have installed is version nvidia-352):

sudo modprobe nvidia_352
sudo modprobe nvidia_352_uvm

These load the driver in the correct mode. Check again:

cliu@cliu-ubuntu:~$ lspci -vnn | grep -i VGA -A 12
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK208GLM [Quadro K610M] [10de:12b9] (rev a1) (prog-if 00 [VGA controller])
    Subsystem: Hewlett-Packard Company Device [103c:1909]
    Flags: bus master, fast devsel, latency 0, IRQ 16
    Memory at cb000000 (32-bit, non-prefetchable) [size=16M]
    Memory at 50000000 (64-bit, prefetchable) [size=256M]
    Memory at 60000000 (64-bit, prefetchable) [size=32M]
    I/O ports at 5000 [size=128]
    Expansion ROM at cc000000 [disabled] [size=512K]
    Capabilities: <access denied>
    Kernel driver in use: nvidia
cliu@cliu-ubuntu:~$ lsmod | grep nvidia
nvidia_uvm             77824  0 
nvidia               8646656  1 nvidia_uvm
drm                   348160  7 i915,drm_kms_helper,nvidia

We can see that Kernel driver in use: nvidia is now shown and the nvidia module is in the correct mode.
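If the modules need to be loaded again after every reboot, one common Ubuntu approach is to list them in /etc/modules so they are picked up at boot (a suggestion on top of the steps above, not something I needed here; the module names may differ for other driver versions):

# Hypothetical persistence step: append the module names to /etc/modules.
echo nvidia_352     | sudo tee -a /etc/modules
echo nvidia_352_uvm | sudo tee -a /etc/modules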

Now, use the example here to test the GPU:

cliu@cliu-ubuntu:~$ python
Python 2.7.9 (default, Apr  2 2015, 15:33:21) 
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
>>> b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
>>> c = tf.matmul(a, b)
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 8
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:888] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:88] Found device 0 with properties: 
name: Quadro K610M
major: 3 minor: 5 memoryClockRate (GHz) 0.954
pciBusID 0000:01:00.0
Total memory: 1023.81MiB
Free memory: 1007.66MiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:112] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:122] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:643] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Quadro K610M, pci bus id: 0000:01:00.0)
I tensorflow/core/common_runtime/gpu/gpu_region_allocator.cc:47] Setting region size to 846897152
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 8
Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Quadro K610M, pci bus id: 0000:01:00.0
I tensorflow/core/common_runtime/local_session.cc:107] Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Quadro K610M, pci bus id: 0000:01:00.0

>>> print sess.run(c)
b: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:289] b: /job:localhost/replica:0/task:0/gpu:0
a: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:289] a: /job:localhost/replica:0/task:0/gpu:0
MatMul: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:289] MatMul: /job:localhost/replica:0/task:0/gpu:0
[[ 22.  28.]
 [ 49.  64.]]

As you can see, the GPU is being used.

Answered 2016-03-18T16:39:04.560

For your second question: do you have a compatible GPU (NVIDIA compute capability 3.5 or greater), and did you install CUDA 7.0 + cuDNN as per the instructions? That is the most likely reason you would see this failure. If you did, then it may be a CUDA installation issue. When you run nvidia-smi, do you see your GPU listed? If not, you need to fix that first, which may require getting updated drivers and/or re-running nvidia-xconfig, etc.
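A minimal diagnostic sketch along those lines, assuming the NVIDIA driver utilities are installed:

# Lists the GPU and driver version if the driver is working.
nvidia-smi

# If nothing is listed, check whether the kernel module is loaded at all.
lsmod | grep nvidia

# Regenerating the X configuration can help after a driver update.
sudo nvidia-xconfig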

Answered 2015-11-12T00:45:13.093

The GPU version can only be rebuilt from source if you have the 7.0 cuda libraries and the 6.5 cudnn libraries. This needs to be updated by Google, I think.
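For reference, a rough sketch of that source build as it looked in the 0.x releases, assuming Bazel is installed and the CUDA/cuDNN versions mentioned above; the exact target names may differ between TensorFlow versions:

# Run from the root of the cloned source tree; prompts for the CUDA/cuDNN paths.
./configure

# Build the GPU-enabled pip package.
bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

# Install the freshly built wheel; the exact filename depends on the version.
pip install /tmp/tensorflow_pkg/tensorflow-*.whl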

Answered 2015-11-15T20:47:54.747