
I ran into this problem on a 4-GPU Amazon instance with a simple example script:

import skflow
import tensorflow as tf
from sklearn import cross_validation, datasets

iris = datasets.load_iris()
X_train, X_test, y_train, y_test = cross_validation.train_test_split(iris.data, iris.target,
    test_size=0.2, random_state=42)

def my_model(X, y):

    with tf.device('/gpu:1'):
        layers = skflow.ops.dnn(X, [1000, 500, 150], keep_prob=0.5)  # many neurons to see the impact on memory
    with tf.device('/cpu:0'):
        return skflow.models.logistic_regression(layers, y)

classifier = skflow.TensorFlowEstimator(model_fn=my_model, n_classes=3)
classifier.fit(X_train, y_train)

The output of nvidia-smi before launching the script is:

Fri Feb 19 11:30:22 2016       
+------------------------------------------------------+                       
| NVIDIA-SMI 346.46     Driver Version: 346.46         |                       
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K520           Off  | 0000:00:03.0     Off |                  N/A |
| N/A   40C    P0    41W / 125W |   2247MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GRID K520           Off  | 0000:00:04.0     Off |                  N/A |
| N/A   36C    P0    40W / 125W |   2113MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GRID K520           Off  | 0000:00:05.0     Off |                  N/A |
| N/A   41C    P0    43W / 125W |     53MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GRID K520           Off  | 0000:00:06.0     Off |                  N/A |
| N/A   39C    P0    41W / 125W |   1816MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

While the script is running:

Fri Feb 19 11:30:53 2016       
+------------------------------------------------------+                       
| NVIDIA-SMI 346.46     Driver Version: 346.46         |                       
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K520           Off  | 0000:00:03.0     Off |                  N/A |
| N/A   40C    P0    46W / 125W |   3926MiB /  4095MiB |     26%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GRID K520           Off  | 0000:00:04.0     Off |                  N/A |
| N/A   37C    P0    42W / 125W |   3926MiB /  4095MiB |     17%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GRID K520           Off  | 0000:00:05.0     Off |                  N/A |
| N/A   41C    P0    44W / 125W |     92MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GRID K520           Off  | 0000:00:06.0     Off |                  N/A |
| N/A   39C    P0    42W / 125W |   1856MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

So memory is being allocated on GPU 0, even though the code never mentions it. Do you know where this behavior comes from? It causes a problem, because we have multiple users on this instance and GPU 0 gets saturated even when nobody intends to use it.
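For reference, device placement can be checked with plain TensorFlow's log_device_placement option; a minimal standalone sketch, outside of skflow:

import tensorflow as tf

# Pin a trivial computation to GPU 1, then have the session print the actual
# placement of every op to stderr when it runs.
with tf.device('/gpu:1'):
    a = tf.constant([1.0, 2.0], name='a')
    b = tf.constant([3.0, 4.0], name='b')
    c = a + b

config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(c))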


2 Answers


One workaround we found is to modify skflow.TensorFlowEstimator.

The culprit is:

with self._graph.as_default():
    tf.set_random_seed(self.tf_random_seed)
    self._global_step = tf.Variable(
        0, name="global_step", trainable=False)

in skflow.TensorFlowEstimator.setup_training(), which we modified to:

with self._graph.as_default(), tf.device("/gpu:{0}".format(self.gpu_number)):
    tf.set_random_seed(self.tf_random_seed)
    self._global_step = tf.get_variable('global_step', [],
                                      initializer=tf.constant_initializer(0), trainable=False)

adding a gpu_number attribute to the class, and initializing the session with allow_soft_placement=True in skflow.TensorFlowEstimator._setup_training().
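Outside of skflow, the two pieces of that change look roughly like this (a sketch; gpu_number is the attribute we added, not part of the stock class):

import tensorflow as tf

# allow_soft_placement lets TensorFlow fall back to another device when an
# op has no kernel for the requested one, instead of raising an error.
config = tf.ConfigProto(allow_soft_placement=True)
session = tf.Session(config=config)

gpu_number = 1  # value taken from the attribute added to the estimator
with tf.device("/gpu:{0}".format(gpu_number)):
    global_step = tf.get_variable(
        'global_step', [],
        initializer=tf.constant_initializer(0), trainable=False)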

answered 2016-02-19T14:14:28.450

If you are only interested in using GPU 1, I would consider wrapping the script in something that sets CUDA_VISIBLE_DEVICES (see https://devblogs.nvidia.com/parallelforall/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/). That way the script only sees a single GPU, and it appears as if its id is 0. If you set it to 2,3 you get those GPUs with ids 0,1 respectively.
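For example (the script name below is only a placeholder), the variable can be set in the shell or from Python before TensorFlow is imported:

import os

# Hide every GPU except physical device 1 from this process; must be set
# before TensorFlow is imported so the CUDA runtime never maps the others.
# Shell equivalent: CUDA_VISIBLE_DEVICES=1 python my_script.py
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

import tensorflow as tf  # inside the process the remaining GPU is /gpu:0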

answered 2016-02-19T13:29:08.750