Lethr, 3 years ago
Python Question

Tensorflow allocating GPU memory when using tf.device('/cpu:0')

System info: TensorFlow 1.1.0 (GPU build), Windows, Python 3.5; the code runs in IPython consoles.

I am trying to run two different Tensorflow sessions, one on the GPU (that does some batch work) and one on the CPU that I use for quick tests while the other works.

The problem is that even when I spawn the second session inside

with tf.device('/cpu:0')

the session still tries to allocate GPU memory and crashes my other session.

My code:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide all CUDA devices
import time

import tensorflow as tf

with tf.device('/cpu:0'):
    with tf.Session() as sess:
        pass  # Here 6 GB of GPU RAM are allocated.

How do I force Tensorflow to ignore the GPU?


As suggested in a comment by @Nicolas, I took a look at this answer and ran:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
import tensorflow as tf

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

which prints:

[name: "/cpu:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 2215045474989189346
, name: "/gpu:0"
device_type: "GPU"
memory_limit: 6787871540
locality {
  bus_id: 1
}
incarnation: 13663872143510826785
physical_device_desc: "device: 0, name: GeForce GTX 1080, pci bus id: 0000:02:00.0"
]

It seems that even though I explicitly tell the script to ignore all CUDA devices, it still finds and uses them. Could this be a bug in TF 1.1?

Answer

Would you mind trying one of these config options?

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory only as needed
# or: config.gpu_options.per_process_gpu_memory_fraction = 0.0
with tf.Session(config=config) as sess:
    pass

As per the documentation, this should let you manage the GPU memory used by this particular session, so your second session should be able to keep running on the GPU.
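To make the two options concrete: allow_growth starts from a tiny allocation and grows on demand, while per_process_gpu_memory_fraction caps the session at a fixed fraction of the device's memory. A minimal sketch of what that fraction means in bytes, using the memory_limit reported for /gpu:0 above (the 0.3 value is just an illustrative choice, not from the original post):

```python
# What per_process_gpu_memory_fraction buys you, in plain arithmetic.
gpu_memory_limit = 6787871540            # bytes, from the device_lib output above
fraction = 0.3                           # example value; 0.0 pre-allocates almost nothing
cap_bytes = round(gpu_memory_limit * fraction)
print(f"{cap_bytes / 2**30:.2f} GiB")    # roughly 1.90 GiB reserved for this process
```

With fraction = 0.0 the session reserves essentially no memory up front, which is why it is suggested for the CPU-only test session.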

EDIT: according to this answer, you should also try this:

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"   # see TensorFlow issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
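The key detail is ordering: the CUDA runtime reads these environment variables once, when TensorFlow is first imported, so setting them after the import has no effect. A minimal sketch of the safe pattern (the TensorFlow import is commented out here so the snippet runs anywhere):

```python
import os

# Both variables must be set BEFORE TensorFlow (and thus the CUDA runtime)
# is imported for the first time; changing them later is silently ignored.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"   # match nvidia-smi's device numbering
os.environ["CUDA_VISIBLE_DEVICES"] = ""          # empty string hides every CUDA device

# import tensorflow as tf   # import only AFTER the environment is configured

print(repr(os.environ["CUDA_VISIBLE_DEVICES"]))  # '' (set to empty, not merely unset)
```

Note the distinction between an unset variable (TF sees all GPUs) and one set to the empty string (TF sees none); the latter is what forces CPU-only execution.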