tensorflow.python.framework.errors.InternalError: Message length was negative
[[Node: random_uniform_1_S1 = _Recv[client_terminated=false,
Yes, currently there is a 2GB limit on an individual tensor when sending it between processes. This limit is imposed by the protocol buffer representation (more precisely, by the auto-generated C++ wrappers produced by the
protoc compiler) that is used in TensorFlow's communication layer.
We are investigating ways to lift this restriction. In the meantime, you can work around it by manually adding
tf.split() and tf.concat() operations to partition the tensor into sub-2GB pieces for transfer and reassemble it on the other side. If you have very large
tf.Variable objects, you can use variable partitioners to perform this transformation automatically. Note also that your program holds multiple 8GB tensors in memory at once, so peak memory utilization will be at least 16GB.
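As a rough illustration of why partitioning works, here is a plain-Python sketch (no TensorFlow required) that computes how many slices an 8GB tensor would need so that each serialized piece stays under the 2GB protocol buffer limit. The constants and the small safety margin are assumptions for illustration, not values read from TensorFlow itself:

```python
import math

# Assumed sizes for illustration only.
PROTOBUF_LIMIT_BYTES = 2 * 1024**3   # 2 GiB per serialized message
tensor_bytes = 8 * 1024**3           # e.g. one 8 GiB tensor

# Minimum number of partitions so each piece fits under the limit,
# leaving a small (assumed) margin for protobuf framing overhead.
num_parts = math.ceil(tensor_bytes / (PROTOBUF_LIMIT_BYTES - 1024))
part_bytes = math.ceil(tensor_bytes / num_parts)

print(num_parts)                            # slices tf.split() would produce
print(part_bytes < PROTOBUF_LIMIT_BYTES)    # each slice now fits in a message
```

Each of the resulting slices can then be sent as its own message and rejoined with tf.concat() on the receiving side.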