ErlangBestLanguage - 19 days ago
Python Question

How to batch inputs together for tensorflow?

I'm trying to batch together the inputs for a neural network I'm working on so I can feed them into TensorFlow like in the TensorFlow MNIST tutorial. However, I can't find any way of doing this, and it isn't covered in the tutorial.

input = tf.placeholder(tf.float32, [10, 10])
...
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
inputs = #A list containing 50 of the inputs
sess.run(accuracy, feed_dict={input: inputs})


This will throw the following error:

ValueError: Cannot feed value of shape (50, 10, 10) for Tensor 'Placeholder:0', which has shape '(10, 10)'


I understand why I'm getting the above error; I just don't know how to get TensorFlow to treat my inputs as a batch of inputs rather than thinking I'm trying to feed them all in as one shape.

Thanks very much for any help!

Answer

You need to modify the shape of your placeholder. Let's break down the error message:

ValueError: Cannot feed value of shape (50, 10, 10) for 
Tensor 'Placeholder:0', which has shape '(10, 10)'

Your inputs variable is the one that has shape (50, 10, 10), which means 50 elements of shape (10, 10). The tensor Placeholder:0 is your input placeholder: if you print input.name you will get the value Placeholder:0. "Cannot feed value" means that TensorFlow cannot assign inputs to input, because their shapes don't match.
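To see where the names and shapes in the message come from, you can inspect the placeholder and the data you are feeding. The snippet below is just an illustration, assuming the same TF 1.x-style API as in the question and random data standing in for your 50 inputs:

import numpy as np
import tensorflow as tf

input = tf.placeholder(tf.float32, [10, 10])
print(input.name)         # Placeholder:0 -- the tensor named in the error
print(input.get_shape())  # (10, 10)      -- the shape TensorFlow expects

inputs = np.random.rand(50, 10, 10)  # stand-in for your list of 50 inputs
print(inputs.shape)       # (50, 10, 10) -- the shape actually being fed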

A quick first solution is to fix the shape of the input placeholder to

input = tf.placeholder(tf.float32, [50, 10, 10])

but then each time you want to change the batch size you will need to update the placeholder. A better way is to leave the batch dimension undefined by using None:

input = tf.placeholder(tf.float32, [None, 10, 10])

This will now work with any batch size, from 1 up to the hardware limits of your machine.
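Here is a minimal end-to-end sketch of that idea, assuming the TF 1.x API used above and a tf.reduce_mean op standing in for your real accuracy computation:

import numpy as np
import tensorflow as tf

input = tf.placeholder(tf.float32, [None, 10, 10])  # None = any batch size
output = tf.reduce_mean(input)  # stand-in for the rest of your graph

with tf.Session() as sess:
    batch_50 = np.random.rand(50, 10, 10).astype(np.float32)
    print(sess.run(output, feed_dict={input: batch_50}))  # batch of 50

    batch_1 = np.random.rand(1, 10, 10).astype(np.float32)
    print(sess.run(output, feed_dict={input: batch_1}))   # batch of 1, same graph

The same placeholder accepts both batches, so you can, for example, train with one batch size and evaluate with another without rebuilding the graph.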