Vladislav Ladenkov - 7 months ago 106

Python Question

I am learning TensorFlow, and my goal is to implement a multilayer perceptron for my needs. I checked the MNIST tutorial with a multilayer-perceptron implementation, and everything was clear to me except this:

`_, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})`

I guess `x` and `y` are the placeholders defined as:

`x = tf.placeholder("float", [None, n_input])`

`y = tf.placeholder("float", [None, n_classes])`

They feed whole batches (which are packs of data points and labels)! How does tensorflow interpret this "batch" input? And how does it update the weights: simultaneously after each element in a batch, or after running through the whole batch?

And, if I need to input one number (`input_shape = [1,1]`) and output four numbers (`output_shape = [1,4]`), how should I change the `tf.placeholders`, and in which form should I feed them into the session?

- When I ask how tensorflow interprets it, I want to know how tensorflow splits the batch into single elements. For example, a batch is a 2-D array, right? Along which direction does it split the array? Or does it use matrix operations and not split anything?
- When I ask how I should feed my data, I want to know whether it should be a 2-D array with samples in its rows and features in its columns, or whether it could also be a 2-D list.

When I feed my float numpy array `X_train` to `x`, which is defined as

`x = tf.placeholder("float", [1, n_input])`

I receive an error:

I receive an error:

`ValueError: Cannot feed value of shape (1, 18) for Tensor 'Placeholder_10:0', which has shape '(1, 1)'`

It appears that I have to create my data as a Tensor too?

When I tried [18x1]:

`Cannot feed value of shape (18, 1) for Tensor 'Placeholder_12:0', which has shape '(1, 1)'`
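The mismatch here is between the declared placeholder shape and the shape of the fed array, not a missing Tensor conversion: `feed_dict` accepts plain numpy arrays, but their shape must match every fixed dimension of the placeholder (a `None` dimension matches anything). A minimal pure-Python sketch of that rule — `can_feed` is a hypothetical helper for illustration, not a TensorFlow API:

```python
# Sketch of TF's feed-shape check, assuming the question's case:
# the placeholder was declared [1, 1] but X_train has shape (1, 18).
def can_feed(placeholder_shape, value_shape):
    """None behaves like tf.placeholder's wildcard dimension."""
    if len(placeholder_shape) != len(value_shape):
        return False
    return all(p is None or p == v
               for p, v in zip(placeholder_shape, value_shape))
```

Declaring the placeholder as `x = tf.placeholder("float", [None, 18])` (or `[1, 18]`) and feeding `X_train` with shape `(1, 18)` makes the shapes agree; no conversion to a Tensor is needed.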

Answer

> They feed whole batches (which are packs of data points and labels)!

Yes, this is how neural networks are usually trained, due to some nice mathematical properties: you get the best of both worlds - a better gradient approximation than in SGD on the one hand, and much faster convergence than full-batch GD on the other.

> How does tensorflow interpret this "batch" input?

It "interprets" it according to the operations in your graph. You probably have a **reduce mean** somewhere in your graph, which calculates the **average** over your batch, and that is what causes this "interpretation".

> And how does it update the weights: 1. simultaneously, after each element in a batch? 2. After running through the whole batch?

As in the previous answer - there is nothing "magical" about a batch; it is just another dimension, and each internal operation of the neural net is well defined for a batch of data, so there is still **a single** update in the end. Since you use a reduce mean operation (or maybe reduce sum?), you update according to the mean of the "small" per-sample gradients (or their sum, if there is a reduce sum instead). Again - you can control the aggregation behaviour, but you cannot force per-sample updates unless you introduce a while loop into the graph.
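To make the "single update per batch" point concrete, here is a tiny pure-Python sketch (a toy linear model of my own, not code from any tutorial): averaging the per-sample gradients - which is what a reduce mean over the batch loss effectively does - yields exactly one weight update for the whole batch.

```python
# Toy linear model y = w * x with squared-error loss, batch of 3 samples.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
w = 0.5

# Per-sample gradient of (w*x - y)^2 with respect to w: 2*(w*x - y)*x
per_sample_grads = [2.0 * (w * x - y) * x for x, y in zip(xs, ys)]

# reduce_mean behaviour: average the per-sample gradients,
# then perform ONE update for the entire batch.
batch_grad = sum(per_sample_grads) / len(per_sample_grads)
lr = 0.1
w = w - lr * batch_grad
```

With a reduce sum instead, `batch_grad` would be `sum(per_sample_grads)` - still one update, just scaled differently.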

> And, if I need to input one number (input_shape = [1,1]) and output four numbers (output_shape = [1,4]), how should I change the tf.placeholders, and in which form should I feed them into the session? THANKS!!

Just set the variables `n_input=1` and `n_classes=4`, and push your data as before, as [batch, n_input] and [batch, n_classes] arrays (in your case batch=1, if by "1x1" you mean "one sample of dimension 1" - although your edit suggests that you actually do have a batch, and that by 1x1 you meant a 1-d input).
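For example, with `n_input=1` and `n_classes=4`, the feeds could look like this (hypothetical values, and a batch of 2 just to show the layout - plain nested lists or numpy arrays both work in `feed_dict`):

```python
# Shapes must be [batch, n_input] and [batch, n_classes].
batch_x = [[0.7],
           [1.3]]          # shape (2, 1): 2 samples, 1 feature each
batch_y = [[1, 0, 0, 0],
           [0, 0, 1, 0]]   # shape (2, 4): one one-hot label per sample
```

These would then be fed as before, e.g. `sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})`.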

> EDIT: 1. When I ask how tensorflow interprets it, I want to know how tensorflow splits the batch into single elements. For example, a batch is a 2-D array, right? In which direction does it split the array? Or does it use matrix operations and not split anything? 2. When I ask how I should feed my data, I want to know whether it should be a 2-D array with samples at its rows and features at its columns, or, maybe, whether it could be a 2-D list.

It **does not** split anything. A batch is just a matrix, and each operation is perfectly well defined for matrices as well. **Usually** you put examples in rows, i.e. in the **first dimension**, and this is exactly what [batch, n_inputs] says - that you have `batch` rows, each with `n_inputs` columns. But again - there is nothing special about it, and you could also create a graph that accepts column-wise batches if you really needed to.
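As a sketch of what "no splitting" means: the whole batch goes through a single matrix multiplication, sample by sample only in the mathematical sense, never as separate operations. The plain-Python `matmul` below is mine, purely for illustration; in TensorFlow the same thing happens inside `tf.matmul` on the batch tensor.

```python
# Multiplying a [batch, n_inputs] batch by an [n_inputs, n_units] weight
# matrix transforms every sample at once - no per-sample splitting.
def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

batch = [[1.0, 2.0],
         [3.0, 4.0]]        # 2 samples (rows), 2 features (columns)
weights = [[1.0, 0.0, 1.0],
           [0.0, 1.0, 1.0]] # 2 inputs -> 3 hidden units

hidden = matmul(batch, weights)  # shape (2, 3): one row per sample
```

Each row of `hidden` is the hidden-layer activation (pre-nonlinearity) for the corresponding sample in the batch.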