
Python Question

I have a TensorFlow neural network which I run on the test data like this:

`result = sess.run(y_conv, feed_dict={x: test_inputs})`

However, this ran into memory issues, so I tried to break the computation up into batches like this:

```
result = []
for i in range(0, len(test_inputs), 100):
    end = min(i + 100 - 1, len(test_inputs) - 1)
    r = sess.run(y_conv, feed_dict={x: test_inputs.loc[i:end, :]})
    result.append(r)
```

However, now I get this error:

```
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_2' with dtype float
     [[Node: Placeholder_2 = Placeholder[dtype=DT_FLOAT, shape=<unknown>, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
```

So what is the cause of this problem?

I would have thought that the network would work just as well on a smaller batch of examples.

In case it bears any relevance, the neural network is created like this:

```
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])

x_image = tf.reshape(x, [-1, 28, 28, 1])

h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])

h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])

h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])

y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
```

Answer

You have fed `x` as your input but not `keep_prob`. Your network looks similar to Deep MNIST for Experts. An example snippet from that tutorial:

```
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
```

For inference, you should feed `keep_prob: 1.0` instead, so that dropout is disabled.
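
For example, applying that fix to your batched loop (a minimal sketch, assuming `test_inputs` is a pandas DataFrame, as your `.loc` indexing suggests):

```
import numpy as np

result = []
for i in range(0, len(test_inputs), 100):
    # .loc slicing is inclusive on both ends, hence the -1 on the end index.
    end = min(i + 100 - 1, len(test_inputs) - 1)
    # Feed every placeholder the graph depends on; keep_prob = 1.0
    # disables dropout for inference.
    r = sess.run(y_conv,
                 feed_dict={x: test_inputs.loc[i:end, :], keep_prob: 1.0})
    result.append(r)

# Stitch the per-batch predictions back into a single array.
result = np.concatenate(result, axis=0)
```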