Lawrence_Liu - 19 days ago
Python Question

I can't figure out how to feed a Placeholder with data from MATLAB

I am trying to implement a simple feed-forward network, but I can't figure out how to feed a Placeholder with data loaded from MATLAB. This example:

import tensorflow as tf
import numpy as np
import scipy.io as scio
import math

## create data
train_input=scio.loadmat('/Users/liutianyuan/Desktop/image_restore/data/input_for_tensor.mat')
train_output=scio.loadmat('/Users/liutianyuan/Desktop/image_restore/data/output_for_tensor.mat')
x_data=np.float32(train_input['input_for_tensor'])
y_data=np.float32(train_output['output_for_tensor'])

print x_data.shape
print y_data.shape
## create tensorflow structure start ###
def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_uniform([in_size, out_size], -4.0*math.sqrt(6.0/(in_size+out_size)), 4.0*math.sqrt(6.0/(in_size+out_size))))
    biases = tf.Variable(tf.zeros([1, out_size]))
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

xs = tf.placeholder(tf.float32, [None, 256])
ys = tf.placeholder(tf.float32, [None, 1024])
y= add_layer(xs, 256, 1024, activation_function=None)

loss = tf.reduce_mean(tf.square(y - ys))
optimizer = tf.train.GradientDescentOptimizer(0.1)
train = optimizer.minimize(loss)

init = tf.initialize_all_variables()
### create tensorflow structure end ###

sess = tf.Session()
sess.run(init)

for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(loss, feed_dict={xs: x_data, ys: y_data}))


Gives me the following error:

/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/bin/python2.7 /Users/liutianyuan/PycharmProjects/untitled1/easycode.py

(1, 256)

(1, 1024)

Traceback (most recent call last):
File "/Users/liutianyuan/PycharmProjects/untitled1/easycode.py", line 46, in <module>
sess.run(train)

File "/Library/Python/2.7/site-packages/tensorflow/python/client/session.py", line 340, in run
run_metadata_ptr)

File "/Library/Python/2.7/site-packages/tensorflow/python/client/session.py", line 564, in _run
feed_dict_string, options, run_metadata)

File "/Library/Python/2.7/site-packages/tensorflow/python/client/session.py", line 637, in _do_run
target_list, options, run_metadata)

File "/Library/Python/2.7/site-packages/tensorflow/python/client/session.py", line 659, in _do_call
e.code)

tensorflow.python.framework.errors.InvalidArgumentError: **You must feed a value for placeholder tensor 'Placeholder' with dtype float**
[[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Caused by op u'Placeholder', defined at:

File "/Users/liutianyuan/PycharmProjects/untitled1/easycode.py", line 30, in <module>
xs = tf.placeholder(tf.float32, [None, 256])

File "/Library/Python/2.7/site-packages/tensorflow/python/ops/array_ops.py", line 762, in placeholder
name=name)

File "/Library/Python/2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 976, in _placeholder
name=name)

File "/Library/Python/2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 655, in apply_op
op_def=op_def)

File "/Library/Python/2.7/site-packages/tensorflow/python/framework/ops.py", line 2154, in create_op
original_op=self._default_original_op, op_def=op_def)

File "/Library/Python/2.7/site-packages/tensorflow/python/framework/ops.py", line 1154, in __init__
self._traceback = _extract_stack()


I have checked both the type and the shape of x_data and y_data, and they seem to be correct, so I have no idea where this goes wrong.

Answer

Your train operation depends on the placeholders xs and ys, so you must feed values for those placeholders when you call sess.run(train).
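Since your x_data and y_data have shapes (1, 256) and (1, 1024) (a single example), the simplest fix for this particular case is to feed the whole arrays on every step, exactly as your loss print already does. A minimal sketch, reusing the variables defined in your script:

for step in range(201):
    # Feed the full (tiny) data set on each training step.
    sess.run(train, feed_dict={xs: x_data, ys: y_data})
    if step % 20 == 0:
        print(step, sess.run(loss, feed_dict={xs: x_data, ys: y_data}))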

A common way to do this is to divide your input data into mini-batches:

BATCH_SIZE = ...
for step in range(201):
    # N.B. You'll need extra code to handle the cases where start_index and/or end_index
    # wrap around the end of x_data and y_data.
    start_index = step * BATCH_SIZE
    end_index = (step + 1) * BATCH_SIZE
    sess.run(train, {xs: x_data[start_index:end_index, :],
                     ys: y_data[start_index:end_index, :]})
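
One simple way to handle the wrap-around mentioned in the comment is to take the batch indices modulo the number of training examples. This is only a sketch, and it assumes x_data and y_data have the same number of rows; NUM_EXAMPLES is a name introduced here for illustration:

NUM_EXAMPLES = x_data.shape[0]
for step in range(201):
    # Wrap the batch indices around the end of the data set.
    indices = [(step * BATCH_SIZE + i) % NUM_EXAMPLES for i in range(BATCH_SIZE)]
    sess.run(train, {xs: x_data[indices, :], ys: y_data[indices, :]})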

The code in the example is just to get you started. For a more flexible way of generating data for feeding, see the MNIST dataset example in the tf.learn codebase.
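
For example, a reusable batch producer can be written in plain NumPy. This is only a sketch of the general idea, not the code from that example, and batch_generator is a hypothetical helper name:

def batch_generator(x, y, batch_size):
    # Shuffle the examples once per epoch and yield consecutive mini-batches.
    while True:
        perm = np.random.permutation(x.shape[0])
        for i in range(0, x.shape[0], batch_size):
            idx = perm[i:i + batch_size]
            yield x[idx, :], y[idx, :]

batches = batch_generator(x_data, y_data, BATCH_SIZE)
for step in range(201):
    batch_xs, batch_ys = next(batches)
    sess.run(train, {xs: batch_xs, ys: batch_ys})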