I am trying to work with LSTMs in TensorFlow. I found a tutorial online where a set of sequences is taken in and the objective function is composed only of the last output of the LSTM and the known values. However, I would like the objective function to use information from every output. Specifically, I am trying to have the LSTM learn the full set of sequences (i.e. learn all the letters of the words in a sentence):
cell = rnn_cell.BasicLSTMCell(num_units)
inputs = [tf.placeholder(tf.float32,shape=[batch_size,input_size]) for _ in range(seq_len)]
result = [tf.placeholder(tf.float32, shape=[batch_size,input_size]) for _ in range(seq_len)]
W_o = tf.Variable(tf.random_normal([num_units,input_size], stddev=0.01))
b_o = tf.Variable(tf.random_normal([input_size], stddev=0.01))
outputs, states = rnn.rnn(cell, inputs, dtype=tf.float32)
losses = []
for i in xrange(len(outputs)):
    # project each timestep's output into the input space
    final_transformed_val = tf.matmul(outputs[i], W_o) + b_o
    losses.append(tf.nn.softmax(final_transformed_val))
cost = tf.reduce_mean(losses)
Running this raises:
TypeError: List of Tensors when single Tensor expected
In your code, losses is a Python list, but TensorFlow's reduce_mean() expects a single tensor, not a Python list.
losses = tf.reshape(tf.concat(1, losses), [-1, size])
where size is the number of values you're taking a softmax over, should do what you want. See the documentation for concat().
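The shape manipulation that concat-then-reshape performs can be sketched in plain NumPy, since the underlying array logic is the same (seq_len, batch_size, and size are hypothetical dimensions chosen just for illustration):

```python
import numpy as np

seq_len, batch_size, size = 5, 4, 3  # hypothetical dimensions

# one [batch_size, size] array per timestep, mirroring the losses list
losses = [np.random.rand(batch_size, size) for _ in range(seq_len)]

# concatenating along axis 1 packs the list into one
# [batch_size, seq_len * size] array ...
packed = np.concatenate(losses, axis=1)

# ... and reshaping to [-1, size] stacks every timestep's rows into a
# single [seq_len * batch_size, size] array that a mean can reduce
flat = packed.reshape(-1, size)
print(packed.shape)  # (4, 15)
print(flat.shape)    # (20, 3)
```

The point is that after this transformation you hold a single tensor, which reduce_mean() will accept.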
But one thing I notice in your code that seems a bit odd: you have a list of placeholders for your inputs, whereas the code in the TensorFlow tutorial uses an order-3 tensor for its inputs. Your input is a list of order-2 tensors. I recommend looking over the code in the tutorial, because it does almost exactly what you're asking about.
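The relationship between the two input layouts can be sketched with NumPy (all dimensions are hypothetical; this only illustrates the shapes, not the actual placeholder API):

```python
import numpy as np

batch_size, seq_len, input_size = 4, 5, 8  # hypothetical dimensions

# the tutorial's style: a whole batch of sequences as one order-3 array
batch = np.random.rand(batch_size, seq_len, input_size)

# your style: one order-2 array per timestep, i.e. a list of seq_len
# [batch_size, input_size] slices taken along the time axis
steps = [batch[:, t, :] for t in range(seq_len)]
print(len(steps), steps[0].shape)  # 5 (4, 8)
```

Either layout carries the same data; the single order-3 placeholder just lets you feed an entire sequence in one shot instead of managing seq_len separate placeholders.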
One of the main files in that tutorial is here. In particular, line 139 is where they create their cost. Regarding your input, lines 90 and 91 are where the input and target placeholders are set up. The main takeaway from those two lines is that an entire sequence is passed in via a single placeholder rather than as a list of placeholders.
Line 120 of the ptb_word_lm.py file shows where they do their concatenation.
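The cost they build there is the usual averaged softmax cross-entropy over every (timestep, example) pair. A rough NumPy sketch of that math (all dimensions hypothetical; this illustrates the computation, not the tutorial's actual code):

```python
import numpy as np

seq_len, batch_size, vocab = 5, 4, 10  # hypothetical dimensions

# flattened logits for every timestep of every example, plus integer targets
logits = np.random.rand(seq_len * batch_size, vocab)
targets = np.random.randint(vocab, size=seq_len * batch_size)

# numerically stable softmax over the vocabulary axis
exp = np.exp(logits - logits.max(axis=1, keepdims=True))
probs = exp / exp.sum(axis=1, keepdims=True)

# negative log-likelihood of each target, one value per (timestep, example)
nll = -np.log(probs[np.arange(len(targets)), targets])

# summed over all positions and divided by the batch size, so every
# timestep contributes to the objective -- which is what you were after
cost = nll.sum() / batch_size
```

Because the loss sums over all timesteps before averaging, gradients flow from every output of the LSTM, not just the last one.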