I have been given some data of this format and the following details:
person1, day1, feature1, feature2, ..., featureN, label
person1, day2, feature1, feature2, ..., featureN, label
person1, dayN, feature1, feature2, ..., featureN, label
person2, day1, feature1, feature2, ..., featureN, label
person2, day2, feature1, feature2, ..., featureN, label
person2, dayN, feature1, feature2, ..., featureN, label
dynamic: because not all features are present each day
You've got the wrong concept of "dynamic" here. A dynamic RNN in TensorFlow means the graph is unrolled dynamically during execution, but the inputs always have the same size per time step (using 0 for a missing feature should work fine).
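For example, here's a purely illustrative sketch (the dict and feature names below are made up, not from your data) of filling absent features with 0 so every day's vector keeps the same fixed length:

day = {"feature1": 3.2, "feature3": 0.7}                  # feature2 not measured that day
feature_names = ["feature1", "feature2", "feature3"]
vector = [day.get(name, 0.0) for name in feature_names]   # -> [3.2, 0.0, 0.7]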
Anyway, what you've got here are sequences of varying length (day1 ... day?) of feature vectors (feature1 ... featureN). First, you need an LSTM cell:
cell = tf.contrib.rnn.LSTMCell(size)
so you can then create a dynamically unrolled RNN graph using tf.nn.dynamic_rnn. From the docs:
inputs: The RNN inputs.
If time_major == False (default), this must be a Tensor of shape: [batch_size, max_time, ...], or a nested tuple of such elements.
where max_time refers to the input sequence length. Because we're using dynamic_rnn, the sequence length doesn't need to be fixed at graph-construction time, so your input placeholder can be:
x = tf.placeholder(tf.float32, shape=(batch_size, None, N))
which is then fed into the RNN like this:
outputs, state = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)
Meaning your input data should have the shape (batch_size, seq_length, N). If the examples in one batch have varying lengths, you should pad them with 0-vectors up to the max length and pass the appropriate sequence_length parameter to dynamic_rnn, as sketched below.
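Putting the pieces together, here's a minimal sketch (TensorFlow 1.x plus numpy assumed; N, size, batch_size and the random example sequences are just placeholders) of padding and passing sequence_length:

import numpy as np
import tensorflow as tf

N = 10           # features per day
size = 64        # LSTM state size
batch_size = 2

# Two example sequences of different lengths: (days, N) arrays.
seqs = [np.random.rand(5, N), np.random.rand(3, N)]
seq_lengths = np.array([len(s) for s in seqs])   # [5, 3]
max_len = seq_lengths.max()

# Pad each sequence with 0-vectors up to the longest one in the batch.
padded = np.zeros((batch_size, max_len, N), dtype=np.float32)
for i, s in enumerate(seqs):
    padded[i, :len(s), :] = s

x = tf.placeholder(tf.float32, shape=(batch_size, None, N))
lengths = tf.placeholder(tf.int32, shape=(batch_size,))

cell = tf.contrib.rnn.LSTMCell(size)
# sequence_length tells dynamic_rnn where each padded example really ends,
# so the 0-padding doesn't affect the final state.
outputs, state = tf.nn.dynamic_rnn(cell, x, sequence_length=lengths, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(outputs, feed_dict={x: padded, lengths: seq_lengths})

With sequence_length set, outputs beyond an example's real length are zeroed out and state holds the cell state at each example's last real time step.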
Obviously I've skipped a lot of details, so to fully understand RNNs you should probably read one of the many excellent RNN tutorials, like this one for example.