I'm running tensorflow 0.10.0rc0 on OSX 10.9.5 Mavericks.
There are approximately 25k training examples, 250 features (x), 15 classes (y_), and the prediction (y) comes from a neural network with a single hidden layer.
The following snippet of a simple training loop seems to have a massive memory leak (on the order of tens of GBs over ~200 iterations -- it brings down my MBP :( ):
import tensorflow as tf

# Initialize placeholders and variables etc...

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y, y_))
train_step = tf.train.GradientDescentOptimizer(lrate).minimize(cost)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

for i in range(niter):
    # Train
    _, c = sess.run([train_step, cost])
    # Evaluate training accuracy
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    sess.run(correct_prediction)
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    # EDIT: Calculate test error
    test_prediction = tf.equal(tf.argmax(ytest, 1), tf.argmax(ytest_, 1))
    print sess.run(accuracy)
There's no need to call sess.run(correct_prediction) -- it's a tensorflow graph variable on which the accuracy variable is dependent. This implies that it will be evaluated during the call to sess.run(accuracy) in any case.
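To see that dependence concretely, here's a minimal, self-contained sketch; the constant boolean tensor is a made-up stand-in for correct_prediction, not your actual data:

import tensorflow as tf

correct = tf.constant([True, False, True])           # stands in for correct_prediction
acc = tf.reduce_mean(tf.cast(correct, tf.float32))   # stands in for accuracy, depends on correct
sess = tf.Session()
# Running acc evaluates correct internally -- no separate sess.run(correct) is needed.
print sess.run(acc)   # 0.6666667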
Also, you're creating new correct_prediction and accuracy variables on each iteration. That is where the memory is going: each call to tf.equal, tf.argmax, tf.reduce_mean and tf.cast adds new nodes to the graph, so the graph grows on every pass through the loop. It is also unnecessary -- they can be moved outside the loop and simply evaluated each time with calls to sess.run. So your inner loop will be something like
for i in range(niter):
    # Train
    _, c = sess.run([train_step, cost])
    print sess.run(accuracy)
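Putting the whole fix together, here's a sketch of the restructured script. It keeps the names from your snippet (y, y_, ytest, ytest_, lrate, niter) and assumes data is fed the same way as before; test_accuracy and the sess.graph.finalize() call are my additions -- finalize() makes the graph read-only, so anything that accidentally creates ops inside the loop raises an error instead of silently growing the graph.

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y, y_))
train_step = tf.train.GradientDescentOptimizer(lrate).minimize(cost)

# Build the evaluation ops once, outside the loop
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
test_prediction = tf.equal(tf.argmax(ytest, 1), tf.argmax(ytest_, 1))
test_accuracy = tf.reduce_mean(tf.cast(test_prediction, tf.float32))   # hypothetical name

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
sess.graph.finalize()   # optional: any later attempt to add ops now raises an error

for i in range(niter):
    # Train, then evaluate -- only sess.run calls inside the loop, no graph construction
    _, c = sess.run([train_step, cost])
    train_acc, test_acc = sess.run([accuracy, test_accuracy])
    print train_acc, test_acc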