
Turn is_training of batch_norm (TensorFlow) to False

I want to turn the is_training state of the model to False after training. How can I do that?

net = tf.layers.conv2d(inputs = features, filters = 64, kernel_size = [3, 3], strides = (2, 2), padding = 'same')
net = tf.contrib.layers.batch_norm(net, is_training = True)
net = tf.nn.relu(net)
net = tf.reshape(net, [-1, 64 * 7 * 7]) # flatten the 7x7x64 feature maps
net = tf.layers.dense(inputs = net, units = class_num, kernel_initializer = tf.contrib.layers.xavier_initializer(), name = 'regression_output')

# ...
# after training

saver = tf.train.Saver()
saver.save(sess, 'reshape_final.ckpt')
tf.train.write_graph(sess.graph.as_graph_def(), "", 'graph_final.pb')


How can I turn the is_training of the batch norm to False after I save it?

I tried search keywords like "tensorflow batchnorm turn off training" and "tensorflow change state", but could not find out how to do it.

Edit 1:

Thanks to @Maxim's solution, it works, but when I try to freeze the graph, another problem occurs.

Commands:

python3 ~/.keras2/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py --input_graph=graph_final.pb --input_checkpoint=reshape_final.ckpt --output_graph=frozen_graph.pb --output_node_names=regression_output/BiasAdd

python3 ~/.keras2/lib/python3.5/site-packages/tensorflow/python/tools/optimize_for_inference.py --input frozen_graph.pb --output opt_graph.pb --frozen_graph True --input_names input --output_names regression_output/BiasAdd

~/Qt/3rdLibs/tensorflow/bazel-bin/tensorflow/tools/graph_transforms/transform_graph --in_graph=opt_graph.pb --out_graph=fused_graph.pb --inputs=input --outputs=regression_output/BiasAdd --transforms="fold_constants sort_by_execution_order fold_batch_norms fold_old_batch_norms"


After I execute transform_graph, an error message pops up:

"You must feed a value for placeholder tensor 'training' with dtype bool"

I save the graph with the following code:

sess.run(loss, feed_dict={features : train_imgs, x : real_delta, training : False})
saver = tf.train.Saver()
saver.save(sess, 'reshape_final.ckpt')
tf.train.write_graph(sess.graph.as_graph_def(), "", 'graph_final.pb')


Edit 2:

Changing the placeholder to a Variable works, but the transformed graph cannot be loaded by OpenCV's dnn module.

I changed

training = tf.placeholder(tf.bool, name='training')


to

training = tf.Variable(False, name='training', trainable=False)
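For reference, here is a minimal TF 1.x sketch of how the non-trainable Variable from Edit 2 might be wired into the batch norm layer and switched with assign before saving. The input shape and layer parameters are assumptions for illustration, not the question's exact model.

import tensorflow as tf

# Sketch only: a non-trainable Variable as the batch-norm mode flag (Edit 2).
# The input placeholder shape is an assumption, not the original model.
training = tf.Variable(False, name='training', trainable=False)
features = tf.placeholder(tf.float32, [None, 14, 14, 1], name='input')

net = tf.layers.conv2d(inputs = features, filters = 64, kernel_size = [3, 3], strides = (2, 2), padding = 'same')
net = tf.contrib.layers.batch_norm(net, is_training = training)
net = tf.nn.relu(net)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(training.assign(True))   # training mode for the training loop
    # ... training ...
    sess.run(training.assign(False))  # back to inference mode before saving

The idea is that freeze_graph folds the variable's saved value into a constant, so the graph transforms no longer need a feed for a 'training' placeholder.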

Answer

You should define a placeholder for the mode (it can be a boolean or a string) and pass different values to session.run during training and testing. Sample code:

x = tf.placeholder('float32', (None, 784), name='x')
y = tf.placeholder('float32', (None, 10), name='y')
phase = tf.placeholder(tf.bool, name='phase')
...

# training (phase = 1)
sess.run([loss, accuracy], 
         feed_dict={'x:0': mnist.train.images,
                    'y:0': mnist.train.labels,
                    'phase:0': 1})
...

# testing (phase = 0)
sess.run([loss, accuracy],
         feed_dict={'x:0': mnist.test.images,
                    'y:0': mnist.test.labels,
                    'phase:0': 0})
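Applied to the batch norm layer from the question, the phase placeholder is what gets passed to is_training, so the same graph switches between batch statistics and moving averages at run time. A minimal sketch under that assumption (shapes and layer parameters are illustrative, not necessarily the linked post's exact code):

import tensorflow as tf

# Sketch only: the phase placeholder drives batch norm's is_training.
# Shapes and layer parameters are assumptions for illustration.
features = tf.placeholder(tf.float32, [None, 14, 14, 1], name='input')
phase = tf.placeholder(tf.bool, name='phase')

net = tf.layers.conv2d(inputs = features, filters = 64, kernel_size = [3, 3], strides = (2, 2), padding = 'same')
net = tf.contrib.layers.batch_norm(net, is_training = phase)
net = tf.nn.relu(net)

# Training:  feed {'phase:0': True}  -> batch norm uses batch statistics
# Inference: feed {'phase:0': False} -> batch norm uses the moving averages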

You can find the complete code in this post.
