I have been working on this for a while now and can't seem to crack it. Other questions use code samples like these to save and restore a model via the metagraph and checkpoint files, but when I do something similar it fails:
import tensorflow as tf

# Saving:
w1 = tf.Variable(tf.random_normal(shape=[2]), name='w1')  # [2] is a placeholder shape
w2 = tf.Variable(tf.random_normal(shape=[5]), name='w2')
saver = tf.train.Saver([w1, w2])

# Restoring:
with tf.Session() as sess:
    saver = tf.train.import_meta_graph('my_test_model-1000.meta', clear_devices=True)
Your graph is saved correctly, but restoring it does not restore the Python variables that hold references to the graph's nodes. w1 is a Python variable that you never declared in the 'restoring' part of your code. To get a handle back on your weights, you can look them up by their names in the TF graph:
w1 = tf.get_default_graph().get_tensor_by_name('w1:0'). The catch is that you'll have to pay close attention to your name scopes, and make sure you don't have multiple variables with the same name (in which case TF appends '_1' to one of them, so you might grab the wrong one). If you go that way, TensorBoard can be a great help in finding the exact name of each variable.
Alternatively, you can use collections: add the interesting nodes to a collection before saving, and get them back from it after restoring. When building the graph, before saving it, do for instance:
tf.add_to_collection('weights', w1) and
tf.add_to_collection('weights', w2), and in your restoring code:
[w1, w2] = tf.get_collection('weights'). Then you'll be able to use w1 and w2 normally.
I think the latter approach, though it takes a bit more setup, is probably more robust to future changes in your architecture. All of this may look verbose, but remember that you usually don't need to get handles back on all your variables, only a few: the inputs, the outputs, and the train step are typically enough.