I'm using Keras with TensorFlow as the backend.
I am trying to save a model in my main process and then load/run it (i.e. call `model = load_model()`) in another process.
From my experience, the problem arises when Keras has already been loaded into your main environment and you then spawn a new process. For some applications (e.g. training a mixture of Keras models) it's simply better to have all of this in one process, so what I advise is the following (a little cumbersome, but working for me) approach:
DO NOT LOAD KERAS INTO YOUR MAIN ENVIRONMENT. If you want to load Keras / Theano / TensorFlow, do it only inside the function environment. E.g. don't do this:

```python
import keras

def training_function(...):
    ...
```
but do the following:
```python
def training_function(...):
    import keras
    ...
```
Run the work connected with each model in a separate process: I usually create workers which do the job (e.g. training, tuning, scoring) and run them in separate processes. What is nice about this is that all the memory used by such a process is completely freed when the process finishes. This helps with the many memory problems you usually come across when using multiprocessing, or even when running multiple models in one process. So it looks e.g. like this:
```python
import multiprocessing

def _training_worker(train_params):
    import keras  # loaded only in the worker process
    model = obtain_model(train_params)
    model.fit(train_params)
    send_message_to_main_process(...)

def train_new_model(train_params):
    training_process = multiprocessing.Process(target=_training_worker,
                                               args=(train_params,))
    training_process.start()
    get_message_from_training_process(...)
    training_process.join()
```
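To make the message-passing concrete: `send_message_to_main_process` and `get_message_from_training_process` above are placeholders, and one standard way to implement them is a `multiprocessing.Queue`. Here is a minimal runnable sketch of that pattern; the worker's fake "score" computation stands in for the Keras training, which you would do inside the worker exactly as shown earlier.

```python
import multiprocessing

def _training_worker(queue, train_params):
    # In a real worker you would `import keras` here and train a model;
    # this stand-in just derives a fake "score" from the parameters.
    score = sum(train_params.values())
    queue.put({"status": "done", "score": score})

def train_new_model(train_params):
    # The queue replaces the hypothetical send/get message helpers.
    queue = multiprocessing.Queue()
    process = multiprocessing.Process(target=_training_worker,
                                      args=(queue, train_params))
    process.start()
    result = queue.get()  # blocks until the worker sends its message
    process.join()        # all memory used by the worker is now freed
    return result

if __name__ == "__main__":
    print(train_new_model({"lr": 0.01, "epochs": 3}))
```

Once `join()` returns, the worker process (and every byte Keras allocated inside it) is gone, which is exactly the memory benefit described above.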
A different approach is simply to prepare separate scripts for different model actions. But this may cause memory errors, especially when your models are memory-consuming. NOTE that for this reason it's better to make your execution strictly sequential.
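The separate-scripts approach can be driven from a small launcher that runs each script in its own Python process, one after another, so only one model occupies memory at a time. A minimal sketch (the script names in the comment are illustrative, not from the original):

```python
import subprocess
import sys

def run_sequentially(commands):
    # Run each command in its own process, strictly one after another;
    # check=True raises CalledProcessError and stops on the first failure,
    # so a crashed step never overlaps with the next one.
    for cmd in commands:
        subprocess.run(cmd, check=True)

# Hypothetical usage with per-action scripts:
# run_sequentially([[sys.executable, "train_model.py"],
#                   [sys.executable, "score_model.py"]])
```

Because each script exits before the next starts, its memory is returned to the OS, which is what keeps this variant from hitting the memory errors mentioned above.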