Suppose you find yourself in the unfortunate position of having a dependency on a poorly behaved library. Your code needs to call FlakyClient.call(), but sometimes that function ends up hanging for an unacceptable amount of time.
As shown below, one way around this is to wrap the call in its own Process and use the timeout parameter of the join method to cap how long you're willing to wait on the FlakyClient. This provides a good safeguard, but it also prevents the main body of code from reacting to the result of calling FlakyClient.call(). The only way I know of to address this second problem (getting the result back into the main body of code) is to use some cumbersome IPC technique.
What is a clean and pythonic way of dealing with these two problems? I want to protect myself if the library call hangs, and be able to use the result if the call completes.
from multiprocessing import Process
from flaky.library import FlakyClient

TIMEOUT_IN_SECS = 10

def make_flaky_call():
    result = FlakyClient.call()

proc = Process(target=make_flaky_call)
proc.start()
proc.join(TIMEOUT_IN_SECS)
if proc.is_alive():
    proc.terminate()
    raise Exception("Timeout during call to FlakyClient.call().")
If you are using Process, I would suggest using a Queue to transfer the result and, indirectly, to manage the function timeout as well.
from multiprocessing import Process, Queue
from queue import Empty
from flaky.library import FlakyClient

TIMEOUT_IN_SECS = 10

def make_flaky_call(queue):
    result = FlakyClient.call()
    queue.put(result)
    queue.put('END')

q = Queue()
proc = Process(target=make_flaky_call, args=(q,))
proc.start()

content = None
result = None
while content != 'END':
    try:
        content = q.get(timeout=TIMEOUT_IN_SECS)
        if content != 'END':
            result = content
    except Empty:
        proc.terminate()
        raise Exception("Timeout during call to FlakyClient.call().")