Mr G0z - 22 days ago
Python Question

Multiprocesses seem to use a single logical core

I'm trying to execute a shared function that different processes can call. The only way I found is to create a class containing this function (see this previous question). But now the problem I'm facing is that the processes seem to wait for something, and so use only the power of a single logical core (see the image below).

I've managed to reproduce the same problem with the following simplified code:

#!/usr/bin/python

from multiprocessing import Pool
from multiprocessing.managers import BaseManager
from itertools import repeat

class FunctionManager(BaseManager):
    pass

class MaClass:
    def maFunction(self, val):
        print(str(val))
        for i in range(0, 10000):
            for j in range(0, 10000):
                for k in range(0, 10000):
                    pass

FunctionManager.register('MaClass', MaClass)

myManager = FunctionManager()
myManager.start()
monObjet = myManager.MaClass()

p = Pool()
p.imap_unordered(monObjet.maFunction, range(10))
p.close()
p.join()

myManager.shutdown()


[htop screenshot: only one logical core busy]

Any ideas?

Answer

monObjet is not a regular class instance; it's a proxy (<class 'multiprocessing.managers.AutoProxy[MaClass]'>) to an instance that lives in the manager's server process. When a child process calls a method through that proxy, the request is sent back to the manager process, where it is executed. The manager serves those requests with background threads, which are still subject to the GIL, so no matter how many child processes call the proxy, only one of those threads can execute Python bytecode at a time.

It's uncommon to run a proxy method inside a pool. I appreciate that you've just cooked up an example and may really need to run the method in the parent, but here is a straw-man example that keeps the work in the child processes:

#!/usr/bin/python

from multiprocessing import Pool

class MaClass:
    def maFunction(self, val):
        print(str(val))
        for i in range(0, 10000):
            for j in range(0, 10000):
                for k in range(0, 10000):
                    pass

def worker(val):
    # Each task builds its own instance, so the work runs in the child process
    c = MaClass()
    return c.maFunction(val)

if __name__ == '__main__':
    p = Pool()
    p.imap_unordered(worker, range(10))
    p.close()
    p.join()