Hey, I have some code in Python which is basically a World object containing Player objects. At one point the players all get the state of the world and need to return an action. The calculations the players do are independent and use only the instance variables of the respective player instance.
```python
# do stuff, calculate state with the actions array of last iteration
for i, player in enumerate(players):
    actions[i] = player.get_action(state)
```
The most straightforward way is to use `multiprocessing.Pool.map` (which works just like the built-in `map`, but distributes the work over a pool of worker processes):

```python
import multiprocessing

pool = multiprocessing.Pool()

def do_stuff(player):
    ...  # whatever you do here is executed in another process

while True:
    pool.map(do_stuff, players)
```
Note however that this uses multiple processes. Python does have threads, but because of the GIL only one thread executes Python bytecode at a time, so threads only speed things up when the work is I/O-bound or releases the GIL (as NumPy and similar C extensions do); for CPU-bound pure-Python work like yours, processes are the way to get real parallelism.
Usually parallelization is done with threads, which can access the same data inside your program (because they run in the same process). To share data between processes, one needs to use IPC (inter-process communication) mechanisms such as pipes, sockets, or files, which costs more resources. Also, spawning processes is much slower than spawning threads.
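If your per-player work ever turns out to be I/O-bound or to release the GIL, threads can be swapped in with almost no code change: `multiprocessing.pool.ThreadPool` exposes the same `map` interface but uses threads. A minimal sketch, where `do_io` is a hypothetical stand-in for the real work:

```python
from multiprocessing.pool import ThreadPool

def do_io(n):
    # hypothetical stand-in for I/O-bound work (e.g. a network request);
    # threads share the process's memory, so nothing needs to be pickled
    return n * n

with ThreadPool(4) as pool:
    results = pool.map(do_io, range(5))
print(results)  # -> [0, 1, 4, 9, 16]
```

Since threads share memory, there are no pickling restrictions here, and spawning them is much cheaper than spawning processes.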
Other solutions exist as well (for example, `concurrent.futures` provides a similar pool interface for both processes and threads), but whichever you pick:
A big issue arises when you have to share data between the processes/threads. For example, in your code each task writes into `actions`. If you have to share mutable state, welcome to concurrent programming, a much bigger task, and one of the hardest things to do right in software.