I've been profiling some code using Python's multiprocessing module (the 'job' function just squares the number).
import multiprocessing
import time

def job(x):
    return x * x  # the toy workload: square the number

n = multiprocessing.cpu_count()
data = range(100000000)
time1 = time.time()
processes = multiprocessing.Pool(processes=n)
results_list = processes.map(func=job, iterable=data, chunksize=10000)
time2 = time.time()
First of all, the question title has a typo: double "are".
About optimal chunksize:
A large chunksize reduces the communication overhead between the main process and the workers, while a small chunksize spreads the work more evenly across the workers. Since these two rules pull in opposite directions, a point in the middle is the way to go, similar to a supply-demand chart.
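As a sketch of how to locate that middle point empirically (assuming the same toy `job` function as above, and a smaller input so each run finishes quickly), one can time `pool.map` over a range of chunksize values and look for the minimum:

```python
import multiprocessing
import time

def job(x):
    return x * x  # same toy workload: square the number

def time_chunksize(pool, data, chunksize):
    # Time a single pool.map run with the given chunksize.
    start = time.time()
    pool.map(job, data, chunksize=chunksize)
    return time.time() - start

if __name__ == '__main__':
    data = range(200_000)  # smaller than the question's input, for a quick run
    with multiprocessing.Pool() as pool:
        for chunksize in (10, 1_000, 50_000):
            elapsed = time_chunksize(pool, data, chunksize)
            print(f"chunksize={chunksize:>6}: {elapsed:.3f}s")
```

The exact timings depend on the machine and the cost of `job`, so the sweet spot has to be measured rather than computed; very small chunks tend to be dominated by inter-process communication, very large ones by idle workers near the end of the run.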