I run across a lot of "embarrassingly parallel" projects I'd like to parallelize with the `multiprocessing` module. However, they often involve reading in huge files (greater than 2 GB), processing them line by line, running basic calculations, and then writing results. What's the best way to split a file and process it using Python's `multiprocessing` module? Should `Queue` or `JoinableQueue` from `multiprocessing` be used? Or the `Queue` module itself? Or should I map the file iterable over a pool of processes using `multiprocessing`? I've experimented with these approaches, but the overhead of distributing the data line by line is immense. I've settled on a lightweight pipe-filters design using `cat file | process1 --out-file out1 --num-processes 2 | process2 --out-file out2`, which passes a certain percentage of the first process's input directly to the second (see this post), but I'd like a solution contained entirely in Python.
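For reference, this is roughly the pool-based approach I've been experimenting with (a minimal sketch; `process_line` stands in for the real per-line calculation, and the file names and chunk size are arbitrary):

```
import multiprocessing as mp

def process_line(line):
    # Placeholder for the real per-line calculation.
    fields = line.split()
    return "\t".join(fields)

def main(in_path, out_path, num_procs=2):
    with mp.Pool(processes=num_procs) as pool, \
         open(in_path) as infile, \
         open(out_path, "w") as outfile:
        # imap keeps memory bounded; a large chunksize amortizes the
        # per-item pickling/IPC cost that makes line-by-line dispatch slow.
        for result in pool.imap(process_line, infile, chunksize=1000):
            outfile.write(result + "\n")

if __name__ == "__main__":
    main("file", "out1")
```

Even with a large chunksize, every line still gets serialized through the parent process, which I suspect is where the overhead comes from.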
Surprisingly, the Python documentation doesn't suggest a canonical way of doing this (despite a lengthy section on programming guidelines in the `multiprocessing` documentation).
Thanks, Vince
Additional information: Processing time per line varies. Some problems are fast and little more than I/O-bound; some are CPU-bound. The CPU-bound, non-dependent tasks will gain the most from parallelization, such that even inefficient ways of assigning data to a processing function would still be beneficial in terms of wall-clock time.
A prime example is a script that extracts fields from lines, checks for a variety of bitwise flags, and writes lines with certain flags to a new file in an entirely new format. This seems like an I/O-bound problem, but when I ran it with my cheap concurrent version with pipes, it was about 20% faster. When I run it with `Pool` and `map`, or a `Queue` in `multiprocessing`, it is always over 100% slower.
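For concreteness, the serial version of that script has roughly this shape (a simplified sketch; the column positions, flag value, and output format here are made up, not the real ones):

```
KEEP_FLAG = 0x10  # hypothetical bitmask; the real script checks several flags

def convert(in_path, out_path):
    with open(in_path) as infile, open(out_path, "w") as outfile:
        for line in infile:
            fields = line.rstrip("\n").split("\t")
            flags = int(fields[1])   # assumed: flags live in the second column
            if flags & KEEP_FLAG:
                # Write selected lines out in a different format.
                outfile.write(",".join((fields[0], fields[2])) + "\n")

if __name__ == "__main__":
    convert("file", "out2")
```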