Problems with multiprocessing vs. multithreading:
Threads all share the same startup code. Consider adding initialization logic for all threads versus all processes: with processes, every entry point into the program needs to be updated, which can be cumbersome. Recently I tried adding some signal handlers to a program to aid in debugging. Having lots of processes (like crons) made it annoying to track them all down. Additionally, with multiprocessing it may not even be possible to set this up, because you may not control main() everywhere. In my case, the rq worker spawned the Python VM and invoked my function directly, leaving no chance to install the handlers first.
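For contrast, here's a minimal sketch of the threaded case, with a hypothetical install_debug_handlers() helper of my own (the original setup isn't shown): because signal handlers are per-process, one call in main() covers every thread.

```python
import faulthandler
import signal
import threading
import time

def install_debug_handlers():
    # Hypothetical debugging aid: dump every thread's stack on SIGUSR1
    # (POSIX only). Signal handlers are per-process, so this single call
    # in main() covers all threads. With multiprocessing, every worker
    # process, including ones whose main() you don't control (e.g. rq
    # workers), would need the same setup.
    faulthandler.register(signal.SIGUSR1, all_threads=True)

def work(n):
    time.sleep(n)

def main():
    install_debug_handlers()  # one entry point, all threads covered
    threads = [threading.Thread(target=work, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    main()
```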
Using fork() (especially with Python) is an easy way around the GIL's limitation on CPU-bound tasks. However, it brings complications of its own.
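As a rough sketch of the workaround (the function and numbers are made up for illustration): a pool of forked worker processes runs pure-Python CPU work in parallel, where threads would serialize on the GIL.

```python
import multiprocessing as mp

def cpu_bound(n):
    # Pure-Python CPU work: threads would take turns on the GIL,
    # but separate processes each get their own interpreter.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    ctx = mp.get_context("fork")  # explicit fork start method; POSIX only
    with ctx.Pool(4) as pool:
        results = pool.map(cpu_bound, [10_000_000] * 4)
    print(results)
```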
fork(). The Python docs now explicitly call out that forking a process that has threads is unsupported (per POSIX). The main reason I see for using fork() is that you don't need to serialize (pickle) the data shared between forked processes. This is especially useful for sharing lambdas and other closures, which cannot be pickled or imported by name.
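A small sketch of that last point, assuming a made-up job closure: with the "fork" start method the child simply inherits the parent's memory, so an unpicklable lambda works; with "spawn" the same code fails at pickling time.

```python
import multiprocessing as mp

def main():
    data = {"answer": 42}
    # A closure over local state: not picklable, not importable by name.
    job = lambda: print("child sees", data["answer"])

    ctx = mp.get_context("fork")
    p = ctx.Process(target=job)  # fine: the child inherits memory via fork()
    p.start()
    p.join()

    # With the "spawn" start method this would raise a pickling error,
    # because the lambda would have to be serialized to the child:
    # mp.get_context("spawn").Process(target=job).start()

if __name__ == "__main__":
    main()
```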
Here are some things that are shared across forked processes: