How to use C extensions in python to get around GIL

Source: https://www.devze.com, 2023-01-12 07:22 (from the web)
I want to run a CPU-intensive program in Python across multiple cores and am trying to figure out how to write C extensions to do this. Are there any code samples or tutorials on this?


You can already break a Python program into multiple processes. The OS will already allocate your processes across all the cores.

Do this.

python part1.py | python part2.py | python part3.py | ... etc.

The OS will ensure that each part uses as many resources as possible. You can trivially pass information along this pipeline by using cPickle on sys.stdin and sys.stdout.
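A minimal sketch of one stage of such a pipeline, assuming each `partN.py` reads pickled work items from stdin and writes pickled results to stdout (the names `process` and `run_stage` are illustrative, not from the answer; modern Python uses `pickle` rather than `cPickle`):

```python
# One pipeline stage: unpickle items from an input stream until EOF,
# process each one, and pickle the results to an output stream.
import pickle
import sys

def process(item):
    # placeholder for this stage's CPU-intensive work
    return item * item

def run_stage(infile, outfile):
    """Read pickled objects from infile until EOF, write results to outfile."""
    while True:
        try:
            item = pickle.load(infile)
        except EOFError:
            break
        pickle.dump(process(item), outfile)
    outfile.flush()

# In an actual part2.py you would connect the stage to the pipe:
#     run_stage(sys.stdin.buffer, sys.stdout.buffer)
# (pickle requires the binary stream objects, hence .buffer)
```

Each stage in the shell pipeline above would be a script of this shape, and the OS schedules the stages on separate cores for free.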

Without too much work, this can often lead to dramatic speedups.

Yes -- to the haterz -- it's possible to construct an algorithm so tortured that it may not be sped up much. However, this often yields huge benefits for minimal work.

And.

The restructuring for this purpose will exactly match the restructuring required to maximize thread concurrency. So. Start with shared-nothing process parallelism until you can prove that sharing more data would help, then move to the more complex shared-everything thread parallelism.


Take a look at multiprocessing. It's an often-overlooked fact that operating systems prefer that you avoid globally shared data and avoid cramming loads of threads into a single process.
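A minimal sketch of the multiprocessing approach: each worker is a separate process with its own interpreter and its own GIL, so CPU-bound work spreads across cores (the function `cpu_bound` is a stand-in for your real workload):

```python
# Spread a CPU-bound function across cores with multiprocessing.Pool.
from multiprocessing import Pool

def cpu_bound(n):
    # stand-in for real CPU-intensive work
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # The __main__ guard is required on platforms that spawn workers
    # by re-importing this module.
    with Pool() as pool:  # defaults to os.cpu_count() workers
        results = pool.map(cpu_bound, [100_000] * 8)
    print(len(results))  # prints 8
```

`pool.map` behaves like the built-in `map`, but the calls run in parallel across the worker processes, with arguments and results pickled across process boundaries.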

If you still insist that your CPU intensive behaviour requires threading, take a look at the documentation for working with the GIL in C. It's quite informative.


This is a good use of a C extension. The keyword you should search for is Py_BEGIN_ALLOW_THREADS.

http://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock

P.S. To be clear: if your processing is already in C, as with image processing, then releasing the lock in a C extension works well. If your processing code is mainly in Python, the other suggestions to use multiprocessing are better. It is usually not justified to rewrite the code in C just for background processing.
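You can observe the effect of this pattern without writing C yourself: several stdlib functions implemented in C release the GIL around their heavy work. For example, hashlib's digest functions release the GIL while hashing large buffers, using the same Py_BEGIN_ALLOW_THREADS mechanism the answer describes, so plain threads can occupy multiple cores for that C-level work:

```python
# Threads hashing large buffers can run in parallel because the C code
# inside hashlib releases the GIL for large inputs.
import hashlib
import threading

def hash_chunk(data, out, idx):
    out[idx] = hashlib.sha256(data).hexdigest()

data = b"x" * (16 * 1024 * 1024)  # 16 MiB per chunk
digests = [None] * 4
threads = [threading.Thread(target=hash_chunk, args=(data, digests, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(all(d == digests[0] for d in digests))  # True
```

A hand-written extension that wraps its CPU loop in Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS behaves the same way from the Python side.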


Have you considered using one of the python mpi libraries like mpi4py? Although MPI is normally used to distribute work across a cluster, it works quite well on a single multicore machine. The downside is that you'll have to refactor your code to use MPI's communication calls (which may be easy).


multiprocessing is easy. If that's not fast enough, your question is complicated.
