I have a PHP script that must run 30 times in parallel, each with a different argument. What is the best way to do this so that each instance gets as even a share of the processor as possible?
Problem description
Like some other users have said (I agree), you should give a bit more explanation, and maybe code samples. For example, should these tasks run forever, or just once each time the PHP script is called?
Message Queue
First off, I think that if possible you should avoid running so many tasks at once, and instead schedule them (be gentle to the machine) through a message queue such as beanstalkd.
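For example, a producer could enqueue the 30 jobs and let a small pool of workers pull them at their own pace. A minimal sketch, assuming the Pheanstalk client library for beanstalkd (the constructor and method names vary between Pheanstalk versions):

```php
<?php
// Producer: enqueue 30 jobs instead of launching 30 processes at once.
require 'vendor/autoload.php';

use Pheanstalk\Pheanstalk;

$queue = Pheanstalk::create('127.0.0.1'); // Pheanstalk v4-style constructor
$queue->useTube('tasks');

for ($i = 0; $i < 30; $i++) {
    $queue->put((string) $i); // each job body carries one argument
}
```

A handful of worker processes can then reserve() jobs one at a time, so the machine never has to run all 30 at once.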
PHP solution
I don't think PHP is the right tool for your problem, because it has no thread model. Threads are lightweight, while creating a new process is heavy. You could do it the way stroncium explains. My opinion is that running this code on a shared host would not be appreciated: if every user ran long-running processes like this, they would over-utilize the server.
Quote from nettuts
There's no better resource than PHP's creator for knowing what PHP is capable of. Rasmus Lerdorf created PHP in 1995, and since then the language has spread like wildfire through the developer community, changing the face of the Internet. However, Rasmus didn't create PHP with that intent. PHP was created out of a need to solve web development problems.
However, you can't use PHP for everything. Lerdorf is the first to admit that PHP is really just a tool in your toolbox, and that even PHP has limitations.
Better language
As I said previously, I don't think PHP is the right tool. Some languages that I think could solve the problem better:
- Java
- Python
- C
Of course many more languages that support a thread model are the right tool for the job, but PHP wasn't originally designed for tasks like this. Even the creator of PHP, Rasmus Lerdorf, confirms this. You can read about it in this list from nettuts, which I think makes some pretty good points.
Google App Engine
Lastly, I would advise you to have a look at the Task Queue API from Google App Engine, because it is also a really good option ;). I might even consider it the best option. You get a free quota, and the costs are fair if you exceed it. The task queue uses webhooks, so the hooks themselves can be coded in PHP.
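Since the hooks can be plain PHP, the worker end could look something like this. The payload format and the endpoint name are assumptions for the sake of the sketch:

```php
<?php
// worker.php - hypothetical webhook endpoint that the task queue POSTs to.
$payload = json_decode(file_get_contents('php://input'), true);
$arg = $payload['arg'] ?? null;

// ... do the actual work for this one argument ...

http_response_code(200); // a 2xx response tells the queue not to retry
```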
PHP itself has no thread support, but you can simply run a few copies of your script simultaneously using popen() or proc_open(). Sometimes multi-cURL is used for this purpose (when popen() and the like are restricted).
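A minimal sketch with proc_open(): launch 30 copies of a worker script (worker.php is a placeholder) and wait for them all to finish:

```php
<?php
// Spawn 30 parallel copies of the script, one argument each.
$procs = [];
for ($i = 0; $i < 30; $i++) {
    $cmd = 'php worker.php ' . escapeshellarg((string) $i);
    // Empty descriptor spec: the children inherit our stdin/stdout/stderr.
    $procs[] = proc_open($cmd, [], $pipes);
}

// proc_close() blocks until the given child exits, so this waits for all.
foreach ($procs as $p) {
    if (is_resource($p)) {
        proc_close($p);
    }
}
```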
I don't think it's CPU affinity that you have to worry about (so much); it's how I/O-bound each process is bound (pardon the pun) to become.
If you're using a UNIX-like operating system, you can try using the nice command to adjust the priority of processes that you predict will be doing more disk / network / database access, but I don't think you'll see any significant speed-up.
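For instance, you could start the heavier workers at a lower priority; worker.php and the nice level here are placeholders:

```php
<?php
// Launch one worker in the background at reduced scheduling priority.
$arg = escapeshellarg('5');
exec("nice -n 10 php worker.php {$arg} > /dev/null 2>&1 &");
```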
If all processes are going to handle the same amount of I/O, you are probably better off just letting the kernel's scheduler do its job.
A little more information regarding what your jobs are actually accomplishing would be extremely helpful.
If you run it from the CLI, you can fork 29-30 child processes and run the code there. You can have one main process with open sockets to each child, or chain them serially if you want to. Mostly you'd have to trust the kernel to balance the processes if they have the same priority.
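A rough sketch of the forking approach with pcntl_fork(), which requires the pcntl extension and a CLI context; do_work() is a stand-in for the real per-argument job:

```php
<?php
$pids = [];
for ($i = 0; $i < 30; $i++) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("fork failed\n");
    }
    if ($pid === 0) {
        do_work($i); // hypothetical per-argument job
        exit(0);     // a child must exit, or it will keep forking too
    }
    $pids[] = $pid;  // parent: remember the child
}

// Reap every child so none are left as zombies.
foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}
```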
Given the simplicity of the question, I suggest you look for the simplest answer. Off the top of my head, I'd say you might consider using one instance that loops through the 30 arguments.
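That would be nothing more than this (do_work() standing in for whatever the script actually does per argument):

```php
<?php
// Simplest option: one process handling all 30 arguments sequentially.
for ($i = 0; $i < 30; $i++) {
    do_work($i); // hypothetical per-argument job
}
```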