I want to use multiple threads in Python to calculate pixel values for an image, to be constructed at the end, but I'm having trouble figuring out how to get the results back from the threads and collect them. Here's the setup:
A Queue.Queue() object is created, as well as a threading.Thread() subclass:

    import Queue
    import threading

    q = Queue.Queue()

    class myThread(threading.Thread):
        def __init__(self, queue):
            self.queue = queue
            threading.Thread.__init__(self)

        def run(self):
            while True:  # loop forever
                task = self.queue.get()
                rs = self.do_work(task)  # I've got the result; now what to do with it?
                self.queue.task_done()
The idea is that I want to collect pixel data for a 500x500 image, which starts as a list of 250,000 (500x500) elements and will eventually be made into an image with PIL:

    import array
    from PIL import Image

    pixels = array.array('B', pixels).tostring()
    im = Image.fromstring('L', size, pixels)
    im.show()
So I fill the queue with a task for each pixel and spawn a pool of threads:

    for i in range(5):
        t = myThread(q)
        t.setDaemon(True)
        t.start()

    for y in range(500):
        for x in range(500):
            q.put({'x': x, 'y': y})

    q.join()
So how do I get all the data collected? I think it would be a bad idea to pass the 250,000-element list to every thread, both because of the size of the data being passed around and because each thread would then be missing the data from the other threads.
EDIT: For those of you wondering whether it's worth doing this in a multi-threaded manner at all, the work being done to calculate each pixel value is several Perlin noise functions. It generates a 2D Perlin noise array of points (a 5x5 grid), plus several octaves (10x10, 20x20, and 40x40 grids), and calculates the pixel values between those points. So for each pixel in the final image, it has to do three math operations per octave (average the X points around the given point, average the Y points around the given point, and average those averages), and then take a weighted average of the octave results.
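To make the per-pixel cost concrete, here is a rough sketch of the octave-blending step described above. This is not the asker's actual noise code: `toy_noise` is a hypothetical stand-in for one octave of Perlin noise, and the halving weights are an assumption; only the "weighted average of several octaves" structure comes from the question.

```python
import math

def toy_noise(x, y, grid):
    # Hypothetical stand-in for one octave of Perlin noise sampled
    # on a grid x grid lattice; real Perlin noise would go here.
    return 0.5 + 0.5 * math.sin(x * grid) * math.cos(y * grid)

def pixel_value(x, y, octaves=(5, 10, 20, 40)):
    # Blend the octaves with a weighted average; each finer octave
    # contributes half as much as the previous one (an assumption).
    total, weight_sum = 0.0, 0.0
    weight = 1.0
    for grid in octaves:
        total += weight * toy_noise(x, y, grid)
        weight_sum += weight
        weight /= 2.0
    return int(255 * total / weight_sum)  # scale to a 0-255 greyscale byte
```

Even this toy version does a handful of floating-point operations per octave per pixel, so 250,000 pixels adds up quickly.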
On my 8-core Mac, I see the Python process using one thread and 100% of the processor when running. I know I have 8 cores and have seen processes report 400-600% processor use when they're taking advantage of the other cores, and I was hoping this Python script could do the same.
Python has a global lock for modifying Python-level data structures, called the GIL (Global Interpreter Lock). This makes it difficult to do the sort of thing you want to do efficiently with threads.

But despair not! The kind developers have given us the multiprocessing module. Replace threading with multiprocessing (using multiprocessing.Process and multiprocessing.Queue instead) and, voilà, your application is a multiprocess application.

As to your question, you want to have another queue, going in the other direction.
As to your question, you want to have another queue, going in the other direction.
I think you should use two queues: one for the jobs/tasks, one for the outputs. Once a task is completed, put the result on the output queue.
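Applied to the question's thread class, the two-queue idea looks something like the sketch below (written with Python 3's queue module name; in Python 2, as in the question, it's Queue). It's scaled down to 10x10, and the `(x + y) % 256` line is a placeholder for the real pixel math. Because each worker puts its result before calling task_done(), every result is on the output queue by the time tasks.join() returns, so the main thread can safely drain it.

```python
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

class Worker(threading.Thread):
    def __init__(self, tasks, results):
        threading.Thread.__init__(self)
        self.tasks = tasks
        self.results = results
        self.daemon = True

    def run(self):
        while True:
            task = self.tasks.get()
            value = (task['x'] + task['y']) % 256  # stand-in for the pixel math
            self.results.put((task['x'], task['y'], value))
            self.tasks.task_done()

for _ in range(5):
    Worker(tasks, results).start()

for y in range(10):
    for x in range(10):
        tasks.put({'x': x, 'y': y})
tasks.join()                   # blocks until every task_done() has been called

pixels = [0] * (10 * 10)
while not results.empty():     # safe here: all work finished before join() returned
    x, y, value = results.get()
    pixels[y * 10 + x] = value
```

Note that because each result carries its (x, y) coordinates, it doesn't matter what order the workers finish in; the main thread places each value at the right index.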
I would have a global list that can be accessed by every thread. I actually had a situation like that and did it that way without problems.