
Moving Parallel Python code to the cloud

Upon hearing that the scientific computing project (it happens to be the stochastic tractography method described here) I'm currently running for an investigator would take 4 months on our 50-node cluster, the investigator asked me to examine other options. The project currently uses Parallel Python to farm out chunks of a 4D array to different cluster nodes and put the processed chunks back together.

The jobs I'm currently working with are probably much too coarsely grained (5 seconds to 10 minutes; I had to increase the default timeout in Parallel Python), and I estimate I could speed up the process by 2-4 times by rewriting it to make better use of resources (splitting the data up and putting it back together is taking too long, so that should be parallelized as well). Most of the work is done with numpy arrays.
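
For context, the current split/submit/reassemble pattern looks roughly like the sketch below; the function name, array shape, chunk count and split axis are illustrative stand-ins, not the real project code:

    import numpy as np
    import pp

    def process_chunk(chunk):
        # stand-in for the per-chunk tractography computation
        return chunk * 2.0

    data = np.random.rand(16, 64, 64, 64)       # stand-in for the real 4D array
    chunks = np.array_split(data, 8, axis=0)    # split along the first axis

    job_server = pp.Server(ppservers=("*",))    # "*" autodiscovers cluster nodes
    jobs = [job_server.submit(process_chunk, (c,), modules=("numpy",))
            for c in chunks]

    # calling a job object blocks until its result is available
    result = np.concatenate([job() for job in jobs], axis=0)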

Let's assume that 2-4 times isn't enough, and I decide to get the code off of our local hardware. For high throughput computing like this, what are my commercial options and how will I need to modify the code?


You might be interested in PiCloud. I have never used it, but their offer apparently includes the Enthought Python Distribution, which covers the standard scientific libraries.

It's tough to say if this will work for your specific case, but the Parallel Python interface is pretty generic. So hopefully not too many changes would be needed. Maybe you can even write a custom scheduler class (implementing the same interface as PP). Actually that might be useful for many people, so maybe you can drum up some support in the PP forum.
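
To make that concrete, a shim exposing PP's submit()/job() convention on top of a cloud backend might look roughly like the sketch below. It assumes PiCloud's client exposes cloud.call() and cloud.result() (that is from memory, so check their docs), and it ignores depfuncs/modules on the assumption that PiCloud pickles the function and its dependencies itself:

    import cloud  # PiCloud client library (assumed API: cloud.call / cloud.result)

    class CloudJob(object):
        """Mimics a Parallel Python job: calling it blocks and returns the result."""
        def __init__(self, jid):
            self.jid = jid
        def __call__(self):
            return cloud.result(self.jid)

    class CloudScheduler(object):
        """Rough stand-in for pp.Server, backed by PiCloud."""
        def submit(self, func, args=(), depfuncs=(), modules=()):
            jid = cloud.call(func, *args)
            return CloudJob(jid)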


The most obvious commercial options which come to mind are Amazon EC2 and the Rackspace Cloud. I have played with both and found the Rackspace API a little easier to use.

The good news is that you can prototype and play with their compute instances (short- or long-lived virtual machines running the OS of your choice) for very little investment, typically US$0.10/hr or so. You create them on demand and release them back to the cloud when you are done, paying only for what you use. For example, I saw a demo on Django deployment using 6 Rackspace instances; it took perhaps an hour and cost the speakers less than a dollar.
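
As a rough illustration of that create-on-demand workflow with boto for EC2 (the AMI id, key pair and instance type below are placeholders):

    import time
    import boto.ec2

    # credentials come from ~/.boto or environment variables
    conn = boto.ec2.connect_to_region("us-east-1")

    # launch a worker on demand (placeholder AMI, key pair and size)
    reservation = conn.run_instances("ami-xxxxxxxx",
                                     instance_type="m1.small",
                                     key_name="my-keypair")
    instance = reservation.instances[0]

    while instance.update() != "running":
        time.sleep(5)                        # poll until the instance is up

    print(instance.public_dns_name)          # hand this to your job layer

    # ... run your jobs ...

    instance.terminate()                     # release it and stop paying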

For your use case (not clear exactly what you meant by 'high throughput'), you will have to look at your budget and your computing needs, as well as your total network throughput (you pay for that, too). A few small-scale tests and a simple spreadsheet calculation should tell you if it's really practical or not.
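
To give a feel for the spreadsheet, here is the roughest possible estimate using the 50-node / 4-month figure from the question and the ~US$0.10/hr rate above; it ignores bandwidth charges, instance sizing, and whether cloud cores are faster or slower than your cluster's:

    nodes       = 50       # size of the current cluster
    months      = 4        # projected runtime on that cluster
    hourly_rate = 0.10     # approx. US$/hr per small instance (see above)

    node_hours   = nodes * months * 30 * 24     # ~144,000 node-hours
    compute_cost = node_hours * hourly_rate     # ~US$14,400 in compute alone
    print("%d node-hours, about $%.0f" % (node_hours, compute_cost))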

There are Python APIs for both the Rackspace Cloud and Amazon EC2. Whichever you use, I recommend the Python-based Fabric for automated deployment and configuration of your instances.
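
A minimal Fabric (1.x-style) sketch of what that deployment step could look like; the host names, package list and archive name are placeholders:

    # fabfile.py -- run with e.g. `fab -H host1,host2 setup_worker`
    from fabric.api import put, run, sudo

    def setup_worker():
        """Install numpy and push the worker code to one cloud instance."""
        sudo("apt-get -y install python-numpy")      # or install EPD instead
        put("worker.tar.gz", "/tmp/worker.tar.gz")   # placeholder code archive
        run("tar xzf /tmp/worker.tar.gz -C ~/")
        # finally start ppserver.py (or your own worker process) on the instance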
