Distributing CPU-bound compression jobs to multiple computers?

The other day I needed to archive a lot of data on our network, and I was frustrated that I had no immediate way to harness the power of multiple machines to speed up the process.

I understand that creating a distributed job management system is a leap from a command-line archiving tool.

I'm now wondering what the simplest solution to this kind of distributed-performance scenario could be. Would a custom tool always be a requirement, or are there ways to use standard utilities and somehow distribute their load transparently at a higher level?

Thanks for any suggestions.


One way to tackle this might be to use a distributed make system to run scripts across networked hardware. This is (or used to be) an experimental feature of (some implementations of) GNU Make. Solaris implements a dmake utility for the same purpose.

Another, more heavyweight, approach might be to use Condor to distribute your archiving jobs. But you probably wouldn't install Condor just for twice-yearly archiving runs; it's more of a system for regularly scavenging spare cycles from networked hardware.

The SCons build system, which is really a Python-based replacement for make, could probably be persuaded to hand work off across the network.
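As a rough illustration of the SCons route (the directory names below are made up), an SConstruct like this gets you the local-parallelism half for free via scons -j; handing the individual jobs off to other machines would still need extra glue on top:

    # SConstruct -- a minimal sketch; directory names are hypothetical.
    # "scons -j 4" runs the tar jobs in parallel on the local box only;
    # distributing them across the network requires additional wrapping.
    env = Environment()

    for d in ['projects', 'home_dirs', 'build_output']:
        # One tarball target per directory; tar/gzip does the CPU-bound work.
        env.Command('%s.tar.gz' % d, d, 'tar czf $TARGET $SOURCE')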

Then again, you could simply use scripts that ssh into networked PCs and start the jobs there.
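As a sketch of that last idea, assuming key-based ssh access and that the data is visible at the same paths on every machine (the hostnames and directories below are invented), a short Python script can fan the compression jobs out over ssh:

    #!/usr/bin/env python3
    # Sketch only: hostnames and paths are placeholders, and it assumes
    # passwordless ssh plus a shared view of the filesystem (e.g. the
    # same NFS mounts) on every worker machine.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    WORKERS = ["host1", "host2", "host3"]                     # machines reachable via ssh
    DIRS = ["/data/projects", "/data/home", "/data/logs"]     # what to archive

    def archive_on(host, directory):
        """Run a CPU-bound tar+gzip job on a remote host via ssh."""
        cmd = ["ssh", host, "tar", "czf",
               directory.rstrip("/") + ".tar.gz", directory]
        return subprocess.run(cmd).returncode

    # Round-robin the directories across the worker machines.
    with ThreadPoolExecutor(max_workers=len(WORKERS)) as pool:
        jobs = [pool.submit(archive_on, WORKERS[i % len(WORKERS)], d)
                for i, d in enumerate(DIRS)]
        results = [j.result() for j in jobs]

    print("exit codes:", results)

Each directory becomes one remote job here, so the speed-up is limited by how evenly the data happens to be split; finer-grained chunking would need a bit more scripting.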

So there are a few ways you could approach this without having to take up parallel programming with all the fun that that entails.
