Virtual processor and advanced networking (Linux & Windows)

Hello everyone, before I launch my question:

if you don't want to read, skip to the question.

Assume:

  1. I have access to both Linux (Ubuntu) & Windows (XP and up, except for Windows Vista) in huge quantities.
  2. I'm familiar with assembler, and I have a good if not advanced grasp of C++.
  3. I'm familiar with both Windows (expert) & Linux (intermediate).
  4. I'm familiar with drivers (Windows).
  5. I'm good with networks, and have set up a few of my own.
  6. I'm willing to go to amazing lengths to achieve this (not with money, but with time/effort).
  7. I know about at least a few practical problems (memory locations).

Over the course of the years I've gathered a big number of computers (50) that I don't use anymore. Recently I started thinking about what a waste that is, and I wanted to recycle those computers / make use of them again. And so this idea was born: I want to bundle them all on a network and forward their combined CPU power to my host computer. I already have some sort of cluster computer installed; it's currently only 5 computers big, and the network shares its hard disks/monitors/keyboards/mice. I want to add processors to that list.

Question:

  1. Can I (and if so, how should I proceed) combine all the processors of the other computers spread over my network, and make them appear to my Windows Server 2008 R2 host computer as a processor (or processors)? 1a. Can I simulate a processor?

  2. How can I get the computing power/results from one computer over to the host computer as quickly as possible?

  3. How can I share all my physical memory over this network (RAM, ranging from DDR1 to DDR3)?

Thanks in advance ;-)

PS:

I do realize this will be almost impossible to achieve; see assumption 6.

EDIT:

I'm aware of distributed programs; I've read about and experimented with them. But I find they don't suit my needs, since I want to run native PE executables, not custom-built binaries.

But thanks for the suggestions everyone :D


If you want to simulate a set of virtual CPUs in Windows Server 2008, and actually gain more power than you lose in network latency, then I don't think you can do that single-handedly in your lifetime. Maybe you can; in that case I will recommend you to every employer I know.

What you could do is choose a uniform Linux distribution and run it (as a virtual machine on the Windows boxes, or natively) on all systems. Install an implementation of MPI on each, and have them connect to each other. Now you can write distributed applications like they do on supercomputers. I would recommend 10 Gb Ethernet connections between the nodes. You will have to write special MPI applications to take advantage of the power.
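To give an idea of what such an MPI application looks like, here is a minimal sketch. It assumes an MPI implementation such as Open MPI or MPICH is installed on every node and that a hostfile lists them; the workload is a dummy sum, standing in for real per-node work.

```cpp
// Minimal MPI sketch: each rank (one per node/core) computes a partial sum
// of a dummy workload, then rank 0 gathers the total with MPI_Reduce.
// Build:  mpicxx -O2 mpi_sum.cpp -o mpi_sum
// Run:    mpirun --hostfile hosts -np 50 ./mpi_sum   (hostfile/count are examples)
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Split [0, N) evenly across the ranks; the last rank takes the remainder.
    const long long N = 100000000LL;
    long long chunk = N / size;
    long long begin = rank * chunk;
    long long end   = (rank == size - 1) ? N : begin + chunk;

    double local = 0.0;
    for (long long i = begin; i < end; ++i)
        local += i * 0.5;   // stand-in for the actual per-node computation

    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("total = %f (computed by %d ranks)\n", total, size);

    MPI_Finalize();
    return 0;
}
```

The point is that the parallelism lives in the application: MPI only moves data between nodes when you explicitly ask it to, so a stock Windows PE executable gains nothing from it.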


I'd suggest sticking to a distributed programming environment like OpenMP or Beowulf, so that your application is distributed rather than the OS itself (OpenMP covers the cores within one machine; a Beowulf-style cluster covers the machines).
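For completeness, a minimal OpenMP sketch; note that OpenMP on its own only parallelizes across the cores of a single box, so distribution across the cluster would still come from MPI or a Beowulf-style job setup. The loop body is just a placeholder workload.

```cpp
// Minimal OpenMP sketch: the pragma splits the loop across the local cores.
// Build: g++ -O2 -fopenmp omp_sum.cpp -o omp_sum
#include <omp.h>
#include <cstdio>

int main() {
    const long long N = 100000000LL;
    double sum = 0.0;

    // Each thread sums part of the range; the reduction combines the results.
    #pragma omp parallel for reduction(+:sum)
    for (long long i = 0; i < N; ++i)
        sum += i * 0.5;   // placeholder for real work

    std::printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}
```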


A project I'm aware of implements this under (a modified) Linux kernel. It is called Mosix; there are now several variants.

This is the only one I know of which is a real single-system-image cluster and claims to work.

However, I am extremely skeptical about the ability of your project to do anything useful; in practice it would probably be a better use of (electrical) power to plug in an iPhone than a PC which is 5 years old.

On top of that, moving data around a network incurs real costs. A few types of computing problems require little data to be moved around but a lot of processing - examples are cracking encryption keys and 3D rendering. However, many other problem domains need lots of data to be in lots of places, which in practice seems to warrant the use of very expensive networking. Most scientific clusters seem to use high-speed networks (faster than gigabit Ethernet).

Expensive networking for cheap CPUs seems like a false economy if the same result can be had more easily with more expensive CPUs and no networking at all. You can get something like 16 cores in a 1U rack server nowadays, giving you 16 cores with a very fast interconnect and no extra networking cost.


You should look into SSI (Single System Image) software. This software, running on top of several computers (nodes), can give you the picture of a single OS running on a machine with a lot of CPUs. But, unfortunately, most SSI systems only distribute processes between nodes (some do that distribution at program start, others can migrate a whole running process to a different node). I know of no SSI system capable of distributing the threads of a single process.
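To illustrate the process-vs-thread point with a hypothetical workload: a program that forks worker processes gives an SSI kernel something it can migrate to other nodes, whereas a single multithreaded process stays on one node. The migration itself is done transparently by the SSI kernel, not by this code.

```cpp
// Sketch of a process-per-worker layout, the kind of parallelism an SSI
// cluster (openMosix, Kerrighed, ...) can actually spread across nodes.
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

static void do_work(int id) {
    // Stand-in for a CPU-heavy task; each forked copy is an independent
    // process that the SSI kernel may move to another node.
    double x = 0.0;
    for (long i = 0; i < 100000000L; ++i)
        x += i * 0.5;
    std::printf("worker %d done (%f)\n", id, x);
}

int main() {
    const int workers = 8;   // example count
    for (int i = 0; i < workers; ++i) {
        pid_t pid = fork();
        if (pid == 0) {      // child: becomes a separately migratable process
            do_work(i);
            std::exit(0);
        }
    }
    for (int i = 0; i < workers; ++i)
        wait(nullptr);       // parent collects all workers
    return 0;
}
```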

Another option would be to find virtualization software which is itself capable of running on top of MPI/TCP/any fast network. I know of no such system either. E.g. there was a thread on the qemu mailing list: http://www.mail-archive.com/qemu-devel@nongnu.org/msg18338.html

A rather good text on the topic: http://perilsofparallel.blogspot.com/2009/01/multi-multicore-single-system-image_9797.html


I would also recommend Kerrighed (http://www.kerrighed.org/wiki/index.php/Main_Page).

It is an SSI (Single System Image) variant of Linux which can simulate shared memory:

Main targeted features are:

* Support for cluster-wide shared memory

This is turned on via the USE_REMOTE_MEMORY Kerrighed capability.
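If I understand the feature correctly, that means ordinary System V shared memory code like the sketch below works cluster-wide once the capability is enabled, so processes that have been migrated to different nodes can still attach the same segment. The key value and size here are arbitrary examples, and the cluster-wide behaviour is an assumption about Kerrighed, not something this code enforces.

```cpp
// Plain System V shared memory usage; nothing Kerrighed-specific in the code.
// Under Kerrighed with USE_REMOTE_MEMORY enabled, the segment should be
// visible to cooperating processes even if they run on different nodes.
#include <sys/ipc.h>
#include <sys/shm.h>
#include <cstdio>
#include <cstring>

int main() {
    key_t key = 0x4B52;                         // arbitrary example key
    int shmid = shmget(key, 4096, IPC_CREAT | 0666);
    if (shmid < 0) { std::perror("shmget"); return 1; }

    char* mem = static_cast<char*>(shmat(shmid, nullptr, 0));
    if (mem == reinterpret_cast<char*>(-1)) { std::perror("shmat"); return 1; }

    std::strcpy(mem, "hello from one node");    // another process can read this
    std::printf("wrote: %s\n", mem);

    shmdt(mem);                                 // detach; segment persists
    return 0;
}
```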
