andrew@dtg.nsc.com (Lord Snooty @ The Giant Poisoned Electric Head ) (05/02/90)
<0093608E.3DCAF480@KING.ENG.UMD.EDU>, sysmgr@KING.ENG.UMD.EDU (Doug Mohney):
> I know this will sound hairbrained, but you only live once...
> Couldn't you take a room full of workstations or PCs and (through the
> appropriate software) treat them as one big parallel processing
> machine? Of course, you would have to set up a scheduler, figure out
> how to parallelize your code/problem accordingly ......

Not harebrained at all. Try IEEE Spectrum this month for the article by
David Gelernter, "Stealing Idle Cycles" (or close to that), on Linda and
his "hypercomputer". It seems the MIT Media Lab already has something
along these lines running.
--
...........................................................................
	Andrew Palfreyman	andrew@dtg.nsc.com
	Albania during April!
rackow@abacus.mcs.anl.gov (05/02/90)
In response to questions about the availability of netlib: it has moved
from Argonne to Oak Ridge.

Netlib is a computer program that responds to electronic mail; in
particular, it responds to certain key words in the body of the mail
message. Netlib is used to distribute public-domain mathematical
software.

The machine running netlib at Argonne National Laboratory has moved to
Oak Ridge National Laboratory. Therefore the old Argonne National
Laboratory address, netlib@mcs.anl.gov, is no longer valid, and mail for
netlib should be sent to the new Oak Ridge National Laboratory address,
netlib@ornl.gov. A machine at Bell Labs continues to run netlib as well;
the address there is netlib@research.att.com.

To get started with netlib, send a mail message to either netlib address
given above, with the single line ``send index'' as the body of the
message.

The contacts for netlib questions are Jack Dongarra
(dongarra@cs.utk.edu) at Oak Ridge National Laboratory and Eric Grosse
(ehg@research.att.com) at AT&T Bell Labs.

	--The Argonne MCS division support staff (support@mcs.anl.gov)
Publius@dg.dg.com (Publius) (05/02/90)
In article <0093608E.3DCAF480@KING.ENG.UMD.EDU> sysmgr@KING.ENG.UMD.EDU
(Doug Mohney) writes:
>I know this will sound hairbrained, but you only live once...
>
>Couldn't you take a room full of workstations or PCs and (through the
>appropriate software) treat them as one big parallel processing
>machine? Of course, you would have to set up a scheduler, figure out
>how to parallelize your code/problem accordingly (obviously this wouldn't
>work for a linear-type problem), and be able to parse everything out on
>the network to each one of your nodes.
>
>You would also have to tolerate Ethernet as your 10Mb/second bus, something
>which the more religious would consider blasphemous....but I guess it
>could be done.
>
>Need another set of nodes? Shut down a terminal room for a weekend ;-).
>
>	Doug

Well, there are two sorts of issues here: one is hardware and the other
is software.

On the hardware side, a network of workstations and PCs does not share
memory at the machine-code level, so you have a totally different
programming model. This can be overcome by implementing virtual
addressing across the network, but then you need a lot of additional
software, and what kind of performance you will get out of it is an
open question.

On the software side, you have a network of operating systems instead of
a network operating system. Each operating system has control only over
its own resources. Besides, there are problems like name resolution, ......
--
Disclaimer: I speak (and write) only for myself, not my employer.

Publius      "Old federalists never die, they simply change their names."
publius@dg-pag.webo.dg.com
schang@netcom.UUCP (Sehyo Chang) (05/02/90)
In article <E!sw=m7@cs.psu.edu> schwartz@numenor.endor.cs.psu.edu
(Scott E. Schwartz) writes:
>In article <3841@munnari.oz.au> steve@mullian.ee.mu.OZ.AU (Steve Mabbs) writes:
>
>>Doug Mohney writes:
>>>Couldn't you take a room full of workstations or PCs and (through the
>>>appropriate software) treat them as one big parallel processing
>
>>I have collected the following list of parallel programming packages
>>over the net recently. Some of them may also be applicable. Anyone care to
>>comment?
>>
>>Cosmic - ?
>
>The Cosmic Environment, from Caltech. This basically gives you the
>same development environment for your network of workstations as you
>get on an Intel hypercube. It's very easy to use, but it can be
>fragile when faced with process/host/network failures.

The Cosmic Environment running on a workstation is just a simulator;
that means you can't spawn a process onto another workstation which is
idle. There might be a newer version of Cosmic which lets you hook up
multiple workstations. Cosmic is also available for the Symult S2010.

Basically, Cosmic gives the programmer the view of a fully connected
network of CSP processes (even though the underlying topology might be a
hypercube, Ethernet, etc.). The basic mode of communication is message
passing: message sends are asynchronous, while message receives can be
either synchronous or asynchronous.

One of the problems with Cosmic was that it didn't provide transparent
byte ordering; the user has to translate each message to the host's
byte-order format.
--
Sehyo Chang				schang@netcom.uucp
Ascent Logic Corp.			ucvbax!ames!claris!netcom!schang
(408)943-0630
henry@utzoo.uucp (Henry Spencer) (05/03/90)
In article <0093608E.3DCAF480@KING.ENG.UMD.EDU> sysmgr@KING.ENG.UMD.EDU (Doug Mohney) writes: >Couldn't you take a room full of workstations or PCs and (through the >appropriate software) treat them as a one big parallel processing >machine? ... Yes, sort of. Many people have done this sort of thing. The hard part is the usual problem of replacing a pair of oxen with a hundred chickens: figuring out how to parallelize the problem in a way suited to the hardware. -- If OSI is the answer, what is | Henry Spencer at U of Toronto Zoology the question?? -Rolf Nordhagen| uunet!attcan!utzoo!henry henry@zoo.toronto.edu
eugene@wilbur.nas.nasa.gov (Eugene N. Miya) (05/03/90)
Clue: the best way to answer this is to note that if it were this easy,
it would have been done by now. For some problems and particular
algorithms it is possible and easy; for problems of general interest the
answer is NO. Part of the problem lies in software, another part in
algorithms. There is a vast body of literature on why this will and
won't work. Hypercubes are part of the PC approach. But the software
ain't there yet, and may never be. Throwing more hardware onto an
already "slow" problem can make it slower.

--e. nobuo miya, NASA Ames Research Center, eugene@orville.nas.nasa.gov
  {uunet,mailrus,other gateways}!ames!eugene
rpw3@rigden.wpd.sgi.com (Rob Warnock) (05/04/90)
In article <892@blenheim.nsc.com> andrew@dtg.nsc.com writes:
+---------------
| <0093608E.3DCAF480@KING.ENG.UMD.EDU>, sysmgr@KING.ENG.UMD.EDU (Doug Mohney) :
| > I know this will sound hairbrained, but you only live once...
| > Couldn't you take a room full of workstations or PCs and (through the
| > appropriate software) treat them as a one big parallel processing machine?
| Not harebrained at all. Try IEEE Spectrum this month for the article by
| David Gelernter, "Stealing Idle Cycles" (or close) in the context of Linda
| and his "hypercomputer". It seems MIT Media Labs have something on these
| lines already running.....
+---------------

Remember, the original Worm was fiction (John Brunner's "The Shockwave
Rider"), but the first real one (well, the first highly publicized
operational one) was the Xerox PARC Worm (published in CACM in 1982),
which was exactly what you describe: a distributed computation that
would "borrow" idle machines, then leave when their owners did something
on them (touched the keyboard or mouse).

"Oh, my god, professor! I thought I had all the proper containment
safeguards in place, but my distributed F9 factoring program has leaked
out onto the Internet! I'm gonna get arrested for sure! What am I going
to DOOooo...?!?!?"

"Now, now, my son. Here, first call CERT and then the FBI, and then
we'll see if we can find you a lawyer who'll take your case."

;-} ;-} ;-}

-Rob

-----
Rob Warnock, MS-9U/510		rpw3@sgi.com		rpw3@pei.com
Silicon Graphics, Inc.		(415)335-1673		Protocol Engines, Inc.
2011 N. Shoreline Blvd.
Mountain View, CA  94039-7311