eugene@ames.UUCP (Eugene Miya) (08/03/84)
[]
The problem right now: in addition to the task decomposition problem
mentioned by others [Using many micros to replace a super computer],
is the sheer number of micros needed. Yes, right now supercomputers
are more cost effective; I have the privilege of running a Cray-X 12
under a shaky S-V.
The people at LLNL say a Cray-1 is equivalent to about 280 VAX-11/780s.
This figure is certainly disputable (problem dependent); others say 90
780s. How many 8086s equal a Cray? Now, we also have a Cray XMP 28;
it is only a little bigger, but we have practically doubled the power.
You can certainly counter that micros will be cost effective soon. I have
a reject chip from the Massively Parallel Processor which has 8 micros
on it. The MPP has 16,000 processors (rounded), and programming it is a
problem.
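For what it's worth, here is a back-of-envelope sketch of the numbers
game (plain C; the MFLOPS figures are my own illustrative assumptions,
not measurements, and they ignore the decomposition and communication
costs that are the real issue):

/* Rough estimate: how many micros "equal" one Cray?
 * All throughput figures below are assumed, problem-dependent numbers,
 * not benchmarks.
 */
#include <stdio.h>

int main(void)
{
    double cray_mflops  = 80.0;   /* assumed sustained Cray-1 rate  */
    double vax_mflops   = 0.3;    /* assumed VAX-11/780 rate        */
    double micro_mflops = 0.05;   /* assumed 8086-class micro rate  */

    printf("Cray-1 ~ %.0f VAX 780s\n", cray_mflops / vax_mflops);
    printf("Cray-1 ~ %.0f 8086-class micros (before any decomposition\n"
           "or communication overhead)\n", cray_mflops / micro_mflops);
    return 0;
}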
On the software decomposition problem: not much work has been done on
distributing programs [either explicitly or implicitly]. Many argue that
running Unix on a Cray is a waste of cycles due to keystroke interrupts
and the like. Okay. Then these people propose either a batch oriented
or a process server oriented model. Well, I have batch access to the Cray
via COS and RJE; I have to learn COS, a step back into the stone ages.
No one has really created an integrated model of process servers
in a "high" performance environment. I know about RIG and PARC's work
as well as others. LLNL is trying a system called LINCS/NLTSS, but
FORTRAN still represents a problem: do you program for the workstation or
for the supercomputer? We need distributed programming of utilities
like editors and debuggers which treat the net as a distributed whole
and not as a workstation plus a process server.
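To make the process server complaint concrete, here is a minimal sketch
of what a remote execution utility might look like under that model.
The host name, port number, and one-line protocol are hypothetical; this
is not LINCS/NLTSS, RIG, or anyone's real interface, just 4.2BSD sockets:

/* Sketch of the process-server model: a workstation utility ships a
 * command to a remote server instead of running it locally.
 * The host, port, and "command\n" protocol are made up for illustration.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>

int run_remote(const char *host, int port, const char *cmd)
{
    struct hostent *hp = gethostbyname(host);
    if (hp == NULL)
        return -1;

    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    memcpy(&addr.sin_addr, hp->h_addr_list[0], hp->h_length);

    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(s);
        return -1;
    }

    /* send the command, then copy whatever the server sends back */
    write(s, cmd, strlen(cmd));
    write(s, "\n", 1);

    char buf[512];
    ssize_t n;
    while ((n = read(s, buf, sizeof(buf))) > 0)
        write(1, buf, n);

    close(s);
    return 0;
}

int main(void)
{
    /* "cray-server" and port 7000 are placeholders, not real services */
    return run_remote("cray-server", 7000, "f77 -O big_model.f");
}
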
--eugene miya
NASA Ames Res. Ctr.
emiya@ames-vmsb.ARPA
{hplabs,hao,dual}!ames!aurora!eugene