[net.micro] NCUBE supercomputers

delp@huey.udel.EDU (Gary Delp) (07/03/86)

I was recently at a set of classes at the National Center for
Atmospheric Research held for supercomputer users.  One of the other
faculty members was John Palmer from NCUBE.  He presented information
on a very interesting realization of the hypercube architecture that
the NCUBE company has put together.

I have no connection with the company, but was fascinated by the
cost/performance ratio (as well as the absolute performance possible
with 1024 moderately powered processors in very close communication).

Matthew Hall is someone at Oak Ridge National Laboratory who is actually
using one of these beasties, and I enclose his message for your
edification.

I am sending this to a bizarre mix of lists (supercomputer -- the
machine qualifies; info-micro -- there is a 4-processor card (~4
VAX 780s?) available for the IBM AT; and info-vlsi -- the processor
is a very elegantly tailored piece of single-chip VLSI), as I think
the interest might be general.

If there is interest, I will try to find a way to get additional
information on the processor.  If you have experiences with the
machine, or the company, please let me know, and I will summarize to
interested parties.

------- Forwarded Message
Date:    Mon, 30 Jun 86 17:10:15 -0400 
From:    Matthew Hall <mwh@ornl-msr.ARPA>
Subject: ncube experience

... [thought] you might be able to distribute some of
our enthusiasm for using NCUBEs to potential users... 

      A number of multiprocessor computers with impressive nominal computing
speeds and relatively low price tags are now becoming commercially
available.  The question that must be on a lot of people's minds is:
are these computers just academic curiosities for computer science research,
or are they powerful machines that can really get the job done in demanding
scientific and engineering applications?

      We had a 64-node NCUBE computer delivered at ORNL last January, and have
been applying it to some problems in image processing.  One of the most
striking features about the machine is that it really isn't that hard to
use.  You have to decide how to split up your problem to run on many
processors, but then it is no more difficult to develop your program
than if you were using something like a VAX.  You use standard languages
(FORTRAN 77 in our case), and communication is done using function calls.
Each processor runs its own program (often the same program), and if there
are problems, you can either get some of the nodes to write debug messages
to your terminal, or else go in with the symbolic debugger.  The wall-clock
time for an edit/recompile/reload/execute cycle is about a minute for
a subroutine of about 200 statements.
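
[To give a flavor of the model Matthew describes, here is a minimal
sketch of what a node program might look like.  WHOAMI, NREAD, and
NWRITE are hypothetical stand-ins for the machine's actual communication
routines; their names and argument lists are illustrative assumptions,
not NCUBE's real interface.

      PROGRAM NODE
C     A minimal sketch of the model described above: every processor
C     runs this same program, and data moves between processors via
C     ordinary subroutine calls.  WHOAMI, NREAD, and NWRITE are
C     hypothetical stand-ins for the real communication library.
      INTEGER ME, NPROCS, I
      REAL ROW(1536), SUM
C     Find out which node this is and how many nodes exist.
      CALL WHOAMI(ME, NPROCS)
C     Receive this node's slice of the image from the host (node 0).
C     Assumed arguments: buffer, byte count, source node, message type.
      CALL NREAD(ROW, 6144, 0, 1)
C     Do the purely local part of the computation.
      SUM = 0.0
      DO 10 I = 1, 1536
         SUM = SUM + ROW(I)
   10 CONTINUE
C     Return the partial result to the host for combining.
      CALL NWRITE(SUM, 4, 0, 2)
      END

A corresponding host program would scatter the image across the nodes
and combine the partial results; the point is simply that node code
reads like ordinary FORTRAN plus a handful of message calls.]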

      The most impressive performance we have got to date is running a
program to analyse electrophoresis gels.  These usually take the form
of a 1536x1536 digitized image of lots of blobs that correspond to different
proteins.  We compared the performance of a morphological background filter
between ORNL's Cray XMP and our 64-node NCUBE.  The Cray XMP took 400 seconds
while the NCUBE took only 200, making the NCUBE something like 50 times
more cost effective.  What is more, it took less than a day to port the
code from the Cray XMP to the NCUBE.  This wasn't a benchmark... it
was a real piece of code that a biologist was absolutely delighted to be
able to run so cheaply.
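
[For the arithmetic behind that figure: the NCUBE finished in half the
time (400 s / 200 s = 2), so "50 times more cost effective" implies a
purchase-price ratio of roughly 50 / 2 = 25 to 1 in the NCUBE's favor.
That ratio is inferred from the claim itself, not a figure given in
Matthew's message.]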

      The NCUBE contains a lot of brand-new hardware and software, which is
part of the reason it is so useful and powerful.  On the other hand, there
have inevitably been some bugs in the operating system and compilers
(the hardware has been remarkably reliable).  What has been encouraging,
though, is the diligence of both NCUBE and CAINE, Farber & Gordon (the
compiler writers) in fixing bugs, and within a matter of months converging
to a stable and reliable system.

      From using our NCUBE multiprocessor, an underlying message has
become clear:  use of multiprocessors should not be restricted just to people
doing multiprocessor research.  If you have a computationally intensive
problem that you would like to run with an order of magnitude or two 
improvement in speed, economy, or resolution, then parallel computing
could be the answer.  Reliable and usable parallel computers are available
to do the job right now.

                             Matthew C. G. Hall
                             Center for Engineering Systems Advanced Research
                             Oak Ridge National Laboratory


------- End of Forwarded Message