[comp.arch] Gigabytes

lindsay@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay) (10/25/89)

In article <1418@crdos1.crd.ge.COM> davidsen@crdos1.UUCP (bill davidsen) writes:
>Interesting to note that some of the things we use a Cray2 for are just
>huge, not all that compute intensive. Many could be done on a micro, if
>it had 4-8GB memory. That seems to be a real limit of current
>implementations, that even though the chip can address GB of memory, the
>bus, backplane, or whatever does not give a practical way to add large
>memory.

One of the engaging things about massively parallel machines is that
they can have massive memories, and massive memory bandwidth, and yet
use only midpriced DRAMs. For example, the new NCUBE is designed to
scale to 8K nodes or fewer. With standard-sized nodes (4 MB), that's
32 GB. Every memory cycle transfers 256 K bits: don't Eurocards have
an optional bus-extension connector for that? :-)
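(A back-of-the-envelope in C, just to spell out where those numbers
come from - the 32-bit memory path per node is my assumption, not
something out of the NCUBE spec:)

	#include <stdio.h>

	int main(void)
	{
	    long nodes          = 8192L;   /* 8K nodes                  */
	    long mb_per_node    = 4L;      /* 4 MB of DRAM per node     */
	    long bits_per_cycle = 32L;     /* assumed width per node    */

	    printf("total memory: %ld MB = %ld GB\n",
	           nodes * mb_per_node, (nodes * mb_per_node) / 1024L);
	    printf("bits moved per memory cycle: %ld K bits\n",
	           (nodes * bits_per_cycle) / 1024L);
	    return 0;
	}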

Big memories may turn out to be useful in and of themselves.  The
group at Sandia that won the Gordon Bell Award - the people with the
1,000 X speedup - reported an interesting wrinkle.  They had a
program described as: Laplace with Dirichlet boundary conditions
using Green's function. (If you want that explained, sorry, ask
someone else.) They reduced the problem to a linear superposition,
and then as the last step, they did a matrix multiply to sum the
answers. This took 128 X as much memory as "usual" (256 MB instead
of 2 MB), but made the problem 300 X smaller in terms of the FLOPs
required.
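
Roughly, the trade looks like this - and this is only my reading of
it, sketched in C, with the sizes and layout entirely made up, not
taken from the Sandia code: tabulate, once, the response at each grid
point to a unit value at each boundary point, and then any particular
boundary data is handled by one matrix-vector multiply instead of a
fresh solve.

	#define NGRID   1024            /* interior points  -- made up  */
	#define NBOUND   256            /* boundary points  -- made up  */

	/* G[i][j] = precomputed solution at interior point i for a unit
	   value at boundary point j.  Filling this table is the one-time,
	   memory-hungry step. */
	double G[NGRID][NBOUND];

	/* Last step: superpose.  For boundary data b, the answer u is
	   just u = G * b -- a matrix multiply, no new PDE solve. */
	void superpose(const double b[], double u[])
	{
	    int i, j;
	    double sum;

	    for (i = 0; i < NGRID; i++) {
	        sum = 0.0;
	        for (j = 0; j < NBOUND; j++)
	            sum += G[i][j] * b[j];
	        u[i] = sum;
	    }
	}

The table G is where the extra memory goes; the superposition at the
end is where the FLOPs get saved.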

OS people have learned many lessons about how memory can be a tool.
I wonder, how many lessons will the supercomputer applications people
learn?
-- 
Don		D.C.Lindsay 	Carnegie Mellon Computer Science