[comp.arch] big memories, who needs >~1G

daniel@thumper.UUCP (03/17/87)

Since they haven't spoken for themselves, I'll try.  My information is based
on a vague recollection of one talk given long ago.  Caveat, caveat . . .

There is a project at Princeton called the Massive Memory Machine.  Their goal
is to build a machine in the near future that has something like a terabyte
of core. I may be off by a couple of orders of magnitude, but you get the idea.
They had a big viewgraph with LOTS of 0's.

It is a one-level store taken to the extreme.
The object of the game is to ALWAYS keep EVERYTHING in core.
Obviously one has to worry about what happens when the lights go out.
Their answer is (I think) to dual-port the memory and run disks that
constantly back up the core.  They use similar tricks to load all that core
quickly at power-up.
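
Roughly, in software terms (a sketch only -- the real MMM dual-ports the
memory in hardware, and the file name and size below are invented): on a
POSIX box you get the same flavor by mapping a file MAP_SHARED, so the disk
shadows core, with an occasional msync() as the checkpoint.

    /* Sketch: "core" is a mapped region whose backing file plays the
     * role of the shadowing disks; contents survive power-down. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define CORE_BYTES (64UL * 1024 * 1024)   /* pretend "core" is 64 MB */

    int main(void)
    {
        int fd = open("core.img", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, CORE_BYTES) < 0) return 1;

        /* MAP_SHARED: every store to this region eventually reaches the
         * disk, and the next power-up sees whatever was last written. */
        char *core = mmap(NULL, CORE_BYTES, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (core == MAP_FAILED) return 1;

        strcpy(core, "everything lives in core; the disk only shadows it");

        /* Periodic checkpoint: force the shadow copy up to date. */
        msync(core, CORE_BYTES, MS_SYNC);
        printf("checkpointed: %s\n", core);

        munmap(core, CORE_BYTES);
        close(fd);
        return 0;
    }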

They had lots of examples of programs that can benefit from
lots of core.  For instance, database applications pay a heavy cost for
trying to keep things straight given disk latency.  If all data is kept
in core, the problem just goes away, which gets rid of lots of fancy
programming; programs run faster, have simpler operating characteristics, etc.
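
A toy C sketch of the contrast (record layout, file name, and table size all
invented): with the table on disk, every probe is a seek-and-read plus the
buffer-management machinery not shown here; with the table in core, a probe
is just an array index.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    struct record { long key; char payload[120]; };

    /* Disk-resident table: each probe is a system call, and on a buffer
     * cache miss it is a real disk latency. */
    int lookup_on_disk(int fd, long slot, struct record *out)
    {
        off_t where = (off_t)slot * (off_t)sizeof *out;
        return pread(fd, out, sizeof *out, where) == (ssize_t)sizeof *out;
    }

    /* Core-resident table: the whole problem reduces to indexing. */
    struct record *lookup_in_core(struct record *table, long slot)
    {
        return &table[slot];
    }

    int main(void)
    {
        int fd = open("table.dat", O_RDONLY);
        struct record r;
        if (fd >= 0 && lookup_on_disk(fd, 42, &r))
            printf("disk copy of key %ld\n", r.key);

        /* With enough memory, build the table once and forget the disk
         * exists until checkpoint time. */
        struct record *table = calloc(1000, sizeof *table);
        printf("in-core key %ld\n", lookup_in_core(table, 42)->key);
        free(table);
        if (fd >= 0) close(fd);
        return 0;
    }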

Someone who knows more about the project should fill in the details
and correct my misrepresentations.

Dan Nachbar
bellcore!daniel

marvit@hplabsc.UUCP (03/22/87)

<Regarding extremely large memories>

A recent private study of current and future users of supercomputers asked
a number of interesting questions.  Pertinent to this discussion --
what is the most important attribute for improving the supercomputing
environment?

The surprising answer?  Not more horsepower, nor better floating-point/vector
operations, nor lower power consumption.  Bigger memory topped the list
(with faster memory a distant second).  Would anyone who works with these monsters
(Eugene?) care to comment?

Peter Marvit
HP Labs
marvit@hplabs.hp.com

P.S. I know that for LISP work, large stretches of memory reduce the need for
time-consuming incremental garbage collections, and sure speed up the
stop-and-copy GC!  On my workstation, 8 MB of real memory was painful with
a 30+ MB virtual image.  With 48 MB of real memory and a 40+ MB virtual image,
I blink and the GC is done!  Hooray for brute force and lots of real
estate!
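
For the curious, the whole trick of a stop-and-copy collector (Cheney's
algorithm) fits on one screen.  The sketch below uses made-up two-field
cells, not any real Lisp's object layout; the point is that the work is
proportional to the live data, and when both semispaces sit in real memory,
none of it waits on the pager.

    /* Toy stop-and-copy (Cheney) collector: two-field cells, two
     * semispaces.  Cell layout, sizes, and names are invented here. */
    #include <stdio.h>
    #include <stddef.h>

    #define SEMI_CELLS 65536

    typedef struct cell {
        struct cell *car, *cdr;   /* children; car doubles as forwarding ptr */
        int forwarded;
    } Cell;

    static Cell space_a[SEMI_CELLS], space_b[SEMI_CELLS];
    static Cell *from_space = space_a, *to_space = space_b;
    static size_t next_free;      /* bump-allocation index in to_space */

    /* Copy one cell into to-space (once), leaving a forwarding pointer. */
    static Cell *forward(Cell *obj)
    {
        if (obj == NULL) return NULL;
        if (obj->forwarded) return obj->car;
        Cell *copy = &to_space[next_free++];
        *copy = *obj;
        obj->forwarded = 1;
        obj->car = copy;
        return copy;
    }

    /* Collect everything reachable from root.  Work is proportional to
     * the LIVE cells only, so bigger semispaces mean fewer collections
     * for the same allocation rate; if both semispaces are in real
     * memory, no load or store here ever touches the pager.  (A real
     * collector would also reset flags and resume allocation.) */
    static Cell *collect(Cell *root)
    {
        size_t scan = 0;
        next_free = 0;
        root = forward(root);
        while (scan < next_free) {          /* Cheney breadth-first scan */
            Cell *c = &to_space[scan++];
            c->car = forward(c->car);
            c->cdr = forward(c->cdr);
        }
        Cell *tmp = from_space;             /* flip the semispaces */
        from_space = to_space;
        to_space = tmp;
        return root;
    }

    int main(void)
    {
        /* Build a 3-cell list in from-space, then collect it. */
        from_space[0] = (Cell){ &from_space[1], NULL, 0 };
        from_space[1] = (Cell){ &from_space[2], NULL, 0 };
        from_space[2] = (Cell){ NULL, NULL, 0 };
        collect(&from_space[0]);
        printf("copied %zu live cells\n", next_free);
        return 0;
    }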

jjbaker@phoenix.UUCP (03/22/87)

In article <506@thumper.UUCP>, daniel@thumper.UUCP writes:
> There is a project at Princeton called the Massive Memory Machine.  Their goal
> is to build a machine in the near future that has something like a terabyte
> of core. I may be off by a couple of orders of magnitude, but you get the idea.

There is such a machine here, a VAX 11/750 with 128 M of real memory.
I don't know about future expansions.  The machine's name is Thrash,
because it doesn't ;-)  

I have a friend writing a compiler on it who routinely uses 50-60 MB.
He reports about 2-3 page faults per session; why he gets any at all
is hard to figure.

                             Thanbo
                             (...!princeton!phoenix!jjbaker)

eugene@pioneer.UUCP (03/25/87)

In article <50400001@hplabsc.UUCP> marvit@hplabsc.UUCP (Peter Marvit) writes:

>The surprising answer?  Not more horsepower, nor better floating-point/vector
>operations, nor lower power consumption.  Bigger memory topped the list
>(with faster memory a distant second).  Would anyone who works with these monsters
>(Eugene?) care to comment?

>Peter Marvit
>HP Labs

Okay.
Some computing applications grow to fill the available space.

Anyone lacking the foresight to see some of this (the 1980s version of
the 1946 "the world will only need about 30 computers" mentality), forget
them.....  They are not worth our time.

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  {hplabs,hao,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene

eugene@pioneer.UUCP (03/25/87)

In article <506@thumper.UUCP> daniel@thumper.UUCP writes:
>
>There is a project at Princeton called the Massive Memory Machine.  Their goal
>is to build a machine in the near future that has something like a terabyte
>of core. I may be off by a couple of orders of magnitude, but you get the idea.
>They had a big viewgraph with LOTS of 0's.
>
>Dan Nachbar
>bellcore!daniel

I think the project was for a 1 GB machine; a friend is on the review
panel.  This is not impressive when you consider that 8 or so Cray-2s
with 256 Macho Words (about 2,000 macho bytes) have been delivered.
They (MMM) basically wanted a souped-up VAX, as one previous poster noted.
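
(Back of the envelope, assuming those are 64-bit Cray words:
256 MWords x 8 bytes/word = 2048 MB -- about twice the MMM target.)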

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {hplabs,hao,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene