[comp.unix.questions] Implications of large memory systems

davidsen@steinmetz.ge.com (Wm. E. Davidsen Jr) (03/25/89)

In article <28819@bu-cs.BU.EDU> bzs@bu-cs.BU.EDU (Barry Shein) writes:

| 5. Fujitsu claims they will be producing 64Mbit memory chips in a
| couple of years. This means a 16Mbyte workstation, with the same chip
| count, becomes a 1GB workstation. Does anything need to be evolved to
| utilize this kind of change? Is it really sufficient to treat it as
| "more of the same"?

  Programs tend to fall into two categories: those needing more memory
than you have, and those which run easily in existing memory.

  AI, modeling, and certain database programs, the sort of things which
will expand to fill any finite memory, could make use of 1GB. Editors,
spreadsheets, communications, industrial control, graphics, compilation,
and CAD/CAM usually don't push the limits of current memory.

  Looking at accounting on some local workstations shows very few
programs which need more than 2MB of memory (even GNU emacs). If we are
going to make good use of all that memory we will need either processors
fast enough to drive many programs at once or something better to do
with the space. Of course I could mention that most people don't really
*need* that much memory, and wouldn't use it at all, much less
productively.

  Now that you're convinced that *you* need more memory, run vmstat for
a working day, using something like "vmstat 60 600 > /tmp/stat.log &" to
get a reading every minute (600 samples covers about ten hours). Look
at the free memory. If the machine is a workstation rather than being
used for timesharing (many schools try to put 32 users on an 8MB Sun),
the total memory in use is probably 4-12MB. Do most users need that in
a workstation? I don't, as long as I have access to a large machine for
those rare problems which can use that much memory.
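
  (If you want to boil that log down afterward, a couple of lines of awk
will do it. This is only a sketch, and the field number is a guess: the
"fre" column lands in a different place on different systems, so check
the header your vmstat prints and adjust the $5 to match.

	awk '$5 !~ /^[0-9]/ { next }                   # skip the header lines
	     n == 0 || $5 + 0 < min { min = $5 + 0 }   # smallest free value seen
	     { sum += $5; n++ }
	     END { if (n > 0) printf "%d samples, min free %d, avg free %d\n", n, min, sum/n }' /tmp/stat.log

The units are whatever your vmstat reports, pages on some systems and
Kbytes on others.)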

  If a workstation is really going to have 1GB memory something better
than "more of same" is going to be needed to justify the cost.
-- 
	bill davidsen		(wedu@crd.GE.COM)
  {uunet | philabs}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

bzs@bu-cs.BU.EDU (Barry Shein) (03/25/89)

>  If a workstation is really going to have 1GB memory something better
>than "more of same" is going to be needed to justify the cost.
>-- 
>	bill davidsen		(wedu@crd.GE.COM)

Well, to some extent I'm hoist by my own petard here; I've made
similar arguments myself.

But consider that 4 or 8MB workstations seemed just as out of sight a
very few years ago (remember, the entire physical address space of an
IBM370 architecture is/was [there are extensions now] 16MB!  And that
was the biggest mainframe architecture on the market.)

Did changing over from 64K or 256K chips to get the larger memories
significantly increase the price of the workstations? I don't think
so; quite the opposite. It's probably safe to assume that when
4Mb/16Mb/64Mb chips become available over the next (very) few years
they'll each be expensive for a short while and then drop down to the
cost level of what they're replacing.

So cost, other than replacing old equipment, will not be much of a
factor.

I don't think it will take too much to make you consider very large
memories on your workstation. Just mapping in all/most of /bin and
/usr/bin (etc) would probably make you want it if the guy next to you
has that.

It makes diskless/dataless workstations much more useful when you can
use, say, 1/2 GB of memory as a read/write-through cache for the
remote file system. Sure, cheap disks are going against that grain,
but many still like the advantages of remote file systems: no noise,
centralized backups and administration, etc.

And when large memories show up the disks will cease to be cheap;
things are only cheap when the technology curves go out of kilter.

The current memory rationing makes a 1GB disk for $5K seem very cheap.
When you have a 1GB main memory on your workstation you'll need two or
three of those just for swap space (the traditional rule of thumb being
two to three times physical memory ?!) and we'll be right back where we
started with our strategizing (unless something changes).

And don't talk to me about the bandwidth to get things in and out of
those memories. And backups? Argh!

As an analogy, who needs a 15MIPS workstation? Very few people, but
they're available now and are cost competitive with slower
workstations so, hey, that's what we all want. The others will wither
on the vine.

And trust the software tribes to eat up all available resources for
you over time (have you seen the 3D window managers? etc)

I do believe we will see a "Software Gap" in the near future, with
hardware vendors desperate for software applications which demand the
new generation of hardware they're producing, to induce people to
upgrade.

There's nothing more terrifying to hardware manufacturers than
satisfied customers.

	-Barry Shein, Software Tool & Die

barnett@crdgw1.crd.ge.com (Bruce Barnett) (03/27/89)

Another thing to consider about 1 GByte memory workstations is that
when the systems have more potential, the creative researcher finds
a way to use that power. They thought 64K was enough. Then 256K was
enough....

	Bitmapped workstations revolutionized the way we work with computers.
Suppose the workstation of the future had:

	Expert systems assisting you in creating new software, tapping
	into a knowledge base built from a million person-years of
	software experience.

	Hypertext encyclopedias available via USENET.

	Voice recognition systems, including personality traits,
	inflections, etc.

	Artificial personalities.

	Real-Time, Real Colour 3D Imaging systems.

When we worked with Punchcards, 64K was a lot.

	Video terminals		640K?

	Bitmapped graphics	6.4M?

	Expert Systems		64M?

	????			640M?

Give me enough memory, CPU power, tools, and time, and I would come up with
one or two ideas.

--
Bruce G. Barnett	<barnett@crdgw1.ge.com>  a.k.a. <barnett@[192.35.44.4]>
			uunet!steinmetz!barnett, <barnett@steinmetz.ge.com>

jfc@athena.mit.edu (John F Carr) (03/29/89)

In article <13433@steinmetz.ge.com> davidsen@crdos1.UUCP (bill davidsen) writes:

>If the machine is a
>workstation rather than being used for timesharing (many schools try to
>put 32 users on an 8MB Sun), the total memory in use is probably 4-12MB.
>Do most users need that in a workstation? 

Yes.  At the moment, I am using about 27 Meg of virtual memory split between
two workstations (4M & 6M RAM; 16 M swap).  Processes:

   Saber C (a C interpreter running under X):   ~7   M
   Emacs + subprocesses                          2.5 M
   2 large computational programs                2   M
   4 pairs of (xterm+csh)                        1.1 M
   X Server                                       .7 M
   rrn+Pnews                                      .5 M
   random small utilities, subshells

   (plus kernel & system processes)

That is the static load; I also run compilers, the program I am working on,
read mail, write files, etc...  
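
(If you want a figure like that for your own session, summing one of
the size columns out of ps gives a rough total. The field number here
is an assumption, SZ being field 5 on a BSD-style "ps aux"; check the
header your ps prints, and remember that shared text gets counted once
per process, so the sum runs high:

	ps aux | awk 'NR > 1 { sum += $5 } END { print sum }'

The result is in whatever units your ps uses for that column, usually
Kbytes.)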

I find I can't fit all the programs I want to run into 16MB.  I _don't_
have access to a large, fast machine for computation.  Instead, I use X
windows to run two workstations from a single display, and accept that
overhead. 

--
   John Carr             "When they turn the pages of history,
   jfc@Athena.mit.edu     When these days have passed long ago,
   bloom-beacon!          Will they read of us with sadness
   athena.mit.edu!jfc     For the seeds that we let grow?"  --Neil Peart

consult@osiris.UUCP (Unix Consultation Mailbox ) (03/30/89)

In article <13433@steinmetz.ge.com> davidsen@crdos1.UUCP (bill davidsen) writes:
> If the machine is a
>workstation rather than being used for timesharing (many schools try to
>put 32 users on an 8MB Sun), the total memory in use is probably 4-12MB.

We have a pilot system running on a number of single-user diskless Sun 3/50s
and I'll tell you exactly how much memory is in use on each of those
workstations: the entire 4Mb.  We had to double the size of all the server
swap partitions just to keep the systems running.  And even after taking the
-g's and -gx's out of all the makefiles, *and* stripping all the executables,
it's still Page City.
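
For reference, that cleanup amounts to roughly the following (the file
name is made up); the symbol table lives in the a.out on disk rather
than in the running image, which may be part of why it didn't help the
paging much:

	cc -O -o prog prog.c	# build without -g or -gx
	strip prog		# remove the symbol table from the executable
	size prog		# text/data/bss totals, before and after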

>Do most users need that in a workstation? I don't, as long as I have
>access to a large machine for those rare problems which can use that
>much memory.

I never needed more than the 4Mb in a 3/50 myself.  Of course I was still
doing most of my work on the Pyramids, which helps a lot.  (They've all
got >= 16M main memory and hundreds of Mb swap.  Zippy!)


phil

dan@ccnysci.UUCP (Dan Schlitt) (03/31/89)

In article <68@crdgw1.crd.ge.com> barnett@crdgw1.crd.ge.com (Bruce Barnett) writes:
:Another thing to consider about 1 GByte memory workstations is that
:when the systems have more potential, the creative researcher finds
:a way to use that power. They thought 64K was enough. Then 256K was
:enough....
:
At the first usenix conference I attended there were questions from
the audience about how more than 1 Mbyte could be best put to use on a
unix system.  They got answers like "use it as a fast disk" :-)  Make
the memory available and people will find lots of good ways to use it.
And not always the ones that you think of beforehand.

-- 
Dan Schlitt                        Manager, Science Division Computer Facility
dan@ccnysci                        City College of New York
dan@ccnysci.bitnet                 New York, NY 10031
                                   (212)690-6868