[net.arch] Memory Law

bzs@bu-cs.UUCP (Barry Shein) (11/14/85)

Someone suggested putting a lot of memory on their VAX to help their load
here.  I noticed that raising our 750 to 8MB seemed not to help much.  As a
matter of fact, under some common conditions I suspect it might have made
things worse, although I haven't tried to measure it yet (i.e. the memory
management overhead has gone way up, page tables and so on, sort of making
promises you cannot keep because you are actually flat out of CPU, not memory.)

Sooooo....I have been trying to come up with a reasonable rule of thumb
for how much memory is too much (?!) It will go something like this:

	Don't buy more memory than your CPU can zero out in N seconds.

For example, take the 750 with 8MB running a little loop like this flat out:

		clrl	r1
	loop:
		clrl	(r1)+
		jbr	loop

I figure it would take, oh, wild guess, 20-30 seconds, maybe a little less,
to zero out all of memory.  Our 3081 with 24MB of memory, about 2-3 seconds
(dual processor, which is fair to count since both will be using the memory;
I figure 7.5 MIPS/processor, 15 MIPS total.)  A Cray-2 with 4 processors and
2GB of memory, I dunno, a total of 500 MIPS just running that loop?  So, 8-10
seconds.
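
If you want to play with the arithmetic, here is a rough C sketch of the
estimate.  The MIPS figures and the two-instructions-per-longword cost are
assumptions for illustration, and memory bandwidth is ignored entirely, which
is probably why it comes out faster than my guesses above:

	#include <stdio.h>

	/*
	 * Back-of-the-envelope version of the zeroing loop above: assume
	 * the loop costs 2 instructions (clrl plus branch) per 4-byte
	 * longword and that the CPU sustains the given MIPS rate.
	 */
	static double
	zero_time(double megabytes, double mips)
	{
		double longwords = megabytes * 1024.0 * 1024.0 / 4.0;

		return 2.0 * longwords / (mips * 1e6);	/* seconds */
	}

	int
	main(void)
	{
		printf("750,  8MB  @ 0.6 MIPS: %6.1f sec\n", zero_time(8.0, 0.6));
		printf("3081, 24MB @ 15  MIPS: %6.1f sec\n", zero_time(24.0, 15.0));
		return 0;
	}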

I think somewhere in there lies a rule; any good suggestions for what N is?
(I know, given enough memory you could use it as a RAM disk, but let's forget
that kind of usage for now and assume that at some point, particularly on a
time-sharing system, you would be better off telling a user to come back
later than committing more physical memory and taking on the overhead to
manage it.  Is that overhead linear?  I bet not completely.)

Don't flame my guesses about the times here (they don't look too good to me
either, but I think they are close enough to make the point.)  I'm just
curious whether there is a rule of thumb that could be useful.

	-Barry Shein, Boston University

Of course, if it is brilliant, it is "Shein's Rule of Memory", if it's
dumb I'll deny I ever sent this message :-)

eugene@ames.UUCP (Eugene Miya) (11/15/85)

Barry Shein's >8 MB VAX problem:

This is Ivan Sutherland's corollary to the von Neumann bottleneck.  He
published it in Scientific American's special issue on microelectronics:
if you have a processor with a big memory of N pieces, then at any instant
N-1 of those pieces are not being used.  For a multiprocessor with n CPUs,
where n << N, N-n idle pieces is not much better.

For parallel processors there is also Amdahl's law, which says that if you
throw infinite parallelism at a problem, the serial portion becomes the
bottleneck.
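
For reference, the usual form of Amdahl's law as a small C sketch (the 10%
serial fraction is made up purely for illustration):

	#include <stdio.h>

	/*
	 * Amdahl's law: with serial fraction s and n processors,
	 * speedup = 1 / (s + (1 - s) / n).  As n grows without bound
	 * the speedup is capped at 1/s -- the serial part wins.
	 */
	static double
	speedup(double s, double n)
	{
		return 1.0 / (s + (1.0 - s) / n);
	}

	int
	main(void)
	{
		printf("10%% serial, n = 4:    %5.2fx\n", speedup(0.10, 4.0));
		printf("10%% serial, n = 1024: %5.2fx\n", speedup(0.10, 1024.0));
		printf("10%% serial, limit:    %5.2fx\n", 1.0 / 0.10);
		return 0;
	}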

From the Rock of Ages Home for Retired Hackers:
--eugene miya
  NASA Ames Research Center
  {hplabs,ihnp4,dual,hao,decwrl,allegra}!ames!aurora!eugene
  emiya@ames-vmsb

cleary@calgary.UUCP (John Cleary) (11/16/85)

> .... I noticed that raising our 750 to 8MB seemed not to help much.
> As a matter of fact, under some common conditions I suspect it might have
> made it worse...
> 
> Sooooo....I have been trying to come up with a reasonable rule of thumb
> for how much memory is too much (?!) It will go something like this:
> 
> 	Don't buy more memory than your CPU can zero out in N seconds.

> (I know, given enough memory you could use it as a RAMDISK, but let's
> .....

I suspect any system you can speed up by pretending that part of RAM is a
disk is badly designed or badly tuned.  It is true on the Macintosh, where it
is a symptom of the naive memory (mis)management on that machine.
Surely tuning an OS by increasing its page size in proportion to memory
size will help with things like memory-management overheads.
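
A quick illustration of why (the sizes below are arbitrary): the number of
page frames the kernel must track, and hence the size of its core map and
page tables, grows as memory divided by page size, so scaling the page size
along with memory keeps that bookkeeping roughly constant:

	#include <stdio.h>

	/*
	 * Page frames to track = physical memory / page size.
	 * Growing the page size in step with memory leaves the
	 * bookkeeping the same size.
	 */
	int
	main(void)
	{
		long mem[]  = { 2, 8, 32, 128 };		/* megabytes */
		long page[] = { 1024, 4096, 16384, 65536 };	/* bytes */
		int i;

		for (i = 0; i < 4; i++)
			printf("%4ldMB / %5ld-byte pages = %5ld frames\n",
			       mem[i], page[i], mem[i] * 1024L * 1024L / page[i]);
		return 0;
	}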

The main reason for this type of problem seems to be the linguistic and
intellectual separation of disk files from other data structures, which is a
hangover from the days when machines had 8KB of RAM and 10MB of disk.

John G. Cleary, Dept. Computer Science, The University of Calgary,
2500 University Dr., N.W. Calgary, Alberta, CANADA T2N 1N4. Ph. (403)220-6087
Usenet: ...{ubc-vision,ihnp4}!alberta!calgary!cleary
        ...nrl-css!calgary!cleary
CRNET (Canadian Research Net): cleary@calgary
ARPA:  cleary.calgary.ubc@csnet-relay

daveb@rtech.UUCP (Dave Brower) (11/16/85)

> Sooooo....I have been trying to come up with a reasonable rule of thumb
> for how much memory is too much (?!) It will go something like this:
> 
> 	Don't buy more memory than your CPU can zero out in N seconds.

There's been some handwaving here that comfy values are between 5 and 8 Meg
per MIPS.  Any less and you're plainly memory-starved; above that you may get
diminishing returns.
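
Applying that handwave to the machines mentioned in this thread (the MIPS
ratings are rough guesses, not measurements):

	#include <stdio.h>

	/* The 5-8 Meg/MIPS handwave; MIPS ratings assumed for illustration. */
	int
	main(void)
	{
		struct { char *name; double mips; } m[] = {
			{ "VAX 750",          0.6  },
			{ "IBM 3081 (2 CPU)", 15.0 },
		};
		int i;

		for (i = 0; i < 2; i++)
			printf("%-18s %5.1f - %5.1f Meg\n",
			       m[i].name, 5.0 * m[i].mips, 8.0 * m[i].mips);
		return 0;
	}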

A maxi-memory opinion is put forth by Steve Wallach in this month's
Unix Review.   He's talking about 100M systems:

   "Do you really have to have all that memory?  Yes, even for UNIX.
   ...  With all this physical memory, we can make the disk cache as
   big as we like, so when we run up against I/O benchmarks, we just
   define a disk cache large enough to keep us from ever having to
   go to disk.  ... Some people cry, 'Foul!  That's not a fair 
   benchmark because I can't do that on my VAX'--to which, of course,
   we respond, "Right."  Then we smile and don't say anything more."

The problem with this is that it doesn't represent very many real job
mixes.   On a 750 with 8 meg you're probably running out of gas in the
CPU, which is where Barry's N sec to clear comes in.  

You probably also need to take I/O bandwidth into account.  370-ish systems
seem to be able to handle more memory than the CPU speed alone would suggest
because of the bandwidth available through channel I/O.

Remember when 1k S100 cards were $400?

-- 
{amdahl|dual|sun|zehntel}\		|
{ucbvax|decvax}!mtxinu---->!rtech!daveb | "Something can always go wrong"
ihnp4!{phoenix|amdahl}___/		|

roy@phri.UUCP (Roy Smith) (11/18/85)

> > I have been trying to come up with a reasonable rule of thumb
> > for how much memory is too much

> On a 750 with 8 meg you're probably running out of gas in the CPU

	For what it's worth, our 750 came with 2 Meg and we were always
doing a lot of paging.  Now we have 4 Meg and almost never do any.  Typical
load is 3 emacs's, a big bib/tbl/neqn/nroff job, and a compile or some big
number cruncher (not to mention 6 people with lots of idle time).  Keep in
mind the 1 Meg 4.2bsd kernel, so we're talking 1 vs. 3 Meg of user memory.
-- 
Roy Smith <allegra!phri!roy>
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016

north@down.FUN (Stephen C North) (11/18/85)

why wouldn't you be delighted to get rid of the slowest part of the
memory hierarchy?  or do you like paging and reading disk files because
it gives the cpu plenty of time to run sendmail?  or do you run just 8
megs of disk, so more memory is superfluous?  or is the problem that
you haven't the vaguest idea how to intelligently manage 128 megabytes
of memory, and running 128 1-meg processes sounds so stupid that you'd
better just unplug all those extra boards and send them back before
anything worse happens?
-- 
Parturiunt montes, nascetur ridiculus mus!

john@frog.UUCP (John Woods, Software) (11/21/85)

> why wouldn't you be delighted to get rid of the slowest part of the
> memory hierarchy?  or do you like paging and reading disk files because
> it gives the cpu plenty of time to run sendmail?  or do you run just 8
> megs of disk, so more memory is superfluous?  or is the problem that
> you haven't the vaguest idea how to intelligently manage 128 megabytes
> of memory, and running 128 1-meg processes sounds so stupid that you'd
> better just unplug all those extra boards and send them back before
> anything worse happens?
> -- 
> Parturiunt montes, nascetur ridiculus mus!
> 
*Only* a mere 128 Meg?  X's Eagle drive is over 3 times that size, and I
have seen systems with 5 Eagles on them.

Someone else posted a reasonable explanation of multi-level memory
hierarchies, so I shall just summarize:  No amount of memory is *ever*
"enough", and fast memory costs more than slow memory.

--
John Woods, Charles River Data Systems, Framingham MA, (617) 626-1101
...!decvax!frog!john, ...!mit-eddie!jfw, jfw%mit-ccc@MIT-XX.ARPA

Out of my way, I'm a scientist!
	War of the Worlds

north@down.FUN (Stephen C North) (11/24/85)

john frog questions north's attention to 128 megs, asking in particular
about eagles.  there's some confusion here.

north is referring to thrash, a local 785 with 128M of ram.  to satisfy
mr. frog, yes, thrash has six ra81's with two eagles on order.

one may blithely assert that no amount of ram is ever enough; nonetheless,
some amounts are more enough than others.

	north/honey
-- 
Parturiunt montes, nascetur ridiculus mus!