[net.arch] Page size and the meaning of life

waters@oracle.DEC (Greg Waters, 225-4986, HLO2-1/J12) (10/20/85)

[Forgive me for commenting without having read the literature....  Everyone
else does!]

Rich Hammond, Bellcore, writes:
> You don't have to keep the page tables in dedicated memory (I assume you
> mean fast static RAM), the DEC VAX and the NS32000 family both have caches
> of frequently used page table entries and keep the rest in main memory.
> The thing I would hope that Motorola does better is allow a larger size
> page (VAX and NS32000 use 512 bytes/page).  As dynamic RAM gets less
> expensive per bit it makes sense to accept greater internal fragmentation
> in pages in return for smaller page tables.

I agree, memory is getting cheap.  So why complain about small pages?  The
VAX has small pages, and VMS (perhaps some enhanced UNIXes also) has put
them to good use!  Such creative use of page tables solves about half of
the problems discussed in recent net.arch postings.  To wit:

1.  If you'll accept fragmentation in pages because memory is cheap, then,
    say, a doubling in page table size should also not worry you.  After
    all, even the page tables are virtual.  And you don't need a huge
    translation lookaside buffer when the pages are smaller, because only
    large-array crunching programs can touch lots of pages at once.  Most
    programs would touch only slightly more 512B pages than 2KB pages within
    a very short period of time.

2.  I agree, page faulting is inefficient with 512 byte pages.  That's why
    a VAX OS shouldn't fault 512 byte pages.  The size of a page fault can
    be tuned in software to any multiple of 512 that you like.

3.  The page size determines protection and mapping granularity.  Am I wrong
    in thinking that the smaller the pages, the closer you are to having
    certain benefits of an object-based memory system?  In some applications,
    you may want lots of different memory regions with different protection.
    And the fine-grained mapping lets you share reasonably small data regions
    between processes.  I hear that UNIX doesn't do much with that capability,
    but the multiprocess data sharing support is there under VMS.  No sweat
    sharing read-only code, and sharing data with different protection for
    each process is easy too.

4.  I saw a lot of complaints in net.arch lately about powerful debugging
    features.  People wanted to trap reads of uninitialized data, trace writes
    to a variable to figure out who's trashing it, etc.  In real time systems,
    you do this with a logic analyzer whose trigger output interrupts the CPU.
    But for non-real-time program development, you do it with a debugger that
    knows how to use the page table.  When the pages are small (128 longwords
    per page in VAX), the performance is good enough for program development.
    The debugger can trap accesses to an entire page when the user has enabled
    uninitialized variable checking.  The debugger then checks what variable
    is being touched, and lets the program continue if it's not a watched
    variable or if it has already been written to.  For watchpoints, the
    debugger traps only write accesses to the page.
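To make point 4 concrete, here is a rough sketch of the trick in C.  (I'm
using Unix-style mprotect() and sigaction() calls purely as stand-ins to
illustrate the mechanism; a real VAX/VMS debugger goes through different
services, and every name below is my own invention.)

/* One-shot watchpoint via page protection -- an illustrative sketch,
 * not VMS code.  Write-protect the page holding the watched variable,
 * field the access violation, check the faulting address, then restore
 * write access so the program continues.  A real debugger would
 * single-step the faulting instruction and re-protect the page so the
 * watchpoint stays armed. */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static long  *watched;        /* the variable under watch            */
static void  *watched_page;   /* start of the page that contains it  */
static size_t page_size;

static void on_access_violation(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    if (si->si_addr == (void *)watched)
        write(2, "write to watched variable\n", 26);
    /* Re-enable writes so the faulting store can be retried. */
    mprotect(watched_page, page_size, PROT_READ | PROT_WRITE);
}

int main(void)
{
    page_size = (size_t)sysconf(_SC_PAGESIZE);

    /* Give the watched variable a page of its own. */
    if (posix_memalign(&watched_page, page_size, page_size) != 0)
        return 1;
    watched = (long *)watched_page;
    *watched = 0;

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_access_violation;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    mprotect(watched_page, page_size, PROT_READ);  /* arm the watchpoint */
    *watched = 42;      /* traps, the handler reports it, then it proceeds */
    printf("watched is now %ld\n", *watched);
    return 0;
}

With small pages, fewer unrelated variables share the watched page, so
fewer of these traps are spurious and the program runs closer to full
speed -- which is why small pages help here.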


Almost anything that you can do with a page table becomes a little more useful
when the pages are small.  So, can someone tell me why there's an overwhelming
demand for large pages?

				Greg Waters
				...decvax!decwrl!dec-rhea!dec-oracle!waters

brooks@lll-crg.ARpA (Eugene D. Brooks III) (10/21/85)

Would anyone care to comment on why we need virtual memory at all
with a 256 meg real memory being available in the near future?

I haven't seen a virtual memory system yet that would stand up
to one of my simulation programs when the program size exceeded
the physical memory size.  As soon as the system started paging
the performance was so out to lunch that you would have to wait
days for the program to complete and everyone would be on your
back for sending the machine to lunch.  Back when 1-4 meg was all you
had, paging might have been important; with 16 meg on my personal computer
I don't see the need for it.

dvadura@watdaisy.UUCP (Dennis Vadura) (10/22/85)

In article <931@lll-crg.ARpA> brooks@lll-crg.UUCP (Eugene D. Brooks III) writes:
>Would anyone care to comment on why we need virtual memory at all
>with a 256 meg real memory being available in the near future?
>
>I don't see the need for it.

One need for virtual memory is to be able to support relocatable
code which has non-relative addressing references.  A good example
is the code most C compilers generate: they tend to locate the program
constants at virtual address 0 and up, with access by direct addressing.

In a multitasking environment virtual memory can be used to protect tasks
from illegally accessing each other's data space.  Further, it allows tasks
to share, in a controlled manner, portions of their respective address spaces.

Perhaps the question to ask is: do we need disk paging?
With large memories becoming available, rolling pages out to disk may become
unnecessary, but the concept of virtual memory and its associated attributes
is probably still useful.

-- 
--------------------------------------------------------------------------------
Dennis Vadura, Computer Science Dept., University of Waterloo

UUCP:  {ihnp4|allegra|utzoo|utcsri}!watmath!watdaisy!dvadura
================================================================================

rcd@opus.UUCP (Dick Dunn) (10/23/85)

> Would anyone care to comment on why we need virtual memory at all
> with a 256 meg real memory being available in the near future?

First response:  How near?  1 Mbit chips are real but not quite big-time
commercial stuff yet (that means: not CHEAP yet), but suppose that they
are.  256 Mb = 256*8 = 2K of these chips, which is a fair space-heater in
any technology.  In larger machines, maybe yes; we're a few years away in
small machines.

Second response:  VM is a means for getting more use out of the memory
you've got.  Until that 256 Mb is almost free, there's a cost tradeoff to
be considered for how much memory you put in.  VM sets the "hard limit" of
a process address space independently of the actual physical memory on the
machine, so you don't have to go out and buy more memory to run a program
with a large address space--it just runs slower.  (Yes, in some cases it
runs intolerably slower.  If that happens, go buy more memory, obviously.)

Third response:  Decrease program startup time?  (Tentative.)  If you insist on
everything being in physical memory, you gotta load the whole program
before you start execution.  Might take a long time--the case of interest
is where a program has gobs of seldom-used code.  The counter to this
response is that if a program has poor locality of reference--which is
common during startup!--the VM paging behavior is essentially to load a
large part of the program but in random order, which can make it take
longer than loading all of it sequentially.

Fourth response:  Maybe VM is appropriate to a certain balance of process
size, memory cost and size, and backing store cost/speed.  You could argue
that larger machines are now outside the domain of that particular set of
tradeoffs.  Smaller machines are not.

> I haven't seen a virtual memory system yet that would stand up
> to one of my simulation programs when the program size exceeded
> the physical memory size.  As soon as the system started paging
> the performance was so out to lunch that you would have to wait
> days for the program to complete and everyone would be on your
> back for sending the machine to lunch...

This is a lesson that strong advocates of virtual memory seem to have to
keep learning (or rather, that we have to keep pounding at them):  There
are programs which should NOT be run as pageable.  The whole idea of paged
virtual memory is based on the assumption that you can keep the "working
set" of the process(es) of interest in physical memory.  Some processes
have very large working sets--or more correctly, they don't obey the
working-set model.  These need different treatment.  It bears repeating
occasionally that thrashing can happen with a single process (as illustrated
above); it doesn't have to come from process interactions.
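
Here's a toy illustration of that working-set cliff (the numbers and code
are made up, not modeled on any particular OS): simulate a program that
cycles through N pages with F page frames managed LRU.  With N <= F you
fault only while warming up; with N just one page past F, a cyclic
reference pattern faults on every single reference -- single-process
thrashing, no other processes required.

/* Toy working-set sketch (assumptions throughout): count page faults for
 * a process that touches pages 0..n-1 round-robin, with `frames` page
 * frames managed by LRU replacement. */
#include <stdio.h>

#define MAX_FRAMES 64

static long simulate(int n_pages, int frames, int passes)
{
    int  resident[MAX_FRAMES];   /* page number in each frame, -1 = free */
    long age[MAX_FRAMES];        /* last-use time, for LRU               */
    long faults = 0, now = 0;
    int  i, f;

    for (f = 0; f < frames; f++) { resident[f] = -1; age[f] = 0; }

    for (i = 0; i < passes * n_pages; i++) {
        int page = i % n_pages, hit = -1, victim = 0;
        now++;
        for (f = 0; f < frames; f++)
            if (resident[f] == page) { hit = f; break; }
        if (hit >= 0) { age[hit] = now; continue; }
        faults++;
        for (f = 1; f < frames; f++)          /* pick the LRU victim */
            if (age[f] < age[victim]) victim = f;
        resident[victim] = page;
        age[victim] = now;
    }
    return faults;
}

int main(void)
{
    /* 32 frames of real memory; working sets of 32 and 33 pages. */
    printf("32 pages, 32 frames: %ld faults\n", simulate(32, 32, 100));
    printf("33 pages, 32 frames: %ld faults\n", simulate(33, 32, 100));
    return 0;
}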
-- 
Dick Dunn	{hao,ucbvax,allegra}!nbires!rcd		(303)444-5710 x3086
   ...Simpler is better.

brooks@lll-crg.ARpA (Eugene D. Brooks III) (10/24/85)

>Perhaps the question to ask is: do we need disk paging?
>With large memories becoming available, rolling pages out to disk may become
>unnecessary, but the concept of virtual memory and its associated attributes
>is probably still useful.
I'm sorry I was not precise enough.  The question was meant to be: do we need
disk paging?  The much needed firewall protection and address space sharing
for programs in a multiprocessor can be provided by a simple {base,limit}
segmentation scheme.  One of course needs several sets of such registers
to establish the several segments, code, static data, stack, shared static
data, ... that one needs in a program.  Do we really need the page oriented
virtual memory systems that occur in today's micros and minicomputers?  If
we have more than enough physical memory, do we need the overhead associated
with the page mapping hardware?  It is difficult to make such hardware operate
at supercomputer speeds, and it poses severe difficulties for non bus oriented
architectures (large N multiprocessors).
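
For concreteness, the kind of translation I have in mind is trivial; here
is a toy model in C (the segment names and sizes are made up, it is not
any real machine): every reference is one add against a base register and
one compare against a limit register, with no page tables or translation
buffer anywhere.

/* Toy model of {base,limit} segmentation -- made-up names and sizes.
 * Each virtual address names a segment and an offset; the "hardware"
 * adds the base and traps if the offset exceeds the limit. */
#include <stdio.h>
#include <stdlib.h>

enum { SEG_CODE, SEG_DATA, SEG_STACK, SEG_SHARED, NSEGS };

struct seg_reg {
    unsigned long base;    /* physical start of the segment  */
    unsigned long limit;   /* length of the segment in bytes */
};

static struct seg_reg segs[NSEGS] = {
    [SEG_CODE]   = { 0x000000, 0x10000 },
    [SEG_DATA]   = { 0x010000, 0x40000 },
    [SEG_STACK]  = { 0x050000, 0x08000 },
    [SEG_SHARED] = { 0x058000, 0x20000 },
};

/* One add and one compare per reference. */
static unsigned long translate(int seg, unsigned long offset)
{
    if (seg < 0 || seg >= NSEGS || offset >= segs[seg].limit) {
        fprintf(stderr, "segmentation violation: seg %d offset %#lx\n",
                seg, offset);
        exit(1);
    }
    return segs[seg].base + offset;
}

int main(void)
{
    printf("data+0x1234  -> physical %#lx\n", translate(SEG_DATA, 0x1234));
    printf("stack+0x7fff -> physical %#lx\n", translate(SEG_STACK, 0x7fff));
    printf("stack+0x8000 -> ");
    translate(SEG_STACK, 0x8000);          /* out of bounds: traps */
    return 0;
}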

jww@sdcsvax.UUCP (Joel West) (10/24/85)

In article <926@decwrl.UUCP>, waters@oracle.DEC (Greg Waters, 225-4986, HLO2-1/J12) writes:
> 2.  I agree, page faulting is inefficient with 512 byte pages.  That's why
>     a VAX OS shouldn't fault 512 byte pages.  The size of a page fault can
>     be tuned in software to any multiple of 512 that you like.

VAX/VMS has such a parameter, clustersize.  It appears to be typically
11 pages.  If VMS is using 11, you'd better hope your program
also clusters at 11, or you get some really nasty fragmentation/page
fault performance hassles.


	Joel West	CACI, Inc. - Federal (c/o UC San Diego)
	{ucbvax,decvax,ihnp4}!sdcsvax!jww
	jww@SDCSVAX.ARPA

jnw@mcnc.UUCP (John White) (10/25/85)

> > Would anyone care to comment on why we need virtual memory at all
> > with a 256 meg real memory being available in the near future?
> 
> First response:  How near?  1 Mbit chips are real but not quite big-time
> commercial stuff yet (that means: not CHEAP yet), but suppose that they
> are.  256 Mb = 256*8 = 2K of these chips, which is a fair space-heater in
> any technology.  In larger machines, maybe yes; we're a few years away in
> small machines.

The 1Mb chips will probably use about as much power as today's 256k chips.
According to some specs I have lying around, 256k chips take 70 mA grinding
and 4.5 mA idle.  If you have a 32-bit processor, you will have 32 chips
grinding and 2048-32=2016 chips idle.  This gives 11.3 amps, or about 57
watts at 5 volts.  I would hate to have to heat my house with that! :-)
Of course, you will need more power than this because of refresh, parity,
bus drivers, etc.  I expect 100 watts would do it, though.
As for "how near?", with 4 jram cards you can put 8 Mbytes in a PC.
When the 4Mbit chips come out in a couple of years, 128 Mbytes will
fit in a PC.  This is enough to avoid demand paging on a single-user system
for most applications.

> ... VM sets the "hard limit" of
> a process address space independently of the actual physical memory on the
> machine, ...

Main memory of a given size will not cost much more than a paging disk of
the same size in a few years. (At least compared to system cost).
Then, there will be no advantage to having a limit set by a disk rather
than by main memory. Main memory is much faster than disk, and complex
demand paging hardware will not be needed.

Of course, if you replace the paging disk with main memory and main memory
with cash ...

-John N. White {jnw@mcnc, jnw@duke}

franka@mmintl.UUCP (Frank Adams) (10/26/85)

In article <146@opus.UUCP> rcd@opus.UUCP (Dick Dunn) writes:
>> Would anyone care to comment on why we need virtual memory at all
>> with a 256 meg real memory being available in the near future?
>
>Second response:  VM is a means for getting more use out of the memory
>you've got.  Until that 256 Mb is almost free, there's a cost tradeoff to
>be considered for how much memory you put in.  VM sets the "hard limit" of
>a process address space independently of the actual physical memory on the
>machine, so you don't have to go out and buy more memory to run a program
>with a large address space--it just runs slower.  (Yes, in some cases it
>runs intolerably slower.  If that happens, go buy more memory, obviously.)

What leads either of you to believe that 256M will be enough to run your
programs?  Memory used by programs expands to use the space available.
There was a time, not so long ago, when 256K was a lot of memory, and
people didn't understand how any program could use more than 16M.  If
memory becomes sufficiently cheap, there are time/space tradeoffs which
can be made to use large blocks of it.

Or, for a second response, if memory becomes cheap enough, what do you
need *disks* for?  You will need a hardware solution to preserve memory
in the event of system/power crashes, of course.

Frank Adams                           ihpn4!philabs!pwa-b!mmintl!franka
Multimate International    52 Oakland Ave North    E. Hartford, CT 06108

omondi@unc.UUCP (Amos Omondi) (10/27/85)

> >Perhaps the question to ask is: do we need disk paging?
> >With large memories becoming available, rolling pages out to disk may become
> >unnecessary, but the concept of virtual memory and its associated attributes
> >is probably still useful.
> I'm sorry I was not precise enough.  The question was meant to be: do we need
> disk paging?  The much needed firewall protection and address space sharing
> for programs in a multiprocessor can be provided by a simple {base,limit}
> segmentation scheme.  One of course needs several sets of such registers
> to establish the several segments, code, static data, stack, shared static
> data, ... that one needs in a program.  Do we really need the page oriented
> virtual memory systems that occur in today's micros and minicomputers?  If
> we have more than enough physical memory, do we need the overhead associated
> with the page mapping hardware?  It is difficult to make such hardware operate
> at supercomputer speeds, and it poses severe difficulties for non bus oriented
> architectures (large N multiprocessors).


One answer, and probably the only reasonable one, appeared in an earlier
article: the need to deal with storage allocation, and specifically with
external fragmentation.

I'm not sure I agree with the speed argument.  If you have base-limit
registers then you still have to do some checks on the validity of the
virtual address; this takes no more time than on a segmented-paged
system, since in the latter it is usual to do checks on all the fields
of the virtual address in parallel.  As to the supercomputer bit, the
Cyber 205, a supercomputer in every sense of the word, implements
virtual store and so far its users seem to be quite happy with its
performance.  Of course for disc transfers they have a very large
"super-page" for efficiency ...

I never heard anyone say they had "enough" physical memory!
Everyone always seems to want more.

I really don't think you'll get a "satisfying" answer.  In spite of
the fact that paging has been around for a while, it is still not
clear that it is the best thing to have, and there is no doubt that
more research needs to be done.

henry@utzoo.UUCP (Henry Spencer) (10/27/85)

> ...  The much needed firewall protection and address space sharing
> for programs in a multiprocessor can be provided by a simple {base,limit}
> segmentation scheme.  One of course needs several sets of such registers...

There are a couple of very useful tricks one can pull with paged systems
that cannot be done with base-limit schemes.  For one thing, it is possible
to enlarge a process's stack without having to move the whole thing around
in memory (scatter allocation).  For another, it is possible to do a much
more efficient implementation of fork() using copy-on-write techniques.
Neither of these matters too much for small processes, but they start to
be major considerations for really big ones.
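
As a toy model of the copy-on-write trick (my own sketch, not how any
particular kernel implements it): fork just duplicates the page table,
write-protects both copies, and bumps a reference count; a page is copied
only when somebody actually writes it while it is still shared.

/* Toy copy-on-write model (illustration only): page tables are arrays of
 * pointers into a pool of reference-counted frames.  "fork" shares frames
 * read-only; "write" copies a frame only when it is dirtied while shared. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NPAGES 4
#define PAGESZ 512

struct frame { int refs; unsigned char data[PAGESZ]; };
struct pte   { struct frame *f; int writable; };
struct proc  { struct pte pt[NPAGES]; };

static struct frame *new_frame(void)
{
    struct frame *f = calloc(1, sizeof *f);
    f->refs = 1;
    return f;
}

/* fork: copy the page table, share every frame, write-protect both sides. */
static struct proc cow_fork(struct proc *parent)
{
    struct proc child = *parent;
    for (int i = 0; i < NPAGES; i++) {
        parent->pt[i].f->refs++;
        parent->pt[i].writable = 0;
        child.pt[i].writable = 0;
    }
    return child;
}

/* write: on a protection fault to a shared page, copy just that page. */
static void cow_write(struct proc *p, int page, int off, unsigned char v)
{
    struct pte *pte = &p->pt[page];
    if (!pte->writable) {                     /* "protection fault"      */
        if (pte->f->refs > 1) {               /* still shared: copy once */
            struct frame *copy = new_frame();
            memcpy(copy->data, pte->f->data, PAGESZ);
            pte->f->refs--;
            pte->f = copy;
        }
        pte->writable = 1;
    }
    pte->f->data[off] = v;
}

int main(void)
{
    struct proc parent = {0};
    for (int i = 0; i < NPAGES; i++)
        parent.pt[i] = (struct pte){ new_frame(), 1 };

    struct proc child = cow_fork(&parent);
    cow_write(&child, 2, 0, 7);               /* only page 2 gets copied */

    printf("page 2: parent=%d child=%d (frames %s)\n",
           parent.pt[2].f->data[0], child.pt[2].f->data[0],
           parent.pt[2].f == child.pt[2].f ? "shared" : "split");
    printf("page 0 frames %s\n",
           parent.pt[0].f == child.pt[0].f ? "shared" : "split");
    return 0;
}

For a big process that execs right after forking, almost nothing ever gets
copied, which is where the win comes from.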

Note that it is possible to overlap page-translation time with memory-
access time, as on the Celerity C1200, so that very little speed penalty is
incurred.  Generalizing this to supercomputers with 50-ns memory is not
so straightforward, admittedly.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

omondi@unc.UUCP (Amos Omondi) (10/27/85)

Why virtual memory ?  Isn't it obvious that the answer is 41 ?

omondi@unc.UUCP (Amos Omondi) (10/27/85)

Perhaps we really ought to do away with virtual store altogether.
It was after all a retrograde step in computer architecture; before
someone concocted it, every decent programmer knew the latency time,
rotational delay, etc. of his backing store.  Armed with this
knowledge, and knowing the flow of control in the program,
one would carry out a few calculations to determine the best
locations (on drum) for the various parts of the program in
order to achieve the best overlay scheme.  Small wonder the
software cost has gone up so much ...

gdmr@cstvax.UUCP (George D M Ross) (10/29/85)

If the operating system allows you to modify page protection, catch access
violations and define the VM-to-disc mapping on a per-page basis, all from
a user program, then it is possible to do quite a nifty implementation of
differential files (and such-like things).  You need a reasonable page size
to make it work sensibly; a {base, limit}-type segment is pretty useless
from this point of view.

(VMS will let you do all the necessary, BTW.)
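
Sketched in Unix/POSIX terms rather than VMS system services (the calls
below are stand-ins and the sizes are made up), the core of the trick is
just first-write tracking: write-protect the region, note which pages take
an access violation, and re-enable writing on those pages.  The recorded
set is exactly what the differential file has to save.

/* First-write tracking via page protection -- a sketch, not VMS code. */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define NPAGES 16

static unsigned char *region;
static size_t pgsz;
static int dirty[NPAGES];               /* pages written since last save */

static void on_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    size_t page = (size_t)((unsigned char *)si->si_addr - region) / pgsz;
    dirty[page] = 1;                                   /* remember it...   */
    mprotect(region + page * pgsz, pgsz,               /* ...then allow it */
             PROT_READ | PROT_WRITE);
}

int main(void)
{
    pgsz = (size_t)sysconf(_SC_PAGESIZE);
    if (posix_memalign((void **)&region, pgsz, NPAGES * pgsz) != 0)
        return 1;
    memset(region, 0, NPAGES * pgsz);

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_fault;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    mprotect(region, NPAGES * pgsz, PROT_READ);        /* arm every page */

    region[3 * pgsz] = 1;                   /* touch pages 3 and 9 only */
    region[9 * pgsz + 100] = 2;

    printf("pages the differential file must record:");
    for (int i = 0; i < NPAGES; i++)
        if (dirty[i]) printf(" %d", i);
    printf("\n");
    return 0;
}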

-- 
George D M Ross, Dept. of Computer Science, Univ. of Edinburgh
Phone: +44 31-667 1081 x2730
UUCP:  <UK>!ukc!cstvax!gdmr
JANET: gdmr@UK.AC.ed.cstvax

rcb@rti-sel.UUCP (Random) (10/29/85)

In article <1156@sdcsvax.UUCP> jww@sdcsvax.UUCP (Joel West) writes:
>In article <926@decwrl.UUCP>, waters@oracle.DEC (Greg Waters, 225-4986, HLO2-1/J12) writes:
>> 2.  I agree, page faulting is inefficient with 512 byte pages.  That's why
>>     a VAX OS shouldn't fault 512 byte pages.  The size of a page fault can
>>     be tuned in software to any multiple of 512 that you like.
>
>VAX/VMS has such a parameter, clustersize.  It appears to be typically
>11 pages.  If VMS is using 11, you'd better hope your program
>also clusters at 11, or you get some really nasty fragmentation/page
>fault performance hassles.
>

11????? 11!!!!!! I think your machine is not in very good tune.  By the
way, clustersize is the blocking factor on the disks.  Page fault cluster
default (pfcdefault) is the number of pages read per fault, and the sysgen
default is 64 (32 KB per fault).

-- 
					Random
					Research Triangle Institute
					...!mcnc!rti-sel!rcb

omondi@unc.UUCP (Amos Omondi) (10/29/85)

This is in answer to mail received from one Mark Flynn at Washington U.:

1) That really was 41.  42 is the answer to a lot of things, not just
   page size.

2) Something truly worthy of discussion in net.arch: Did Deep Thought have
   virtual memory?  My tentative answer is YES; I calculate the number of
   registers needed for address translation to be about 2 trillion.

sambo@ukma.UUCP (Father of micro-ln) (10/29/85)

In article <406@unc.unc.UUCP> omondi@unc.UUCP (Amos Omondi) writes:
>As to the supercomputer bit, the
>Cyber 205, a supercomputer in every sense of the word, implements
>virtual store and so far its users seem to be quite happy with its
>performance.

We are not happy with the Cyber 205's performance.  After spending about
20 hours on the code for this one program, I managed to reduce the execution
time by 25%.  The Cyber 205 was still 5 times slower than the unoptimized
code for the Cray 1 out at Lawrence Livermore.  This is so slow that we
cannot use the Cyber for anything useful.  We do have some time on the
Cyber left this month, which I will burn up running an infinite loop, or
something like that.  I don't know if this has anything to do with the
Cyber having virtual memory.
--
Samuel A. Figueroa, Dept. of CS, Univ. of KY, Lexington, KY  40506-0027
ARPA: ukma!sambo<@ANL-MCS>, or sambo%ukma.uucp@anl-mcs.arpa,
      or even anlams!ukma!sambo@ucbvax.arpa
UUCP: {ucbvax,unmvax,boulder,oddjob}!anlams!ukma!sambo,
      or cbosgd!ukma!sambo

	"Micro-ln is great, if only people would start using it."

nick@inset.UUCP (Nick Stoughton) (10/29/85)

In article <407@unc.unc.UUCP> omondi@unc.UUCP (Amos Omondi) writes:
>
>Why virtual memory ?  Isn't it obvious that the answer is 41 ?

Errrrr ..... don't you mean 42??
(with thanx to Douglas Adams)

dik@zuring.UUCP (10/31/85)

> (Eugene D. Brooks III in lll-crg.931)
> I haven't seen a virtual memory system yet that would stand up
> to one of my simulation programs when the program size exceeded
> the physical memory size.
Do you mean program size or data size?  It makes quite a difference.

> (Amos Omondi in unc.406)
> ....................................As to the supercomputer bit, the
> Cyber 205, a supercomputer in every sense of the word, implements
> virtual store and so far its users seem to be quite happy with its
> performance. Of course for disc transfers they have a very large
> "super-page" for efficiency ...
Count me out: page size (large pages; see below) is 65536 64-bit words.
A page fault takes 0.5 seconds of real time!

> (Henry Spencer in utzoo.6086)
> Virtual memory has always meant some speed penalty, although clever design
> can minimize it.
Yeah.  But it is not CPU cycles that become the problem; it is real time.

> (Amos Omondi in unc.405)
> In taking the Cray 2 as an example, one should take historical, philosophical,
> etc. considerations into account. The CDC 6600, CDC 7600, CRAY 1, and
> CRAY 2 do not have virtual memory; and Seymour Cray was largely responsible
> for their designs. Other CDC machines, including the Cyber 200 series, which
> are in the Cray 1 - Cray 2 performance range, have virtual memory, as do
> several of the new Japanese supercomputers.
Yes they have; is it a feature or a bug?

Now my contribution to this discussion.
First, computers like the Cray and the CDC 200 series are intended for the
processing of data on a large scale.  Would they benefit from a VM system?
My opinions on this are ambivalent, so to clarify:
1.  For instruction space VM is very good because there is no need for
    overlays and their large family, and so it is for data space if
    data access is well behaved (note however the caveat below).
2.  For general data VM is not good; it is nice for the programmer to
    be able to write his solution as simply as possible, but when we
    look at CP vs IO tradeoffs this simple solution is not the best.

The question is: what is well-behaved data access?
Is a matrix multiplication well behaved?
One would think so; but on the CDC Cyber 205-611 (one pipe, 1 Mword (64
bits) of memory) the simplest implementation of a 1024*1024 matrix
multiply would require about 6 minutes of CP time versus 92 days of IO.
If we vectorize it according to the standards, the times improve to an
incredible 20 seconds CP and 2.5 hours IO.
There are better techniques; these will bring the IO down to 3 minutes.
Watch out, however: with 1023*1023 matrices the IO will drop back to
some 10 hours, unless your working set contains half of the machine's
real memory, in which case you will again have 3 minutes.
(Note: these times are based on the 8 Mbit/s disk transfer rate found
with 65536 64-bit-word pages; with the smaller pages of 2048 words
at our site, the transfer rate would drop.)
There are still better techniques available, but these imply explicitly
bypassing the VM system.  (With these the IO comes down to about
1 minute, but CP is increased.)
This holds not only for a matrix multiply but also for other problems
in numerical algebra.
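
For what it is worth, the loop structure behind those better techniques is
ordinary blocking; here is a plain C sketch (generic, nothing
Cyber-specific, and the tile size is made up): work on sub-blocks small
enough that the three tiles in use fit in real memory together, so each
page of the matrices is faulted in far fewer times.

/* Blocked (tiled) matrix multiply sketch: C += A*B, all square of order N,
 * processed in B*B tiles so the working set at any moment is three tiles
 * rather than whole rows and columns of pages. */
#include <stdio.h>
#include <stdlib.h>

#define N 1024
#define B 64                     /* tile order; tune to real memory */

static void matmul_blocked(const double *a, const double *b, double *c)
{
    for (int ii = 0; ii < N; ii += B)
    for (int jj = 0; jj < N; jj += B)
    for (int kk = 0; kk < N; kk += B)
        /* one tile each of C, A and B in the working set */
        for (int i = ii; i < ii + B; i++)
        for (int k = kk; k < kk + B; k++) {
            double aik = a[i * N + k];
            for (int j = jj; j < jj + B; j++)
                c[i * N + j] += aik * b[k * N + j];
        }
}

int main(void)
{
    double *a = calloc((size_t)N * N, sizeof *a);
    double *b = calloc((size_t)N * N, sizeof *b);
    double *c = calloc((size_t)N * N, sizeof *c);
    if (!a || !b || !c) return 1;
    matmul_blocked(a, b, c);
    printf("c[0] = %g\n", c[0]);
    free(a); free(b); free(c);
    return 0;
}

The tile size would of course have to be tuned to the real memory actually
available; that is the working-set point again.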

In conclusion:
1.  VM is very good for instruction space.
2.  VM should not be applied to data space; the programmer should
    do his own IO if the volume of data is large.  Otherwise he
    will optimize CP while ignoring IO.
-- 
dik t. winter, cwi, amsterdam, nederland
UUCP: {seismo|decvax|philabs}!mcvax!dik

omondi@unc.UUCP (Amos Omondi) (10/31/85)

Some of my attempts to put a little humour in net.arch seem
to have been misunderstood by people who sent me mail to
try to explain what virtual memory is.  I sort of had a vague
idea anyway...

Those who missed the point should try and find out what 42
stands for.

We are all entitled to laugh at our profession once in a
while; maybe at our colleagues too if we are willing to
stand the risk.

Phil: some of the subscribers to net.arch actually ENJOYED this.
Well, I suppose there is no accounting for taste...

When the discussion ends I'd really like to see a clear argument
for or against paging; I'll stick my neck out and say there is
no decisive one.  41 is as good as any!

omondi@unc.UUCP (Amos Omondi) (10/31/85)

> In article <407@unc.unc.UUCP> omondi@unc.UUCP (Amos Omondi) writes:
> >
> >Why virtual memory ?  Isn't it obvious that the answer is 41 ?
> 
> Errrrr ..... don't you mean 42??
> (with thanx to Douglas Adams)


No!  42 is the answer to a lot more things than are dreamt of in
net.arch.

omondi@unc.UUCP (Amos Omondi) (10/31/85)

This is my last word on this subject.  Four points:

1) I have received more mail from people (who object to 41 as
   an answer) trying to tell me what virtual memory is.  I KNOW
   what it is!  It's when one can't remember what they had for breakfast.

2) I still stand by my answer of 41; it is the only one which relates
   paging to the Meaning of Life.  If anyone still objects, I
   suggest they name a place and post their choice of weapons.

3) After all has been written on net.arch it still is a fact that
   the program behaviour in paged systems is not particularly
   well understood. Whether paging is useful or not depends on the
   particular system and its intended applications; number
   crunchers running batch programs do not really need it whereas
   multiple-user interactive systems do.

peter@graffiti.UUCP (Peter da Silva) (10/31/85)

> In article <407@unc.unc.UUCP> omondi@unc.UUCP (Amos Omondi) writes:
> >
> >Why virtual memory ?  Isn't it obvious that the answer is 41 ?
> 
> Errrrr ..... don't you mean 42??
> (with thanx to Douglas Adams)

You're both wrong. The correct answer to "Why Virtual Memory?" is "4.2".
-- 
Name: Peter da Silva
Graphic: `-_-'
UUCP: ...!shell!{graffiti,baylor}!peter
IAEF: ...!kitty!baylor!peter

jer@peora.UUCP (J. Eric Roskos) (11/01/85)

>> Would anyone care to comment on why we need virtual memory at all
>> with a 256 meg real memory being available in the near future?
>
> What leads either of you to believe that 256M will be enough to run your
> programs?  Memory used by programs expands to use the space available.

Not only this, but the number of users also expands.  One thing often
overlooked, I think, is that with a paging system, assuming adequate
locality of reference, you can have a large number of pages resident that
are actively in use by a large number of users, instead of having great
unused expanses of memory allocated to large programs at any one time,
keeping other users out of memory.
-- 
Shyy-Anzr:  J. Eric Roskos
UUCP: Ofc:  ..!{decvax,ucbvax,ihnp4}!vax135!petsd!peora!jer
     Home:  ..!{decvax,ucbvax,ihnp4}!vax135!petsd!peora!jerpc!jer
  US Mail:  MS 795; Perkin-Elmer SDC;
	    2486 Sand Lake Road, Orlando, FL 32809-7642

rogerh@bocklin.UUCP (11/03/85)

About the worth of virtual memory on a honka-honka computation engine:
seems to me that the real case for virtual memory is that it fails soft.
If you have enough real memory, then you can keep all your pages in-core
and VM costs can be minimized by clever address translation; so you lose
what, 10%?  That's significant, but so is the advantage: with virtual 
memory, if you don't have enough real memory you take a gradual performance
hit.  With direct memory, you scrap the program and start over with some
pretty painful manual data-paging scheme.

Myself, I'm not clever enough to like mapping data to disk manually.

jbs@mit-eddie.UUCP (Jeff Siegal) (11/04/85)

In article <1764@peora.UUCP> jer@peora.UUCP (J. Eric Roskos) writes:
>Not only this, but the number of users also expands.  One thing often
>overlooked, I think, is that with a paging system, assuming adequate
>locality of reference, you can have a large number of pages resident that
>are actively in use by a large number of users, instead of having great
>unused expanses of memory allocated to large programs at any one time,
>keeping other users out of memory.

Before all of you single-user machine (workstation) fans start flaming
about the obsolescence of multiuser machines, replace the word "users"
above with the word "processes."  jer's point is just as valid that
way.

Jeff Siegal - MIT EECS (jbs@mit-eddie on the ____net)