[net.arch] page-up problem/question

rcd@opus.UUCP (Dick Dunn) (10/12/84)

By "page-up", I mean the process of getting the initial chunk of pages
loaded for a program to do useful work in a demand-paged system.  If you
have any good direct info on this, I'd like to see it and it might be of
sufficient general interest to post it.  If you have references, please
mail them to me and I'll try to chase them and summarize.

A little more on the nature of the problem:  When starting a program which
is demand-paged, the obvious (but naive) approach is to load the page
containing the starting address and begin execution.  From there, the
program will page-fault itself up to a reasonable set of useful pages.
However, the page-up happens in a somewhat haphazard fashion, particularly
with programs which have large collections of utility routines.  It's
somewhat as if the program were being swapped in nearly in its entirety,
except that the pages are being loaded in a rather random pattern.  This
can be waved away on a big system with fast disks, many users, and a fair
number of commonly-used programs with shared code space.  However, it hurts
a lot on a small, single-user system with a slow disk.  Obviously, if the
program is below a certain size threshold it's better to swap in the whole
thing and save the disk seek time.  Other than this, any thoughts?
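
To make that threshold idea concrete, here's a toy sketch in C.  Everything
in it is invented for illustration; the page size, the threshold value, and
the two loader routines are stand-ins, not code from any real kernel.

	#include <stdio.h>

	#define PAGESIZE	512L			/* assumed page size */
	#define PREPAGE_THRESHOLD (16L * PAGESIZE)	/* invented; tune per disk */

	/* Stubs standing in for the two loader paths. */
	static void read_whole_image(long size)
	{
		printf("swap in all %ld bytes with one sequential read\n", size);
	}

	static void map_for_demand_paging(long size)
	{
		printf("map %ld bytes invalid; fault pages in as touched\n", size);
	}

	/* Pick a startup strategy from the program's text size. */
	static void load_text(long textsize)
	{
		if (textsize <= PREPAGE_THRESHOLD)
			read_whole_image(textsize);	/* small: swap it all in */
		else
			map_for_demand_paging(textsize);/* large: fault it in */
	}

	int main(void)
	{
		load_text(4096L);	/* below threshold: whole swap wins */
		load_text(200000L);	/* above threshold: demand-page it */
		return 0;
	}

The right threshold is wherever one long sequential transfer stops losing
to a pile of per-page seeks, so it wants tuning for the particular disk.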
-- 
Dick Dunn	{hao,ucbvax,allegra}!nbires!rcd		(303)444-5710 x3086
   ...Relax...don't worry...have a homebrew.

jc@sdcsvax.UUCP (John Cornelius) (10/21/84)

The two extremes of the problem you articulate are:
	1)	bring in the initial startup page and page-fault everything
		else in

	2)	bring the entire text into memory (if possible) and allow
		pages to be replaced arbitrarily by the demand-paging system
		as other requirements for the space occur.

In the first case you get an arbitrary collection of pages scattered
throughout memory, and every time there's a page fault you pay the time
required to service it.  In the second case you pay on the front end by
bringing in everything at once, and then you pay again as small chunks
(pages) get evicted to satisfy the requirements of others.
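
To put rough numbers on the two extremes, here's a toy cost model in C.
The disk timings are invented for illustration, not measured on any real
machine: a demand-faulted page pays a seek plus a transfer each time, while
the whole-swap pays one seek and then streams the rest.

	#include <stdio.h>

	/* Invented timings for a small system with a slow disk. */
	#define SEEK_MS	30.0	/* assumed seek + rotation per random page */
	#define XFER_MS	 1.0	/* assumed transfer per 512-byte page */

	int main(void)
	{
		int n;

		for (n = 8; n <= 256; n *= 2) {
			double fault_in = n * (SEEK_MS + XFER_MS); /* extreme 1 */
			double swap_in  = SEEK_MS + n * XFER_MS;   /* extreme 2 */
			printf("%4d pages: fault-in %7.1f ms, whole-swap %7.1f ms\n",
			       n, fault_in, swap_in);
		}
		return 0;
	}

Note the model charges nothing for the pages you lose again later when the
whole-swapped text gets evicted, so it flatters the second extreme a bit.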

It's funny how some things never change. We still, after 30 years in this
business, have to trade off speed, money, and memory. 

The whole discussion becomes moot if one has adequate memory for all
requirements, but that only encourages more (Parkinsonian) demands on the
available resources.  One has to start with whatever is a given, use
whatever gimmicks one has to overcome the starting limitations, and then
go ahead and have a homebrew, because there isn't any best solution.

John Cornelius
Western Scientific