[net.arch] paging and loading: more from D

aglew@ccvaxa.UUCP (09/28/86)

> [Rex Ballard]:
> Think about some of the "tails" that come with most well-written
> applications.
> 
> Argument parsing - do it at start up, probably not used again.
> Buffer initialization - you might not need this more than once.
> File opening - one of those things you don't want to do too often.
> Error handling - lots of messages you hope you'll never use.
> State machines - typically only one choice executed at a time.
> Nested subroutines - Only the "active path" is needed.
> Default linked libraries - Do you always use the floating point
> 	when you call printf()?

Think about how easy it is to recognize at least some of these tails.

Argument parsing - it is obvious when you've executed a bit of code and are
    never going to use it again, in a straight-line init;work;cleanup sort of
    program. The automatic overlay systems that were in vogue before virtual
    memory could do this.
Buffer initialization, File opening - ditto, if the program does a bunch of
    opens and then processes; harder to recognize for something like an
    editor, where your file manipulations are spread out over time.
Error handling - harder to recognize. Standard routines like perror() can
    be marked `rarely used', and so not always pulled in on swaps, but
    otherwise this would require explicit marking by the programmer (see the
    sketch after this list), or accumulation of profiling information.
State machines, Nested subroutines - once again, easy to recognize usage
    patterns because they are inherent in the code. Now, everybody has his
    own preferred way of implementing state machines, so automatic overlay
    generation would be very application- and programming-style-specific;
    on the other hand, the language contains enough information to indicate
    the `active path' for subroutines.
Default linked libraries - that's more printf's problem than anyone else's.
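
Here is a minimal C sketch of the `run-once tail plus cold error path' shape
described above. It assumes a modern compiler: __attribute__((cold)) is a
GCC/Clang-ism that did not exist in 1986, and die() is a made-up helper; read
them only as stand-ins for whatever explicit marking or profiling data an
overlay or paging system would actually consume.

    #include <stdio.h>
    #include <stdlib.h>

    /* Rarely-executed tail: mark it cold so it need not share hot
     * pages with the main loop. */
    static void die(const char *msg) __attribute__((cold));

    static void die(const char *msg)
    {
        perror(msg);              /* error text we hope never to page in */
        exit(1);
    }

    int main(int argc, char **argv)
    {
        /* init: argument parsing and file opening -- executed once,
         * then never touched again in a straight-line program. */
        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "r");
        if (f == NULL)
            die(argv[1]);

        /* work: the only part whose pages stay hot. */
        int c;
        long n = 0;
        while ((c = getc(f)) != EOF)
            n++;

        /* cleanup: another run-once tail. */
        fclose(f);
        printf("%ld bytes\n", n);
        return 0;
    }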

IE. you can recognize most of the patterns that RB described, and generate
overlays for them automatically. But the same techniques can be used in
combination with virtual memory and a vadvise() call (a sketch follows below),
giving you the static advantage of predictive prepaging with the dynamic
advantages of virtual memory. And it's up to you to decide whether the
slowdown from address translation bothers you.
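
For the vadvise() side of that, here is a small sketch. I'm only confident of
the modern descendant, madvise(), so that is what it uses; 4BSD's vadvise()
took a single behaviour argument instead, and the file name here is just a
placeholder.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("big.dat", O_RDONLY);       /* placeholder name */
        if (fd < 0) { perror("big.dat"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Tell the pager what we know statically: one sequential sweep,
         * so read ahead aggressively and drop pages behind us. That is
         * the predictive half; the pager still does the dynamic half. */
        madvise(p, st.st_size, MADV_SEQUENTIAL);

        long sum = 0;
        for (off_t i = 0; i < st.st_size; i++)
            sum += p[i];

        munmap(p, st.st_size);
        close(fd);
        printf("checksum %ld\n", sum);
        return 0;
    }

MADV_WILLNEED on a known working set would be closer to outright prepaging.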

---

The discussion of virtual memory is wearing thin. We seem to be dwelling on
the following points:
    - Virtual memory sometimes implies a slowdown in the machine's basic
      cycle time. Sometimes it doesn't, if it can be overlapped with
      something else. If there is really no speed cost with virtual memory,
      then use it if you can afford it.
    - When you can choose between a fast non-virtual machine and a slightly
      slower virtual machine, system considerations become more important:
	- small-job, multiprocessing systems seem to favour virtual memory
	- single-job systems, big scientific monotasking systems (not
	  scientific multitasking systems), places that run only one program,
	  all the time, with well-known access patterns, may do better without
	  virtual memory, as long as they are willing to pay the costs of
	  tuning.

Is there some way that you can have your cake and eat it too?
    - vadvise() type calls at least give you some of the advantages of 
      good predictive overlaying. That's the software side.
    - Is it possible to do anything about the hardware cost of virtual memory?
      IE. provide a faster non-virtual mode, and a slower virtual mode?
      The machine that I'm working on now has mapped and unmapped modes,
      but the speed difference is negligible.
      I think that the problem is that the slowdown from virtual memory
      is a fraction of a cycle, say 10%, not a multiple. If it were a
      multiple, separate modes, or two different sets of instructions, would
      be possible; since the difference is a fraction, the only way to speed
      up in unmapped mode would be to change the clock frequency, creating
      all sorts of problems. (For example, if translation stretches a 100 ns
      cycle to 110 ns, unmapped code can only win by running the clock at
      100 ns; if translation cost a whole extra cycle, unmapped mode could
      simply skip that cycle.)
Just another krazy idea...
Just another krazy idea...

Andy "Krazy" Glew. Gould CSD-Urbana.    USEnet:  ihnp4!uiucdcs!ccvaxa!aglew
1101 E. University, Urbana, IL 61801    ARPAnet: aglew@gswd-vms