[net.arch] Page Size - Again

wallach@parsec.UUCP (01/22/84)

#N:parsec:32800003:000:2362
parsec!wallach    Jan 21 14:43:00 1984

page size - more

in choosing a page size, either for hardware or software reasons,
there are several parameters to weigh. Among them are: the size of
the logical address space, the amount of working set to be maintained
by the memory management unit, the block size of data that is
maintained on file storage, and the amount of main memory
fragmentation that one wishes to tolerate if one wants to support
small objects in one page.

the smaller the page size, the smaller the amount of fragmentation.
that's the good news. the bad news is the larger number of pagetable
entries, if one uses an indexed table structure for logical to
physical translation, or hash table entries if one uses the
translation process of the system/38. also the larger the number of
referenced/modified bits to be examined for page replacement
purposes.
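a back-of-the-envelope sketch in c makes the tradeoff concrete. the
model is the standard textbook one, not any particular machine: each
mapped object is charged its pagetable space plus half a page of
internal fragmentation, and the object size and entry size below are
assumptions for illustration only.

    #include <math.h>
    #include <stdio.h>

    /* overhead per mapped object: pagetable space (s/p entries of
     * e bytes each) plus expected internal fragmentation (half a page). */
    static double overhead(double s, double e, double p)
    {
        return s * e / p + p / 2.0;
    }

    int main(void)
    {
        double s = 128.0 * 1024;   /* assumed average object size       */
        double e = 4.0;            /* assumed bytes per pagetable entry */

        for (double p = 512; p <= 16384; p *= 2)
            printf("page %6.0f: overhead %7.0f bytes\n",
                   p, overhead(s, e, p));

        /* setting d(overhead)/dp = 0 gives p = sqrt(2*s*e) */
        printf("minimum near %.0f bytes\n", sqrt(2.0 * s * e));
        return 0;
    }

small pages win on fragmentation, big pages win on table space, and
the total overhead bottoms out in between.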


the smaller the page, the greater the likelihood that multi-level
indexed lookups are needed for translation, or the need to hash the
page fields.  a larger page size means that, for example, for the
32-bit logical address of the vax, a two-level lookup suffices in
which only a page-sized first-level table need be resident (i.e., a
4 kb page means the first-level lookup table is only one page long,
and each second-level lookup is one pagetable).  this is contrasted
to using a 512 byte page, which requires the resident first-level
lookup table to be larger than a page.
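the arithmetic behind that claim, as a short c sketch (4-byte entries
and page-sized second-level tables are assumed here):

    #include <stdio.h>

    /* resident first-level table size for a two-level scheme on a
     * 32-bit address, assuming 4-byte entries and second-level
     * tables exactly one page long.                                */
    static void geometry(unsigned long page)
    {
        const unsigned long pte = 4;
        int offset = 0, l2 = 0;

        for (unsigned long n = page; n > 1; n >>= 1)
            offset++;                      /* log2(page size)        */
        for (unsigned long n = page / pte; n > 1; n >>= 1)
            l2++;                          /* log2(entries per page) */

        int l1 = 32 - offset - l2;         /* bits left for level one */
        printf("page %5lu: first-level table = %lu bytes\n",
               page, (1UL << l1) * pte);
    }

    int main(void)
    {
        geometry(4096);   /* 4096 bytes: exactly one page      */
        geometry(512);    /* 262144 bytes: 512 times a page    */
        return 0;
    }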

some of these tradeoffs, as applied to disc allocation, are elegantly
presented in the 4.2 bsd file system design papers.

with the advent of high density rams (16 and 64k statics), the effect
of page size on mmu working set maintenance is becoming moot.

the item that most people miss with virtual memory, mmu design, etc.
is that when the mmu misses, one wants the translation to be as fast
as possible.  the same analysis that goes into effective data cache
access time is appropriate to mmu design.
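that analysis is the usual weighted average; the cycle counts below
are assumptions for illustration, not measurements of any machine:

    #include <stdio.h>

    int main(void)
    {
        double t_hit  = 1.0;   /* assumed cycles for a tlb hit          */
        double t_miss = 5.0;   /* assumed extra cycles for a table walk */
        double rates[] = { 0.90, 0.95, 0.99 };

        for (int i = 0; i < 3; i++) {
            /* same form as effective cache access time:
             *   t_eff = t_hit + miss_rate * miss_penalty */
            double t_eff = t_hit + (1.0 - rates[i]) * t_miss;
            printf("hit rate %.2f: %.2f effective cycles\n",
                   rates[i], t_eff);
        }
        return 0;
    }

a few points of hit rate move the effective time a lot, which is why
the miss path deserves as much attention as the hit path.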

Lastly, virtual memory is simply the exposure of a file system
abstraction to a running process.  the more the file system structure
differs from the address translation process, the more screwed up
things get.  the world has enough of the motorola mmu designs.

There are enough examples of good and bad designs of virtual memory
systems available that one would assume many of these discussions
would become as moot as choosing one's or two's complement arithmetic.

andree@uokvax.UUCP (01/24/84)

#R:parsec:32800003:uokvax:9900004:000:593
uokvax!andree    Jan 22 02:15:00 1984

/***** uokvax:net.arch / parsec!wallach /  2:43 pm  Jan 21, 1984 */
There are enough examples of good and bad designs of virtual memory
systems available that one would assume many of these discussions
would become as moot as choosing one's or two's complement arithmetic.
/* ---------- */

I wasn't aware that which arithmetic to use was moot. I know that the
world has (for the most part) settled on two's complement, but none of
the arguments I have heard as to why satisfy me. Could someone please
explain why this choice was made, or why it is now moot (pointers will
do)?

	Thanx,
	<mike

pkr@unisoft.UUCP (Phil Ronzone) (02/06/84)

..... (why two's complement ....) ... (prevails today) .....

Well, for a binary machine, there have been three major trends:
    - sign/magnitude
      Advantage:     Easy to read as binary.
      Disadvantages: Addition of numbers of opposite sign requires
                     either a one's or two's complement step, or
                     comparison of magnitude, sometimes both.

    - one's complement
      Advantages:    Easy sign reversal. Complementer takes (generally)
                     1/2 the logic.
      Disadvantages: Slow in compares. Has two zeros (all 0's, all 1's).

    - two's complement
      Advantages:    Carry, arithmetic shifts easier/faster to do in logic.
      Disadvantage:  Complement is slower; takes twice the logic of one's compl.
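A small demo of the three encodings (8-bit values, written in C just
to show the bit patterns):

    #include <stdio.h>

    int main(void)
    {
        unsigned x = 5;
        printf("sign/magnitude   -5: 0x%02X\n", 0x80u | x);         /* 0x85 */
        printf("one's complement -5: 0x%02X\n", ~x & 0xFFu);        /* 0xFA */
        printf("two's complement -5: 0x%02X\n", (~x + 1u) & 0xFFu); /* 0xFB */

        /* one's complement's two zeros: +0 = 0x00 and -0 = 0xFF */
        printf("one's complement -0: 0x%02X\n", ~0u & 0xFFu);
        return 0;
    }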

Summary: Two's complement won because in hardware gate count and speed, it
         was faster in compares and arithmetic shifts, easier to do
         synchronous (vs. ripple) carries, and slightly faster for +- ops.
         Since * / ops. do a lot of shifts, carries, and compares, they
         tended to be faster also (than one's complement). The only penalty
         paid was the complement operation, and that was only in gate count
         and wasn't a common operation anyway.
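One concrete illustration of the "+- ops" point: with two's
complement, a plain unsigned adder handles signed operands with no
sign-specific logic at all (a C sketch, 8-bit for readability):

    #include <stdio.h>

    int main(void)
    {
        unsigned a = 3;
        unsigned b = (~5u + 1u) & 0xFFu;   /* -5 in 8-bit two's complement */
        unsigned sum = (a + b) & 0xFFu;    /* ordinary binary addition     */
        printf("0x%02X + 0x%02X = 0x%02X (and -2 is 0xFE)\n", a, b, sum);
        return 0;
    }

Sign/magnitude hardware would first have to compare magnitudes and
then choose between an add and a subtract.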