rpw3@redwood.UUCP (Rob Warnock) (03/02/85)
David Schachter says in <77@daisy.UUCP>:
+---------------
| Large address spaces are convenient. They are not essential. Moreover, their
| convenience can rob you of the incentive to get maximum performance. The
| Intel architecture is a dark cloud with a silver lining: the need to keep
| within the small address space frequently causes us to find solutions that are
| smaller and faster, helping us meet our performance goals.
+---------------

As much as I dislike the 8086 architecture (for more reasons than the
segment size), I must agree that David has a good point, though it will
certainly be counter-intuitive to those who have not "been there".

We found the same thing to be true many years ago in the PDP-8/PDP-11
wars -- by comparison to the PDP-8, the PDP-11 has "infinite" linear
address space (at least that was the feeling at the time), yet PDP-8 code
was quite often faster and smaller. Why? Because -- and this has NOT
changed even today -- the PDP-8 programmers had to look at their code
over and over again to get it to fit within the 128-word page (and
4096-word field) boundaries. In the process of re-reading, they saw
better ways to do things (i.e., better algorithms and data structures),
which were BOTH smaller and faster (and in many cases more reliable).

A similar effect was noticed some years later, when DEC first started
using BLISS-10 to code compilers and other system utilities on the
PDP-10. People started complaining, "BLISS code is so BIG!", even though
the compiler generated reasonably good code. What was going on? Two
things: (1) by writing in BLISS instead of assembler, programmers were
able to conceive of bigger things -- fancier code optimization in the
FORTRAN compiler, for example -- so they did them; and (2) the BLISS code
was more reliable (during development, at least), so the programmers
didn't have to do as much debugging and hence DIDN'T READ THEIR OWN CODE
as much. As a result, the usual re-writing that accompanies the
re-reading simply didn't happen!
(I presented the latter point at a DECUS meeting as "Warnock's Principle
of Why BLISS Programs Are Big". ;-} But I wasn't kidding all that much.)

We will see the same phenomenon today with "C" on PCs, as programmers
move from assembler. (This is in addition to the problem that "C" on most
PCs generates fat code.) We have already seen it in UNIX. When a program
is re-read and tuned and massaged, it gets better, and it doesn't matter
whether the reason it was looked at was that it didn't fit a space or
time constraint, or that it simply had a few bugs in it (although the
buggy program is also more likely to have bugs in the shipped version!).
When it is simply hacked out and never looked at again, it is huge and
slow. The very act of re-reading is "good" for program quality.

The challenge is to structure our organizations so that we ensure that
programs are read and re-read, and that the resulting improvements are
incorporated -- DESPITE the pressures to "ship it", even WITHOUT ugly
machine architectures squeezing us, WITHOUT funny address-space glitches,
and WITHOUT defects to force us to "de-bug" (remove the mistakes we put
in) -- in short, to ensure that we consciously exercise programming
discipline even in a hospitable environment (where, too often, we simply
"flop"). Then our productivity will be high and our customers happy.

Rob Warnock
Systems Architecture Consultant

UUCP:	{ihnp4,ucbvax!dual}!fortune!redwood!rpw3
DDD:	(415)572-2607
USPS:	510 Trinidad Lane, Foster City, CA 94404