[net.arch] Missionary Position .vs. 69

david@ztivax.UUCP (07/14/86)

>> jdg@elmgate.UUCP (Jeff Gortatowsky) writes:
>>It was always
>>my feelings that, if a CPU manufacturer were to write the language compilers
>>first, THEN generate a CPU design to run it, we'd all be a lot happier.

I do NOT agree.  I first thought it made sense, but no longer.  Here
is why:

I know of one system which was completely developed like this.  Some
software people wrote "the perfect language" and the "perfect OS
concepts" and then some smart HW folks developed the hardware to
support it.  It has some really neat features, but (of course) it has
some problems too.  More good things than bad, but there is one BIG BAD
PROBLEM.  It has got to be the world's most un-portable system.  Since
the whole world is making advances in chips, the state of the art
tends to advance faster (over a long time) than any one company can, no
matter who they are.  If a system is un-portable, then it may be great
for a while, but over time, it will fail to keep up with the state of
the art, and will end up getting tossed in the trash can of history.

This sort of bothered me, but perhaps this is the reason:

> rb@cci632 (?) writes:
>Most programmers today are "top down" trained, and not used to thinking
>in terms of primitives.

I think we can all come up with dozens of examples of how systems
which are collections of useful primitives are better for developing
new solutions to problems than systems with a poor choice of
primitives.  UNIX is an obvious one.

In other words, a bottom up design allows the top levels (the "user"
language) to work better, and to be more flexible: the user is not
tied to one language which may be applicable to one use, but can
develop or choose other languages which have better primitives for the
problem at hand.
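
As a toy sketch of the point (modern C; the names here are invented for
illustration, not taken from any real system): one small, general primitive
can serve several "user level" policies, where a baked-in policy would serve
exactly one.

```c
#include <stddef.h>

/* A small, general primitive: copy into `out` every element of `in`
   that satisfies `keep`.  Returns the number of elements kept. */
size_t filter(const int *in, size_t n, int *out, int (*keep)(int))
{
    size_t kept = 0;
    for (size_t i = 0; i < n; i++)
        if (keep(in[i]))
            out[kept++] = in[i];
    return kept;
}

/* Two different "user-level" policies built on the same primitive;
   neither required touching filter() itself. */
int is_even(int x)     { return x % 2 == 0; }
int is_positive(int x) { return x > 0; }
```

The user who needs a third policy writes a three-line predicate instead of a
third copy of the loop.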

(slight pause to don flame-proof suit)

Top-down sucks, bottom up is better.


David Smyth

seismo!unido!ztivax!david

faustus@ucbcad.UUCP (07/15/86)

In article <2900019@ztivax.UUCP>, david@ztivax.UUCP writes:
> In other words, a bottom up design allows the top levels (the "user"
> language) to work better, and to be more flexible: the user is not
> tied to one language which may be applicable to one use, but can
> develop or choose other languages which have better primitives for the
> problem at hand.
> 
> (slight pause to don flame-proof suit)
> 
> Top-down sucks, bottom up is better.

What sort of silly comment is this?  There is no such thing as "bottom
up" or "top down" design -- there is good design and there is bad
design, and a good designer will think about his problem from an
overall standpoint (top down) and then based on this, create the
primitives that are needed (bottom up).  If you design programs by
indiscriminately creating primitives without any thought about what
they are to be used for, or if you think only in terms of high-level
algorithms and don't think about your low-level representations until
you are forced to, you are going to write a bad program.

	Wayne

jerryn@tekig4.UUCP (Jerry Nelson) (07/16/86)

In article <2900019@ztivax.UUCP> david@ztivax.UUCP writes:
>>> jdg@elmgate.UUCP (Jeff Gortatowsky) writes:
>>>It was always
>>>my feelings that, if a CPU manufacturer were to write the language compilers
>>>first, THEN generate a CPU design to run it, we'd all be a lot happier.
>
>I do NOT agree.  I first thought it made sense, but no longer.  Here
>is why:
>
>I know of one system which was completely developed like this.  Some
>software people wrote "the perfect language" and the "perfect OS
>concepts" and then some smart HW folks developed the hardware to
>support it.  It has some really neat features, but (of course) it has
>some problems too.  More good things than bad, but there is one BIG BAD
>PROBLEM.  It has got to be the world's most un-portable system.  Since
>David Smyth
>
Hold it!  Are you telling me that there really is such a thing as portability?
Well, assuming that there is, let's introduce the idea of hardware portability.
In other words, if we have the "perfect language" (a premise as likely as
absolute portability) AND the perfect OS, won't they become popular enough
to demand hardware upgrades to conform with them?  Imagine the thrill of having
IBM change the PC to make it compatible with Your Program.

OK David, maybe you better lend me that flame-proof suit now......

aglew@ccvaxa.UUCP (07/17/86)

>There is no such thing as "bottom up" or "top down" design -- there is good
>design and there is bad design, and a good designer will think about his
>problem from an overall standpoint (top down) and then based on this, create
>the primitives that are needed (bottom up).

Yes and no. For a particular program top down design followed by bottom up
implementation of the primitives that are needed may be good - but operating
systems aren't programs, they're toolboxes. Same thing for computer
architectures. 

The problem lies with people who create (only) the primitives that are
needed, but no more.  What this leads to in operating systems is situations
where there are three types of multiprocessor locking operations, for
example, but the original implementor only needed two, so he didn't
implement the third. When the need for the third type of lock arises nobody
can understand the original code (which is another problem in itself) so a
slightly different variety of locking mechanism is created, but this time
the second kind is left out. So you end up with four or five or six
varieties of lock, all incompatible, logically redundant, that can't be
mixed. Is this good?
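
One way out, sketched here in C (the lock type, the three modes, and their
semantics are all invented for illustration), is a single lock abstraction
parameterized by mode: the third variety lives in one interface from the
start, instead of spawning a new incompatible family when it is finally
needed.

```c
/* Hypothetical sketch: one lock abstraction, parameterized by mode.
   This is a non-blocking state machine, not a real threaded lock. */
enum lock_mode { LOCK_SHARED, LOCK_EXCLUSIVE, LOCK_UPGRADE };

struct lock {
    int readers;    /* holders in shared mode */
    int writer;     /* 1 if held exclusively */
};

/* Try to acquire; return 1 on success, 0 if the request must wait. */
int lock_try(struct lock *l, enum lock_mode mode)
{
    switch (mode) {
    case LOCK_SHARED:
        if (l->writer) return 0;
        l->readers++;
        return 1;
    case LOCK_EXCLUSIVE:
        if (l->writer || l->readers) return 0;
        l->writer = 1;
        return 1;
    case LOCK_UPGRADE:
        /* shared -> exclusive; caller holds the sole shared reference */
        if (l->writer || l->readers != 1) return 0;
        l->readers = 0;
        l->writer = 1;
        return 1;
    }
    return 0;
}
```

The point is the interface, not the implementation: because all three modes
share one lock structure, they compose instead of colliding.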

Top down design, right. Discover what primitives you need, right. Then try
to make a system out of the primitives. Imagine what type of top down design 
can use the primitives that you don't need right now. Frequently you'll find
that you can use them. Repeat until your design stops changing so
quickly.

Andy "Krazy" Glew. Gould CSD-Urbana.    USEnet:  ihnp4!uiucdcs!ccvaxa!aglew
1101 E. University, Urbana, IL 61801    ARPAnet: aglew@gswd-vms

kdd@well.UUCP (Keith David Doyle) (07/17/86)

In article <2900019@ztivax.UUCP> david@ztivax.UUCP writes:
>
>Top-down sucks, bottom up is better.
>
>David Smyth

Personally, *I* prefer to burn the candle at both ends.

Keith Doyle
ihnp4!ptsfa!well!kdd

tad@killer.UUCP (Tad Marko) (07/18/86)

In article <1449@well.UUCP>, kdd@well.UUCP (Keith David Doyle) writes:
> In article <2900019@ztivax.UUCP> david@ztivax.UUCP writes:
> >Top-down sucks, bottom up is better.
> >
> >David Smyth
> 
> Personally, *I* prefer to burn the candle at both ends.
> 
> Keith Doyle

I agree with the latter, also.  The best way is to examine the problem
and do what you need to do to get it done.
--
Tad Marko
..!ihnp4!killer!tad		||	..!ihnp4!alamo!infoswx!ntvax!tad
UNIX Connection BBS AT&T 3B2		North Texas State U. VAX 11/780
If it's not nailed down, it's mine; If I can pick it up, it's not nailed down.

elg@usl.UUCP (07/19/86)

In article <2900019@ztivax.UUCP> david@ztivax.UUCP writes:
>>> jdg@elmgate.UUCP (Jeff Gortatowsky) writes:
>>>It was always
>>>my feelings that, if a CPU manufacturer were to write the language compilers
>>>first, THEN generate a CPU design to run it, we'd all be a lot happier.
>I know of one system which was completely developed like this.  Some
>software people wrote "the perfect language" and the "perfect OS
>concepts" and then some smart HW folks developed the hardware to
>support it.  It has some really neat features, but (of course) it has
>some problems too.  More good things than bad, but there is one BIG BAD
>PROBLEM.  It has got to be the world's most un-portable system.  Since
>the whole world is making advances in chips, the state of the art
>tends to advance faster (over a long time) than any one company can, no
>matter who they are.  If a system is un-portable, then it may be great
>for a while, but over time, it will fail to keep up with the state of
>the art, and will end up getting tossed in the trash can of history.
>

For a partial example, see Multics (Honeywell Level 68). They used
special hardware to implement the segmented/ringed architecture of the
operating system, and added instructions to the processor to support
PL/1 (their choice of "perfect" language at the time, much like "C"
might be today). The example is less than complete, because the basic
architecture of the processor dates back to some ancient GE processor
that was particularly brain-damaged, but the net result is the same --
by 1976, a few years after Honeywell introduced it, Multics was for
all intents and purposes obsolete, being too slow, too expensive, and
too inefficient. Honeywell made a belated attempt in 1980 to update
the hardware enough to run at a decent speed, but it was too little,
too late -- the window of technology had passed them by.
   Interestingly enough, Prime and Stratus have re-implemented the basic
Multics OS on their computers (Prime with minicomputers, Stratus with
68000-family super-microcomputers designed for fault-tolerant
systems), with no special hardware beyond a MMU. The 68020-based
Stratus handles almost as many users as a Multics unit... pretty much
proving the point that generalized technology is going to move much
faster than you can develop the technology you need to build a
specialized system around a language and operating system.
-- 
-- Computing from the Bayous, --
      Eric Green {akgua,ut-sally}!usl!elg
         (Snail Mail P.O. Box 92191, Lafayette, LA 70509)

rb@cci632.UUCP (Rex Ballard) (07/21/86)

In article <858@ucbcad.BERKELEY.EDU> faustus@ucbcad.BERKELEY.EDU (Wayne A. Christopher) writes:
>In article <2900019@ztivax.UUCP>, david@ztivax.UUCP writes:
>> In other words, a bottom up design allows the top levels (the "user"
>> language) to work better, and to be more flexible: the user is not
>> tied to one language which may be applicable to one use, but can
>> develop or choose other languages which have better primitives for the
>> problem at hand.

There seems to be an opinion here that primitives are a feature of
language rather than of system/application/design.  This is a problem.

>> (slight pause to don flame-proof suit)
>> 
>> Top-down sucks, bottom up is better.
>
>What sort of silly comment is this?  There is no such thing as "bottom
>up" or "top down" design -- there is good design and there is bad
>design, and a good designer will think about his problem from an
>overall standpoint (top down) and then based on this, create the
>primitives that are needed (bottom up).

>If you design programs by
>indiscriminately creating primitives without any thought about what
>they are to be used for, or if you think only in terms of high-level
>algorithms and don't think about your low-level representations until
>you are forced to, you are going to write a bad program.
>
>	Wayne

Unfortunately, in many systems, the designer comes up with a cute design
like this.


		------------	    ------------
		|get record| ----> |print record|
		------------	    ------------
		     |
		     |
		-------------
		record file
		-------------


What happened to open, read, block, deblock, flush, lock,...?
And of course, each member of the record, which may contain as many
as 100 fields, flags, and states, must be accessed and manipulated.

They end up getting written into the "get record" routine.  The end
result is that "get record" becomes a 2000 line "superfunction".

Later on, someone else needs to get records from the record file
but doesn't want the record stripped of information.  So he copies
"get record" and hacks to his heart's delight.  To make it really
interesting, imagine a "file driver" which requires special ioctl
calls, or worse, a non-unix system or network comm program that
requires direct manipulation of hardware registers.

In no time at all, the design becomes a maintenance nightmare
that no one can touch.

Now, since the user's needs must change, the "record file" structure
must change as well.  What could have been a 10 line change in
one place may become a 200 line change in 20 or 30 places.
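
A sketch of the alternative (hypothetical C; the record layout and field
names are invented here): route every field access through one small
primitive per field, so a change to the record layout is a change in one
place rather than in every caller that ever copied "get record".

```c
#include <string.h>

/* Hypothetical record layout -- a real one might have 100 fields,
   but the principle is the same. */
struct record {
    char name[32];
    int  flags;
};

/* One small access primitive per field.  If the layout changes,
   only these functions change; the callers do not. */
const char *record_name(const struct record *r)  { return r->name; }
int         record_flags(const struct record *r) { return r->flags; }

void record_init(struct record *r, const char *name, int flags)
{
    strncpy(r->name, name, sizeof r->name - 1);
    r->name[sizeof r->name - 1] = '\0';
    r->flags = flags;
}
```

The reader who wants the record *unstripped* calls the same primitives; no
second 2000-line "get record" ever needs to exist.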

The fundamental problem is one of communications and design cycle.
In many cases, a system analyst or system designer spends months
writing the design documents, and then "hands it over" to the
implementors.  Eventually, the implementation design is returned
in the form of general documentation, and none of this "duplication"
comes to light until the maintenance team begins to discover it.
Even then, each member of this team may only see two or three of
these "duplicates" in his scope of duties.  It is only when they
get reassigned to several tasks, over several years, that they
realize how much effort is spent on duplicate primitives.

The biggest problem comes when such a system is "set in stone", either
in the form of ROM or Micro-code.  Changes are much more difficult to
implement in such a system than their RAM counterparts.

Even the UNIX system, with its "recompile the kernel to add a driver"
architecture, comes very close to a "set in stone" implementation.  In
addition, how many systems make "root" or "bin" the sole means of adding a
tool to the bin directories or to the libraries?  User-contributed tools are
less than trivial to get adopted, yet they are often so useful that people's
paths will often contain several ~user/bin components, or links.

If the design cycle were a matter of days, and the design/implement/document
cycle is communicated on a weekly or even daily basis, the number of
duplicate "get record member" primitives could be reduced significantly.
The old "60/1/39" (60% design, 1% coding, 39% debug/document) formula is
still valid.  The main difference is that the team might spend the entire
morning "designing", the majority of the afternoon coding/debugging
the "known levels" of the design, and come back to identify needed or
invented primitives.  These primitives can be incorporated into the remaining
design, and even "tuned" in a "universal" manner.

How many times have you seen "bcopy(3)" functionality coded in-line?
How many "convert binary byte to string" routines?
How many "test member of structure" routines?
How many times is the reason given as "speed", "efficiency", or "printf
takes too much space"?  How many times is the critical factor something
OTHER than the "sped up" routine?

In one example, the code was so fast it could execute in 4ms, but the
comm time was over 200ms.  An extra 2ms for maintainable code would not
have hindered performance that significantly.  In addition, the so-called
"optimized" code was, in many cases, slower than a functionally identical
primitive.
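
For instance (a minimal C illustration): the in-line copy loop that keeps
getting rewritten is just the library primitive, minus the tuning.

```c
#include <stddef.h>
#include <string.h>

/* The in-line version seen over and over... */
void copy_bytes(char *dst, const char *src, size_t n)
{
    while (n-- > 0)
        *dst++ = *src++;
}
/* ...does exactly what the library primitive already does -- memcpy()
   here; bcopy(3) on 4.2BSD, with src first -- and the library routine
   is usually tuned (word-at-a-time, unrolled) better than the hand
   loop ever will be. */
```

If the profile later shows the copy matters, tune it in one place: the
primitive.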

Disclaimer:
These observations come from a variety of sources, so don't consider them
a reflection on my current employer.