[comp.software-eng] Cynic's Guide to SE, part 2

neff@Shasta.STANFORD.EDU (Randy Neff) (03/18/88)

------		The Cynic's Guide to Software Engineering		------
------ an invitation to dialog, starting with the personal view of	------
------		    Randall Neff @ sierra.stanford.edu			------
------	        	March 17, 1988   	part 2			------
------------------------------------------------------------------------------
	My Software is Better than Your Software, So There !!			

One of the serious flaws in Software Engineering is that there is no standard
methodology for determining that one software program or methodology is
"better" than another.  This makes it hard for me to say "Software Engineering"
with a straight face.  The only available measure is to perform a benchmark
task and time it with a stopwatch.   But there is no way to compare the
abstract "goodness" of programming languages, operating systems, editors, 
windowing systems, etc.   There is no metric for functionality, or usability, 
or user-friendliness, or capability, or reusability, or complexity, or just 
plain "goodness".

This is unlike other engineering fields; say, electrical engineering.  There
an audio power amplifier can be judged by its cost, power output, distortion,
frequency response, crossover, phase delay, power in, heat dissipated, and
weight.   These are all numeric quantities that most EEs can measure; 
there are no subjective biases or opinions expressed.  Together, the 
values form an absolute "goodness" based on electrical engineering practice.

In contrast, in the software arena, we have feature lists, platitudes, and
grandiose claims that result in giant flame wars with all of the rationale of
religious dogma and persecution.

Let me give you an example:  The programming language Ada ...
(now admit it, already your internal flames have started)
while not perfect, was designed to support portable software, reusable
packages, well defined interfaces, and a built-in tasking model, and all
compilers are validated against a standard definition.
Now X windows was designed to be portable, with reusable libraries, well defined
interfaces, a kludgy tasking model built by passing procedure pointers, and
there is no C standard and no compiler validation.

So the question is "why wasn't X windows written in Ada?"  
(Ouch, Ouch, here come the flames.... ARGH!)
The design goals of Ada (which the language comes close to meeting) seem to
roughly match the goals for X windows.  Instead, X windows is written in a
language that is at least fifteen years old, has no standard or compiler
validation, and has to be stretched to provide the pseudo tasking and pseudo
inheritance.

I already know some of the flames:  Ada is terrible, Ada compilers cost money,
we don't know Ada, we don't want to learn Ada, Ada compilers generate bad
code (without trying any compilers),  C is the only and best programming 
language, we inherited old dusty deck code, Ada is too long a name to type,
breaking a program into logical modules and well defined interfaces is hard,
etc.  So don't bother sending me any flames!
[Also note I do not want to criticize the X window project people.  They made
the correct choice when aiming for the least common denominator systems. 
However, interfacing other languages to Xlib is nontrivial.]

Now the point of this article is not the answer to this particular question,
but to show that one of the failings of software engineering is that there is
no rational basis for answering that sort of question.   The same sort of 
questions arise over operating systems, editors, languages, tools, 
windowing systems, etc.   Why is spending money on bit-mapped windowing
workstations a better idea than buying glass teletypes into a timeshared
mainframe?  (It is, isn't it?)

A second example:  suppose that I work for a computer manufacturer.  I go
to the top level management and propose a new hardware project.  I want
ten people for two years and three million and at the end there will be a new
implementation  with twice the performance and half the manufacturing cost.
Now compare if I go to the same management and propose a new software project.
I want ten people for two years and three million and at the end there will be 
a compiler for a new language with twice the ___blank___ and half the 
___blank___.  Now what goes in the blanks so that I can sell the project?

I have been involved in religious wars about text editors (EMACS, vi, EDT,
several home built), and about operating systems (UNIX 4.3 bsd, UNIX other,
VMS, CMS, TOPS-20, SAIL, Apple Macintosh, DG AOS), and programming languages 
(Pascal & various extensions, C, MAINSAIL, Ada, Lisp & variations, Prolog, 
Fortran, APL).   And they all seem to boil down to "I like this and you're
wrong!".  A message in one of the newsgroups says something like: "we want
to improve programmer productivity.  Does anyone know of any tools for VMS?"
My internal response (admittedly bigoted) was "first drop braindamaged VMS and 
switch to ULTRIX or 4.3!". 

It seems to me that, given a set of requirements, for example, text editing,
there should be a software engineering methodology to determine what is the 
"best" program for the job.   The same should apply to operating systems, 
languages, windowing systems, and other tools.   Once such a methodology was
in place, then it could guide the incremental improvement of those programs.

Engineering, it seems to me, is based on universally agreed methods of 
measurement.   All Software Engineering has now is a stopwatch.

wesommer@athena.mit.edu (William E. Sommerfeld) (03/18/88)

[I revised quotes from the original article to use the correct name
for the X window system]

>So the question is "why wasn't the X window system written in Ada?"  
>(Ouch, Ouch, here come the flames.... ARGH!)
>The design goals of Ada (which the language comes close to) seem to sort of
>match the goals for the X window system.

X started off a lot more like UNIX did than like Ada did: two people
(Bob Scheifler and Jim Gettys) got hold of some hardware (DEC VS100
graphics _terminals_) which had no software support under UNIX, and
tried to see if they could make the hardware useful.  Some people saw
it, liked it, and it started catching on.  It was ported to MicroVAX,
Sun, Apollo, IBM RT, and numerous other workstations.  Recently, (in
the last year and a half) it has been redesigned and rewritten for
greater portability, just as UNIX, originally written in assembler,
was rewritten in C.

>Instead, X is written in a language
>that is at least fifteen years old, has no standard or compiler validation,
>and has to be stretched to provide the pseudo tasking and pseudo inheritance.

I would guess that you're referring to the X toolkit here, which is
not [yet] formally part of the X standard, and which is admittedly the
weakest part of the current X _implementation_.  The X toolkit does
use inheritance, but it does NOT use tasking (at least in the sense
that I view tasking: several independent threads of control all
running at the same time in the same address space); instead, each
`class' of widgets implements a set of operations, and supplies
pointers to functions which implement these operations.  C++ would
probably have been a better implementation language for the X toolkit;
however, C++ compilers are not (yet) as generally available as C
compilers.

Anyway, back to `X and Ada':

If I understand what I've heard correctly (I'm not sure I believe
it--it sounds like a major deficiency), Ada treats functions as
second- or third-class objects--you can't store a pointer to a
function in a data structure, or pass it as a parameter to another
function.

The current X version 11 `sample' server is heavily dependent on the
use of pointers to functions; the server side of the graphics context
(GC) is implemented as a structure containing pointers to functions
which implement the graphics primitives.  As different parameters in
the GC change (if the font is changed from a variable-width font to a
fixed-width font, or if you change line widths from one pixel to five
pixels, for example), a different procedure can be substituted which
uses a faster, special case algorithm.  This way, the common special
cases (such as fixed height and width nonoverlapping characters in
terminal emulator windows) can be executed much faster, without the
overhead of figuring out what the special cases are for each graphics
operation.  For example, if you wanted to implement circular clipping
regions in the current server, you could do so without slowing down
the 'normal case' of rectangular clipping at all.

If Ada does not support functions as objects, implementing the X
server in Ada would probably require an architecture which was not as
easy to extend or port to radically different kinds of graphics
hardware.

On a related note, I believe that Bob Scheifler feels that the `right'
way to develop an interface to X for a language other than C or Common
LISP is to do so at the protocol level, not by trying to interface to
a library which was designed for use from C.  Yes, it's more work, and
you have to spend a lot of time writing procedure stubs (or building a
`stub generator'), but the end result is something which has a much
better fit to the language.  CLX (which is to Common LISP as Xlib is
to C) is probably a good example of this.  While I haven't looked at it
all that much, it appears to make the remoteness of the window server
as invisible as possible--for example, you use `setf' to set various
attributes of windows, and the library takes care of converting that
into protocol requests.

				Bill Sommerfeld

msir_ltd@ur-tut (Mark Sirota) (03/18/88)

In article <2586@Shasta.STANFORD.EDU> neff@Shasta.UUCP (Randy Neff) writes:
> One of the serious flaws in Software Engineering is that there is no standard
> methodology for determining that one software program or methodology is 
> "better" than another.
>
> This is unlike other engineering fields; say, electrical engineering.  There
> an audio power amplifier can be judged by its cost, power output, distortion,
> frequency response, crossover, phase delay, power in, heat dissipated, and
> weight.   These are all numeric quantities that most EEs can measure; 
> there are no subjective biases or opinions expressed.  Together, the 
> values form an absolute "goodness" based on electrical engineering practice.
>
> In contrast, in the software arena, we have feature lists, platitudes, and
> grandiose claims that result in giant flame wars with all of the rationale of
> religious dogma and persecution.

Possibly you've just picked a poor example, but then again the entire
premise may be flawed.  It is completely incorrect to say that audio power
amplifiers can be judged objectively - there is as much religion in it as in
software.  (If you don't believe me, try fighting your way through rec.audio,
or ask any two audiophiles which amplifier on the market today is best.)

Sure, you can measure all those qualities of amplifiers, but determining
what's best from the data is magic.  It's one of the hottest controversies
in the audio kingdom today.  The problem is not in the measurement, it's
in the interpretation of the data.

>  [ "suggestion" about X in ADA deleted ]
> Now the point of this article is not the answer to this particular question,
> but to show that one of the failing of software engineering is there is no
> rational basis for answering that sort of question.

Not only is there no rational basis for answering it, there's no rational
basis for asking it.  There is simply no way to say that a Lincoln is
better than a Porsche (or vice-versa) because they don't cater to the same
market.  Although both will get you from point A to point B, the Lincoln
is actually better than the Porsche for some tasks.  Likewise, C is better
than Lisp for some things, and Lisp is better than C for others.

Your proposed system of measures would have an awful lot to take into
account - there are an awful lot of differing intents and purposes out
there.

> I have been involved in religious wars about text editors, and about
> operating systems, and programming languages.  And they all seem to boil
> down to "I like this and you're wrong!".

What that answer is really trying to say is that "I like this FOR THIS
PURPOSE and you're wrong!"  I'm as guilty of it as anyone - just try to
get me to use another editor.  Because another part of the problem is what
you're used to - I like EMACS-style editors, because I know how to use
them very well.  Why should I learn another editor for this task, when I
can get along with what I already know?  (I know the answer to this, so no
flames - it's a tradeoff, just like everything else.)

> there should be a software engineering methodology to determine what is
> the "best" program for the job.  ... Once such a methodology was in place,
> then it could guide the incremental improvement of those programs.

You said it yourself - FOR THE JOB.  To develop such a methodology, you'd
first have to quantify the job, and that's no easier than doing the
measurements.
-- 

Mark Sirota
 msir_ltd%tut.cc.rochester.edu@cs.rochester.edu (rochester!ur-tut!msir_ltd)

karl@triceratops.cis.ohio-state.edu (Karl Kleinpaste) (03/18/88)

neff@shasta.stanford.edu writes:
   there should be a software engineering methodology to determine
   what is the "best" program for the job.  The same should apply to
   operating systems, languages, windowing systems, and other tools.
   Once such a methodology was in place, then it could guide the
   incremental improvement of those programs.

One problem with applying such methodologies to software development
is that there are very few constraints on what one might do.  In EE,
when one is designing a new amplifier (to use the given example), the
design engineer is severely constrained by the characteristics of the
hardware one works with.  There are power limitations, the set of
available parts is not infinite and that set has individualized
limitations on what it can do, and there are specified tolerances on
all hardware used, down to and including the percentage accuracy of
the resistors in use.

In software development, there are frequently no such constraints.
One of the reasons that computer programming can be so addictive is
that the medium provides for almost total creativity.  If something
that I want doesn't exist, I can create it - at will, to my own specs,
without adherence to anything external to myself.  The set of software
`parts' is effectively, if not literally, infinite.  The level of
control one can exercise over one's environment is very very large.
Consider the configurability of one's X environment (to use the other
given example); let's see, there are various diddlings one can do to an
existing widget of some sort to change its behavior, and there are
command line options too many to mention to set up those behaviors,
and there is always the .Xdefaults file so one can do away with the
command line options.  My .Xdefaults file is relatively small; it's
only (only!) 2Kb long.  How many programs are affected (that is,
individualized) by the presence of environment variables?  The very
name "environment variable" says a great deal: One is defining one's
entire environment from scratch, to one's own tastes, to suit one's
personality.
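The kind of .Xdefaults customization described above looks something like
the fragment below.  The XTerm resource names are real ones, but treat the
exact set as illustrative; which resources a given client honors varies:

```
! Hypothetical ~/.Xdefaults fragment: each line overrides a
! compiled-in default that could otherwise be set with a
! command line option.
XTerm*font:        fixed
XTerm*scrollBar:   true
XTerm*saveLines:   512
*Background:       grey90
```

Every line is one more piece of one's environment defined to one's own
taste, which is exactly the point being made.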

Environment definition like this is easy, in fact it's pretty trivial,
for software.  If you don't like something, fine: you've got source,
or you can write new source, or whatever, so you can change it to make
it do what you want.  But for hardware, one has nowhere near the
freedom.  One is always limited by what the available parts are
capable of doing, and so one must design to fit those parts'
constraints.  Software defines its own `parts' on the fly; the only
limitations are in the area of P=?NP and that ilk.

Now, given all that raw creativity, that environment of total control,
of no constraints, one is then faced with the problem of enforcing
discipline.  As a personal opinion, it can't be done.  As others have
been saying, I'm as guilty as anyone - yasee, I've got this version of
the csh which has contracted that dreaded disease, creeping featurism,
and believe me, the condition is terminal; I'm threatening to convert
to ksh when I can find the time to convert my .login and .cshrc files
to .profile and .env.

But more generally, when one is faced with a set of existing software
`parts,' and one finds that they don't do *quite* what one wants, one
is hard-pressed to justify to oneself that one should put up with
what's there.  Again, the opportunity to be creative is too much to
bear for most programmers.  Most importantly, the concept of "it
doesn't do what I want" is a statement of pure personality, with
little or no relation to technical merits.  There are technical merits
to be considered when looking at "what I want," which leads one to
consider the question more as "what *should* I want?"  Hence, we all
have a concept of "clean code" and its inverse "spaghetti code"; we
all know that user interfaces need to have enough capabilities but not
so many as to be unwieldy or to keep one's screen so busy as to cause
one to lose one's place in one's work.  But these are the religious
issues, after all: Just how many GOTOs can you use before a piece of
code becomes "unclean?"  How much implicit type conversion via
parameter passing can one do before lint's complaints are taken
seriously?  How much menu-meta-button magic is OK before they stop
being useful and start getting in the way?  Again, these are, in large
part, statements of personality, and no amount of discussion of
technical merits can overcome it.

UNIX was developed because the OSes available for small PDPs didn't do
what a couple of folks in NJ wanted.  There is an alarming (and,
occasionally, disgusting) amount of philosophy and religion in the
resulting manner in which the UNIX system works.  All because the
exercise of raw creativity was a Good Thing in most people's minds
when they were faced with a task and a way of approaching it that
didn't fit the mold of the existing software parts.  Similar thoughts
apply to X, Emacs, shells/CommandLineInterpreters, and most any other
software product.

mjl@ritcv.UUCP (Mike Lutz) (03/19/88)

In article <2586@Shasta.STANFORD.EDU> neff@Shasta.UUCP (Randy Neff) writes:
>This is unlike other engineering fields; say electrical engineering.  There
>an audio power amplifier can be judged by its cost, power output, distortion,
>frequency response, crossover, phase delay, power in, heat dissipated, and
>weight.   These are all numeric quantities that most EEs can measure; 
>there are no subjective biases or opinions expressed.  Together, the 
>values form an absolute "goodness" based on electrical engineering practice.

Anyone who has seriously worked with hardware engineers will immediately
recognize how flawed this example is.  Religious disputes rage as fiercely
in the hardware community as in the software world: look at the RISC/CISC
debate, for instance, or the VME vs. Multibus wars of a couple years back.

And, while we're at it, let's look at the performance of the oldest
engineering profession (civil) when applied in new, uncharted territory
(the construction of nuclear power plants), where pressure from the
user community (DOE, Public Utility Commissions, etc.) caused massive
changes in the spec during implementation and testing.  Not a pretty picture,
but one that's a lot closer to what practicing software designers face
daily than is the design of (yet another) M68k board.

I do believe we could measure what we do much more precisely, and that such
measurement would give us insights that improve our work, but no metric
will be the oracle telling us the "right" way to solve our problems.

Mike Lutz
rochester!ritcv!mjl
-- 
Mike Lutz	Rochester Institute of Technology, Rochester NY
UUCP:		{allegra,seismo}!rochester!ritcv!mjl
CSNET:		mjl%rit@csnet-relay.ARPA