[comp.arch] Machine-specific vs. Machine-independent code

roskos@csed-1.UUCP (Eric Roskos) (02/02/88)

In article <3081@watcgl.waterloo.edu> tbray@watsol.waterloo.edu writes:
> >>Algorithms si!  Low-level hacking no!
In article <578@srs.UUCP> dan@rem.UUCP (Dan Kegel) writes:
> >Yes... but... sometimes the [assembly-language] payoff is great.  
In article <580@srs.UUCP>, matt@srs.UUCP (Matt Goheen) writes:
> Must agree... [I recoded a square root routine in assembler and found it more]
> than 3 times faster than the carefully coded C routine...

This is a very old debate (as old as FORTRAN compilers), and there is no
simple answer, which is why we see many testimonials for all
sides.  However, having worked in the past at a place where machine-dependent
code was considered essential even when the code was written in C, I have
spent many hours trying to understand the problem.

It is inevitable, given current technology and current machines (by which I
mean what is generally available in microcomputers, since those tend to be
the particular interest of people in this group), that coding some things in
assembly language makes them faster.  Even with good optimization of the
high-level language, most languages have things you cannot express in the
language but can express in the machine language in a form that will run
faster on that machine (a sort of reverse semantic gap).
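
To make the "reverse semantic gap" concrete, here is a sketch of my own
(the routine and the 8086 instruction are chosen purely for illustration,
not taken from any of the postings above).  A 16-bit left rotate has no
direct expression in C, so the compiler must synthesize it:

	/* Portable C for a left rotate by n, 0 < n < 16: the compiler
	 * typically emits two shifts, an OR, and a mask.
	 */
	unsigned
	rotl16(x, n)
	unsigned x;
	int n;
	{
		return ((x << n) | ((x & 0xFFFF) >> (16 - n))) & 0xFFFF;
	}

	/* On an 8086 the body collapses to a single instruction,
	 *
	 *	rol	ax, cl
	 *
	 * which is what a hand-coded assembly version would use directly.
	 */

Whether the difference matters depends, as always, on how often the
routine is executed.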

It also seems that if you have virtually infinite resources, in the real
world you will be more successful writing the machine-dependent code and
then rewriting the whole thing for each new machine.  I've actually seen
this done, and although it meant rewriting the product repeatedly (sometimes
just to produce successive versions for the same machine), I have found it
impossible to argue against the practice on purely technical grounds.

Ultimately, what is at issue on the other side of the debate is not the
purely technical considerations, but the "human factors" ones -- not of the
users, but of the programmers (and the project managers).  Machine-dependent
code tends to require a lot more maintenance, since you can't just change
the compiler and recompile to adapt it to architectural changes (even changes
as simple as increasing the address space of the machine; see the sketch
below).  As code is maintained it tends to degenerate: it becomes convoluted
and eventually has to be rewritten.  Yet, again, if you have essentially
infinite resources you can consider *that* not to be a problem, and in some
places in the real world that is exactly how it is treated.
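
A made-up but, I think, representative instance of the address-space point
(the macros below are my own illustration, not from any product I have
seen): hiding a flag in the "unused" top bit of a pointer works fine on a
small machine, and even survives recompilation -- until addresses grow past
15 bits, at which point no compiler switch can save it and the scheme
itself must be redesigned.

	/* Machine-dependent: assumes every valid address fits in 15
	 * bits, so bit 15 of a pointer is free to hold a mark flag.
	 */
	#define MARK(p)      ((char *)((unsigned)(p) | 0x8000))
	#define UNMARK(p)    ((char *)((unsigned)(p) & 0x7FFF))
	#define IS_MARKED(p) (((unsigned)(p) & 0x8000) != 0)

	/* On a successor machine with a larger address space, 0x8000
	 * becomes a legitimate address bit: MARK quietly corrupts
	 * pointers, and recompiling does not help.
	 */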

But even when you have infinite resources, there remain other human factors;
people simply find it hard to work on convoluted code.  Consequently, they
get discouraged with it, and don't do as well on it, or leave and go some
place else.  Some of the more determined ones actually get "burned out"
altogether.  This can have long-term effects on the software's quality
even though it is technically very good (runs fast, makes good use of the
hardware, does more than other products, etc.).

The underlying principle of this, though, is fairly simple.  A certain
amount of the architectural simplification that is done occurs not purely
for technical reasons, but because of human limitations.  These are much
harder to quantify (if they are quantifiable at all) but, I would argue, of
considerable importance.  If people themselves had infinite capacity,
we could build really simple machines and generate the lowest-level code
for them directly.  Most of the improvements over that (including the 
ones that reflect architectural "elegance") are, it seems, actually artifacts
of limitations in one's ability to understand arbitrarily complex things.
But they are nevertheless necessary, simply because they are real
limitations, and providing ways to overcome them allows one to direct
oneself to more productive things.
-- 
Eric Roskos, IDA (...dgis!csed-1!roskos or csed-1!roskos@HC.DSPO.GOV)
	"Only through time time is conquered."  -- Burnt Norton