[comp.arch] Optimizers, dead code elimination

pardo@june.cs.washington.edu (David Keppel) (07/18/88)

colwell@mfci.UUCP (Robert Colwell) writes:
>[ Why should a compiler check for zero-function (no-effect) ]
>[ functions and remove them?  Such functions seldom happen  ]
>[ in real code.                                             ]

A bad answer is that it makes benchmarks, which often contain
zero-function functions, look good.  Case in point: it is rumored that
Perkin-Elmer bragged that their hardware/compiler ran a benchmark
nearly as fast as <some very fast machine>, at <some very small
fraction> of the cost.  The benchmark?  Dhrystone.  The reason?
Zero-function function elimination.  Stupidity *is* its own reward.

A better answer is that if you can discover enough about a function
(e.g., no side effects) then you can move the call (code motion) to
the place that it does the most good.  I know of one program that ran
more than 10 times faster after the following (manual) loop
transformation:

    r = value;
    while( bool ) {
	x[i] = f(x[i], expression using g(r));
	++i;
    }

became:

    r = value;
    t = g(r);
    while( bool ) {
	x[i] = f(x[i], expression using t);
	++i;
    }

There were two reasons for this:
(1) The function g() was being called needlessly inside the loop.
    The compiler couldn't move it outside the loop, because it
    couldn't "tell" the return value was invariant over the loop, even
    though the parameters were all invariant (value parameters).
(2) With the function call out of the loop and "t" known invariant,
    the expression could be further simplified.

	;-D on  ( Clearly an architecture issue :-)  Pardo