[comp.compilers] Optimization tradeoffs

roy@phri.uucp (Roy Smith) (08/02/88)

	Some compilers have options to select what kind of optimization to
do: space or time.  Can somebody give me some idea of how much difference
it makes which you pick?  Lots of optimizations (most?) reduce both object
code size and execution time.  For example, in the fragment:

	X = Y
	X = Z

it's obvious that removing the code generated from the first statement will
do both.  To my naive mind, it seems that the difference between the two
types of optimization would be rather small.
-- 
Roy Smith, System Administrator
Public Health Research Institute
{allegra,philabs,cmcl2,rutgers}!phri!roy -or- phri!roy@uunet.uu.net
"The connector is the network"
[In my experience, except in contrived programs the difference is usually
insignificant.  The one exception that leaps to mind is situations like a
C switch statement that involves looking something up in a sparse table.
The fastest code would subscript a huge array, while the smallest would
probably do a hash or linear search.  The paging overhead associated
with the huge array often negates any speed advantage, but compiler
writers usually figure that's somebody else's problem.  -John]
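
To make that concrete, here's a sketch in C of the small-but-slow
lowering (the case values and the dispatch() helper are made up for
illustration): the sparse cases become a table searched linearly,
rather than a huge jump table indexed by the switched-on value.

	#include <stdio.h>

	/* Hypothetical sparse case values: 2, 500, and 90000.  A
	   time-optimizing compiler could index a ~90000-entry jump
	   table; this is the space-optimizing alternative. */
	static const struct { long key; int action; } cases[] = {
		{ 2, 0 }, { 500, 1 }, { 90000, 2 },
	};

	static int dispatch(long key)
	{
		size_t i;

		/* linear search: a few words of table, a few compares
		   per lookup */
		for (i = 0; i < sizeof cases / sizeof cases[0]; i++)
			if (cases[i].key == key)
				return cases[i].action;
		return -1;	/* the "default:" label */
	}

	int main(void)
	{
		printf("%d\n", dispatch(500));	/* prints 1 */
		return 0;
	}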
--
Send compilers articles to ima!compilers or, in a pinch, to Levine@YALE.EDU
Plausible paths are { ihnp4 | decvax | cbosgd | harvard | yale | bbn}!ima
Please send responses to the originator of the message -- I cannot forward
mail accidentally sent back to compilers.  Meta-mail to ima!compilers-request

dplatt@ncar.UCAR.EDU (Dave Platt) (08/06/88)

In article <1853@ima.ISC.COM> roy@phri.uucp (Roy Smith) writes:
> 
> 	Some compilers have options to select what kind of optimization to
> do: space or time.  Can somebody give me some idea of how much difference
> it makes which you pick?

Dead-code removal, such as the dead store in your example, is one sort of
optimization that is Good in both the time and space senses; it'll probably
occur in both optimization modes, if it occurs in either.

There are certain other optimizations, though, in which there can be a
real tradeoff between time and space.  Some examples:

	for (i=0;  i<10; i++)
	  a[i] = b[i];

In the "optimize for time" mode, a compiler might rewrite this as

	a[0] = b[0];
	a[1] = b[1];
	    ...
	a[9] = b[9];
	i = 10;  /* if the value of "i" is used subsequently */

and thus avoid the loop overhead (and, potentially, the overhead involved
in using register indexing to access entries in the "a" and "b" arrays).

A space-optimizing compiler might call a "move memory to memory" subroutine
to transfer the contents of the arrays, and then (if necessary) set i=10.
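
Concretely, the code generated in that mode might behave like this
sketch (memcpy here stands in for whatever memory-move primitive the
library actually supplies):

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		int a[10], b[10], i;

		for (i = 0; i < 10; i++)
			b[i] = i * i;

		/* space-optimized lowering of
		   "for (i=0; i<10; i++) a[i] = b[i];" --
		   one library call instead of a loop or an unrolled
		   sequence of moves */
		memcpy(a, b, sizeof a);
		i = 10;		/* only if "i" is used subsequently */

		printf("a[9] = %d, i = %d\n", a[9], i);
		return 0;
	}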

Similarly, 

	struct foo a, b;
	
	a = b;

may be done in-line with a loop, with unrolled load/store/load/store
instruction sequences, or with a call to a general-purpose
memory-moving subroutine.  The instruction sequence chosen would depend
on sizeof(foo), on the optimization mode, the machine's instruction set
and addressing modes, and the state of the compiler-writer's liver and
the phase of the moon.

I'd tend to believe that optimizations fall into two classes:

1) Those that occur prior to the actual generation of machine-language
   instructions.  These optimizations will often be implemented as
   transformations on the code-generator's internal representation of the
   program (e.g. examining the "for loop" code-tree from the example above,
   and rewriting it as an unrolled set of moves).
   
   These transformations can have a very large effect on the actual code
   sequences generated... at times, it can be difficult to figure out how
   the code actually generated corresponds to the source code!
   
   The biggest time/space tradeoffs are made in these sorts of optimizations,
   I believe.

2) Those optimizations made after code is actually generated.  These
   tend to be more local... register optimizations, dead code
   elimination, short-circuiting jumps to jumps, etc.  They tend to be
   Good in both the time and space senses.

Some compilers seem to rely heavily on the class-2 (late) optimizations,
using peephole optimizers and the like.   More powerful, "optimizing"
compilers include class-1 (early) optimizations and transformations as
well.
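
As an illustration of the late kind, here's a sketch of jump-to-jump
short-circuiting written as a pass over a toy instruction array (the
instruction set and names are invented for the example; a real
peephole optimizer does this on actual machine code):

	#include <stdio.h>

	enum op { OP_JMP, OP_OTHER };

	struct insn {
		enum op op;
		int target;	/* for OP_JMP: index of destination insn */
	};

	/* Redirect each jump whose destination is itself a jump, so
	   control transfers straight to the final target.  The hop
	   counter guards against jump cycles. */
	static void thread_jumps(struct insn *code, int n)
	{
		int i, t, hops;

		for (i = 0; i < n; i++) {
			if (code[i].op != OP_JMP)
				continue;
			t = code[i].target;
			for (hops = 0; hops < n && code[t].op == OP_JMP; hops++)
				t = code[t].target;
			code[i].target = t;
		}
	}

	int main(void)
	{
		struct insn code[4] = {
			{ OP_JMP,   2 },	/* 0: jmp 2 -- becomes jmp 3 */
			{ OP_OTHER, 0 },	/* 1: ...   */
			{ OP_JMP,   3 },	/* 2: jmp 3 */
			{ OP_OTHER, 0 },	/* 3: ...   */
		};

		thread_jumps(code, 4);
		printf("insn 0 now jumps to %d\n", code[0].target);	/* 3 */
		return 0;
	}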

-- 
Dave Platt                                             VOICE: (415) 493-8805
  USNAIL: Coherent Thought Inc.  3350 West Bayshore #205  Palo Alto CA 94303
  UUCP: ...!{ames,sun,uunet}!coherent!dplatt     DOMAIN: dplatt@coherent.com
  INTERNET:   coherent!dplatt@ames.arpa,    ...@sun.com,    ...@uunet.uu.net

markhall@pyramid.pyramid.com (Mark Hall) (08/06/88)

Anyone interested in the space vs. speed problem in optimization
should (of course) read the classic:

	%A William A. Wulf
	%A Richard K. Johnsson
	%A Charles B. Weinstock
	%A Steven O. Hobbs
	%A Charles M. Geschke
	%T The Design of an Optimizing Compiler
	%P 165
	%I ELSEVIER
	%C New York
	%D 1975

They were interested in the problem since they were compiling for
a PDP-11.  As an example, I believe their Bliss-11 compiler would try to 
recognize common code sequences, and would `hoist' this code into
a subroutine call!
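
In source-level terms the effect would be roughly this (a made-up C
illustration; the actual transformation worked on the generated code,
not on the source):

	#include <stdio.h>

	struct rec { int a, b; };

	/* the hoisted copy of the formerly-duplicated sequence */
	static int sum_scaled(const struct rec *r, int scale)
	{
		return (r->a + r->b) * scale;
	}

	int main(void)
	{
		struct rec p = { 1, 2 }, q = { 3, 4 };

		/* each former copy of the sequence is now a call; the
		   program shrinks at the cost of call/return overhead */
		printf("%d %d\n", sum_scaled(&p, 10), sum_scaled(&q, 10));
		return 0;
	}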

-Mark Hall (smart mailer): markhall@pyramid.pyramid.com
	   (uucp paths  ): 
		{amdahl|decwrl|sun|seismo|lll-lcc}!pyramid!markhall

willc@RELAY.CS.NET (Will Clinger) (08/06/88)

I agree that most optimizations that save time in C-like languages also
save space, and vice versa.  Someone mentioned the C switch statement
as an exception; loop unrolling and procedure integration are others.

In some languages the time/space tradeoff is more dramatic.  For example,
generic arithmetic operations such as addition in Scheme and Common Lisp
are too complicated to code in line.  The compiler has two practical
alternatives.  One is to use a subroutine call.  The other is to generate
code for the most common (fixnum) case in line together with code that
traps to a subroutine for all other cases.  The simple subroutine call
is more compact but is slower.  Arithmetic is so ubiquitous that this
can make a significant difference.

             moderately typical # of instructions for generic (+ x 1)
                            static      dynamic
    subroutine call           2            7
    inline with trap          6            4
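
Here's a sketch in C of the second strategy, inline fixnum arithmetic
with a trap to a subroutine (the tagging scheme, low bit 0 marking a
fixnum, and the generic_add_slow() helper are hypothetical; real
Scheme and Common Lisp systems differ in detail):

	#include <stdint.h>
	#include <stdio.h>

	typedef intptr_t lispobj;	/* tagged: low bit 0 => fixnum */

	#define FIXNUM_P(x)	(((x) & 1) == 0)
	#define MAKE_FIXNUM(n)	((lispobj)(n) << 1)
	#define FIXNUM_VAL(x)	((x) >> 1)

	/* out-of-line slow path: bignums, flonums, type errors
	   (stubbed here) */
	static lispobj generic_add_slow(lispobj a, lispobj b)
	{
		fprintf(stderr, "slow path taken\n");
		return MAKE_FIXNUM(0);
	}

	/* what the compiler emits in line for generic +: check the
	   tags, do the raw add in the fixnum case, trap to the
	   subroutine otherwise (overflow check omitted for brevity) */
	static lispobj generic_add(lispobj a, lispobj b)
	{
		if (FIXNUM_P(a) && FIXNUM_P(b))
			return a + b;	/* tags are 0, so a raw add works */
		return generic_add_slow(a, b);
	}

	int main(void)
	{
		lispobj x = MAKE_FIXNUM(41);

		/* prints 42 via the fast path */
		printf("%ld\n", (long)FIXNUM_VAL(generic_add(x, MAKE_FIXNUM(1))));
		return 0;
	}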

William Clinger
Semantic Microsystems, Inc.

gbs@nsc.nsc.com (George Smith) (08/08/88)

In article <1896@ima.ISC.COM>, markhall@pyramid.pyramid.com (Mark Hall) writes:
> Anyone interested in the space vs. speed problem in optimization
> should (of course) read the classic:
> 
> 	%A William A. Wulf
> 	%T The Design of an Optimizing Compiler
> 	%I ELSEVIER
> 	%C New York
> 	%D 1975
> 
It is worse than my worst nightmare!  Every time I read something about
compilers, I am referred to "that classic, Wulf etc"  But is it available
to the general public?  NoooOOOOooo!  It is out of print and those lucky
people who have a copy don't let it out of their sight.  I know that it
does exist because I once saw (and touched) a copy.  I will make an
offer publicly right now, if anyone will lend me their copy, and if I can
get permission from the publishers, I will type the entire text into a
troff format file and make it available to the entire net!  Any takers?

George B. Smith
National Semiconductor
{amdahl! sun! hplabs! pyramid! }nsc!gbs
[Bill Wulf occasionally reads this group -- any possibility of
reissuing it?  -John]