[comp.os.msdos.programmer] Optimization

a269@mindlink.UUCP (Mischa Sandberg) (07/23/90)

There seems to be a broad opinion that optimization is some
form of cheating on source compilation, and that the rule is
caveat emptor.  Having worked with Fortran G, Bliss-32 and
various levels of optimizing C, it took me a while to realize
how this low view of optimizing came to be.

It's true, "C" allows you to do many quick and dirty things with
pointers. Discounting really gross things (like the assumptions
that implementations of "varargs" like to make), it may come
as a shock to many MS-C users that compilers for languages with
pointers and address-of operators can actually be smart enough
not to attempt an optimization where it may be risky. Bliss, for
example, optimizes the bejesus out of code (it tends to produce
better code than can be assembled "by hand") and is trustworthy
enough to write OS code in. Hmm, that puts into a different
light the rumours of why OS/2 had to be done in assembler :-).
It's a pretty wretched development philosophy if MS has to
document IN THE MANUALS that optimization will barf at
trivial cases like
        for ( i = 0; i < limit; i++ )
                if ( x != 0 )
                        a[i] = 1/x;
for x == 0 ( admittedly bozo code, but ...)

I'm not aware of what optimization bugs (as opposed to straight
code-generation bugs) are in TC++, but if they are comparable,
might I suggest that developers VOTE WITH THEIR FEET against
such outrageous abuse by compiler developers. Especially considering
products that are at version 6.0 (or higher, soon, no doubt).

cgeisler@maytag.waterloo.edu (Craig Eisler) (07/26/90)

In article <2587@mindlink.UUCP> a269@mindlink.UUCP (Mischa Sandberg) writes:
>It's a pretty wretched development philosophy if MS has to
>document IN THE MANUALS that optimization will barf at
>trivial cases like
>        for ( i = 0; i < limit; i++ )
>                if ( x != 0 )
>                        a[i] = 1/x;
>for x == 0 ( admittedly bozo code, but ...)

This only happens with the so-called aggressive optimizations on; what 
the compiler does then is make brain-dead assumptions about loop-invariant
code (along with other things).  They give you the option of using
these optimizations if you haven't done things like that in your code.

Of course, when developing anything of substance, you can never be sure.
So leave "aggressive" optimizations filed under "stupid things Microsoft
has done".  In my opinion, Microsoft did this because it helped them
get better numbers on certain benchmarks.

On the other hand, their normal optimizations are fine.  You misrepresent
what they are doing; what is this "outrageous abuse by compiler developers"?
They have simply said: with this switch on, we will relax a number of 
assumptions we normally make.  No one is abusing you.

craig
-- 
Craig Eisler, still hiding from the real world.
University of Waterloo, Waterloo, Ontario.

ralphc@tekcae.CAX.TEK.COM (Ralph Carpenter) (07/28/90)

In article <1990Jul25.203325.15173@maytag.waterloo.edu> cgeisler@maytag.waterloo.edu (Craig Eisler) writes:
>In article <2587@mindlink.UUCP> a269@mindlink.UUCP (Mischa Sandberg) writes:
>>It's a pretty wretched development philosophy if MS has to
>>document IN THE MANUALS that optimization will barf at
>>trivial cases like
>>        for ( i = 0; i < limit; i++ )
>>                if ( x != 0 )
>>                        a[i] = 1/x;
>>for x == 0 ( admittedly bozo code, but ...)
>
>This only happens with the so-called aggressive optimizations on; what 
>the compiler does then is make brain-dead assumptions about loop-invariant
>code (along with other things).  They give you the option of using
>these optimizations if you haven't done things like that in your code.
>
>Of course, when developing anything of substance, you can never be sure.
>So leave "aggressive" optimizations filed under "stupid things Microsoft
>has done".  In my opinion, Microsoft did this because it helped them
>get better numbers on certain benchmarks.
>
>On the other hand, their normal optimizations are fine.  You misrepresent
>what they are doing; what is this "outrageous abuse by compiler developers"?
>They have simply said: with this switch on, we will relax a number of 
>assumptions we normally make.  No one is abusing you.
>

Being a successful contender in the marketplace means finding the right
balance: finding every bug in a complex program probably means getting to
market much later than the users and the boss will tolerate.

The compiler writers at Microsoft are probably restrained by the marketeers
at Microsoft from labeling "aggressive" optimizations truthfully.  "Unsafe",
"dangerous", etc., would be more honest labels for these forms of optimization.
I applaud those (few) honest companies that so label these options.

Actually, there is nothing inherently unsafe or dangerous about the aggressive
optimization techniques.  They do demand more skill though, and the problems
in the released code are a function of the compiler writer's skill, aggravated
of course by market/competitive/schedule pressures.

Years ago I read a book on jogging.  At one point it addressed a common
question:  If exercise is healthy, why do we always see athletes on crutches?
The answer:  Normal exercise *is* healthy and it helps our body achieve its
maximum potential.  Almost always, when you see an athlete on crutches, it is
because that athlete gloried in seeing how much *beyond* his potential he could
get away with, and for how long.

Ralph Carpenter
Tektronix, Inc.
Beaverton, OR

fouts@bozeman.bozeman.ingr.UUCP (Martin Fouts) (08/06/90)

Having spent some time debugging optimizers, I would like to point out
that changing the optimization level changes the semantics of the
language being accepted by the compiler and therefore the code
generated.

A good compiler will document the new assumptions made about code so
that a compiler user will be able to determine if they are appropriate
for the code being optimized, but all compilers generate code for
slightly different languages when the optimizer is on than when it is
off.

Consider the pre-ANSI handling of 'volatile' variables.  Turning
register optimization on is telling the compiler that you do not use
volatile variables and that it can determine the liveness of automatic
variables by looking at the local source code.  This will break
applications which depend on 'volatile' variables which are set
outside of the source code.

The "bugs" that ensue usually fall into the niches left by the
standards writers for the compiler writer to implement.  Often a
compiler will handle an implementation-defined behavior differently
when the optimizer is on than when it is off.  This is not a bug, but
a feature, since the standard allows the compiler writer to do so.
--
Martin Fouts

 UUCP:  ...!pyramid!garth!fouts (or) uunet!ingr!apd!fouts
 ARPA:  apd!fouts@ingr.com
PHONE:  (415) 852-2310            FAX:  (415) 856-9224
 MAIL:  2400 Geng Road, Palo Alto, CA, 94303

Moving to Montana;  Goin' to be a Dental Floss Tycoon.
  -  Frank Zappa