gwyn@BRL.ARPA (VLD/VMB) (11/19/85)
The so-called "optimizer" in the older UNIX C compilers is really an "object code improver": it runs over the ALREADY-GENERATED assembly code and substitutes better instruction sequences, rearranges blocks of code to eliminate unnecessary branches, etc. Most compilers also do a very modest amount of expression tree rearrangement, but this is intended more to assist code generation than to really optimize execution. Several independent compiler vendors have added true "optimization" that works directly on the compiler's internal representation of the program being compiled. One such optimizer for PCC was described a few USENIXes ago (by Data General, I think), but that product was not going to be made available to the UNIX community at large.

Over-optimization can actually be a bad thing. Suppose one is assigning to a shared memory location or a device register or something like that. You want to know that the compiler will generate code that does what you tell it, even though it "looks" pointless. Since C was developed for just this kind of programming, that is an important consideration. However, a hook ("volatile") is being added to the language in the forthcoming ANSI X3J11 standard that will permit highly optimizing compilers while still providing a guaranteed way of telling the compiler to do what you intend in such cases. X3J11 also defines the concept of "sequence points": those places in the code at which the side effects of execution must have been completed. ("Optimizers" that turn correct programs into non-working code have been quite common in the past, but that is a different type of problem.)

I don't worry much about optimizing code like that of your example, since a programmer would presumably not be performing arithmetic on variables if constants would do. If he does, it "must" be for some good reason.