[comp.arch] complexity

lindsay@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay) (04/28/89)

schow@bnr-public.uucp (Stanley Chow) says:
>"Compilers can do optimizations", I hear the yelling. This is another
>interesting phenomenon - reduce the complexity in the CPU so that the 
>compiler must do all these other optimizations. I have also not seen any
>indications that a compiler can do anywhere close to an optimal job on
>scheduling code or pipelining. Even discounting the NP-completeness of
>just about everything, theoretical indications point the other way,
>especially when the compiler has to juggle so many conflicting constraints.
>
>It would be interesting to speculate on total system complexity: is it
>higher for CISC or for RISC (with its attendant memory and compiler
>requirements)?

You are forgetting separability. Moving complexity from one place to
another may be a big win, or a big lose, simply because the solution
is/isn't separable from the other problems dealt with in those
contexts.

This is the reason that the VAX's addressing modes were such a win
when the machine came out. Assembly programmers and dumb compilers
(well, compilers that had to run in little memory) both liked the
modes. The argument was that address computation was now more
separated from the instruction selection problem. Small changes to
assembler programs had a pleasantly high probability of not causing
"ripples" off into the distance.

Separability is also the reason that the VAX's addressing modes are a
bad thing now. Compilers have become big and smart, and addressing
computations are now candidates for optimization: common
subexpression elimination, strength reduction, and so on.  In short,
addresses have become first-class values. The trouble is that they
have to be manipulated with both first-class operations (i.e.
instructions) and with second-class operations (i.e.  addressing
modes). Suddenly, the modes are no longer separable from the world of
general computation.  Suddenly, they seem very irregular.
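
To make that concrete, here is a rough sketch in C (the record layout and
names are invented for illustration) of the kind of address arithmetic a
modern optimizer now treats as ordinary computation:

struct rec { int key; int weight; };

/* The address of a[i].weight is  a + i*sizeof(struct rec) + offset:
 * exactly the sort of expression that now gets fed to common
 * subexpression elimination and strength reduction like any other value. */
long sum_weights(struct rec a[], int n)
{
    long s = 0;
    int i;
    for (i = 0; i < n; i++)
        s += a[i].weight;
    return s;
}

/* After strength reduction the compiler in effect carries the address in
 * a register and bumps it by a constant each iteration, rather than
 * re-deriving it through a base+index*scale+displacement mode: */
long sum_weights_reduced(struct rec a[], int n)
{
    long s = 0;
    struct rec *p = a, *end = a + n;
    for (; p != end; p++)
        s += p->weight;
    return s;
}

Once the address lives in a register like that, a fancy mode has nothing
left to do, and it just looks irregular next to the plain add.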

(Oddly enough, autoincrement/decrement modes are still separable, and
therefore OK. You can just pretend they don't exist, and then use a
postpass that tries to fold increment instructions into the
surrounding instructions.)
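
A minimal sketch of such a postpass in C, over a made-up instruction
record (the fields and opcodes are only illustrative), might look like:

enum op { LOAD, STORE, ADDI, OTHER };

struct insn {
    enum op op;
    int base;      /* base register of a LOAD/STORE; destination of an ADDI */
    int imm;       /* immediate operand of an ADDI */
    int postinc;   /* nonzero once an increment has been folded in */
};

/* Fold "LOAD/STORE [rB] ; ADDI rB, rB, #k" pairs into a post-increment
 * access.  A real pass would also check that k is a legal mode amount and
 * that nothing in between reads rB.  Returns the new instruction count. */
int fold_autoinc(struct insn *code, int n)
{
    int i, out = 0;
    for (i = 0; i < n; i++) {
        if (i + 1 < n &&
            (code[i].op == LOAD || code[i].op == STORE) &&
            code[i + 1].op == ADDI &&
            code[i + 1].base == code[i].base) {
            code[out] = code[i];
            code[out].postinc = code[i + 1].imm;
            out++;
            i++;                      /* the ADDI has been consumed */
        } else {
            code[out++] = code[i];
        }
    }
    return out;
}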

To return to your complaint, above: compilers view code scheduling as
separable. A postpass can easily take unscheduled code, and produce a
so-so schedule. The scheduling package can be fiddled and tweaked at
leisure, and several variations can be tried, long after the chip
design is frozen. It may be complex, but the complexity is inside an
interface that optimization requirements don't muddy very much. 
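
For instance, a bare-bones list scheduler over straight-line code can live
entirely behind that interface; the only machine knowledge it needs is a
latency per instruction. A rough C sketch (tracking only true register
dependences, with made-up fields) is:

#include <stdio.h>

#define NREGS 32
#define MAXI  128

struct insn { int dst, src1, src2, latency; const char *text; };

/* Greedy postpass: each cycle, issue the first instruction (in original
 * order) whose producers have finished; otherwise stall.  Anti/output
 * dependences and memory ordering are ignored here for brevity. */
void schedule(struct insn *code, int n)
{
    int dep1[MAXI], dep2[MAXI];        /* producer of each source, or -1 */
    int finish[MAXI], done[MAXI] = {0};
    int last_writer[NREGS];
    int i, cycle = 0, issued = 0;

    for (i = 0; i < NREGS; i++) last_writer[i] = -1;
    for (i = 0; i < n; i++) {
        dep1[i] = last_writer[code[i].src1];
        dep2[i] = last_writer[code[i].src2];
        last_writer[code[i].dst] = i;
    }

    while (issued < n) {
        int pick = -1;
        for (i = 0; i < n && pick < 0; i++) {
            if (done[i]) continue;
            if ((dep1[i] < 0 || (done[dep1[i]] && finish[dep1[i]] <= cycle)) &&
                (dep2[i] < 0 || (done[dep2[i]] && finish[dep2[i]] <= cycle)))
                pick = i;
        }
        if (pick < 0) { cycle++; continue; }    /* nothing ready: stall */
        printf("cycle %2d: %s\n", cycle, code[pick].text);
        finish[pick] = cycle + code[pick].latency;
        done[pick] = 1;
        issued++;
        cycle++;
    }
}

Only the latency numbers know anything about the pipeline, so they can be
retuned without disturbing the rest of the compiler.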

-- 
Don		D.C.Lindsay 	Carnegie Mellon School of Computer Science
-- 

les@unicads.UUCP (Les Milash) (04/28/89)

In article <4844@pt.cs.cmu.edu> lindsay@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay) writes:
>schow@bnr-public.uucp (Stanley Chow) says:
>>It would be interesting to speculate on total system complexity: is it
>>higher for CISC or for RISC (with its attendant memory and compiler
>>requirements)?
>
>Moving complexity from one place to
>another may be a big win, or a big lose,

Yes, moving it outside of loops or moving it from runtime
to compile time would seem generally advantageous.
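
A trivial C illustration of both (the names are arbitrary): the constant
part can be evaluated at compile time and the invariant part hoisted out
of the loop:

void scale(double a[], int n, double x)
{
    int i;
    for (i = 0; i < n; i++)
        a[i] = a[i] * (x * (180.0 / 3.14159265));   /* loop-invariant */
}

/* What the optimizer effectively produces: the division is folded at
 * compile time and the invariant product is computed once, not n times. */
void scale_opt(double a[], int n, double x)
{
    double k = x * (180.0 / 3.14159265);   /* folded and hoisted */
    int i;
    for (i = 0; i < n; i++)
        a[i] = a[i] * k;
}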

jlg@lanl.gov (Jim Giles) (04/29/89)

schow@bnr-public.uucp (Stanley Chow) says:
>"Compilers can do optimizations", I hear the yelling. This is another
>interesting phenomenon - reduce the complexity in the CPU so that the 
>compiler must do all these other optimizations. I have also not seen any
>indications that a compiler can do anywhere close to an optimal job on
>scheduling code or pipelining. Even discounting the NP-completeness of
>just about everything, theoretical indications point the other way,
>especially when the compiler has to juggle so many conflicting constraints.
>
>It would be interesting to speculate on total system complexity: is it
>higher for CISC or for RISC (with its attendant memory and compiler
>requirements)?

Actually, increasing the complexity of instructions also _increases_ the
complexity of the compiler.  This happens because the compiler now has an
additional degree of freedom in its optimization search-space: instruction
selection!  The triad of scheduling, register allocation, and instruction
selection all feed back on one another.  If the first two are already 
NP-complete, the problem is obviously made much worse by the introduction
of alternate instructions.
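
As a toy illustration in C (the sequences and cost numbers are invented):
even one expression like x*9 has several lowerings, and which one is
"best" depends on exactly the quantities the scheduler and register
allocator are also fighting over:

#include <stdio.h>

struct choice { const char *seq; int latency; int extra_regs; };

/* Pick the lowering that minimizes a toy cost weighing latency against the
 * extra registers it keeps live; "pressure" stands in for information the
 * register allocator only has later -- which is exactly the feedback loop. */
static const struct choice *pick(const struct choice *c, int n, int pressure)
{
    const struct choice *best = &c[0];
    int i;
    for (i = 1; i < n; i++)
        if (c[i].latency + pressure * c[i].extra_regs <
            best->latency + pressure * best->extra_regs)
            best = &c[i];
    return best;
}

int main(void)
{
    static const struct choice mul9[] = {
        { "mul r2, r1, #9",                  4, 0 },  /* one slow instruction */
        { "lsl r3, r1, #3 ; add r2, r3, r1", 2, 1 },  /* two fast ones, a temp */
    };
    printf("low pressure:  %s\n", pick(mul9, 2, 0)->seq);
    printf("high pressure: %s\n", pick(mul9, 2, 3)->seq);
    return 0;
}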