lindsay@k.gp.cs.cmu.edu (Donald Lindsay) (05/26/88)
In article <1035@astroatc.UUCP> johnw@astroatc.UUCP (John F. Wardale) writes:
>People claim stack machines can give you fast execution and dense code.
>I have two arguments against this:
>1: Code Size
>  Several studies yield overwhelming evidence that almost all code
>  takes one of these three forms:
>  1: a=b   2: a=a+b   3: a=b+c   (+ is an operation)
>
>  Stack based code gains NOTHING in these cases.  Mem-to-mem code wins!

A cautionary note.  The studies said that the code TOOK THE FORM OF
a=b+c, but they considered a[i] = b[i,j] + c[j] to have had that form.
So, any discussion should talk about addressing modes, about loops, and
about the effects an architecture has on compiler optimizations.

I've been told that some stack machines essentially prevented common
subexpression elimination.  (Where to put the temporary?)  But I assume
this wasn't a problem on the HP3000, which had (very limited) random
access down into the stack.
-- 
Don		lindsay@k.gp.cs.cmu.edu		CMU Computer Science
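To make the code-size argument concrete, here is a toy sketch of the two styles for a=b+c. Everything here is invented for illustration: the opcodes, the interpreters, and the instruction counts stand in for whatever real encodings a given machine would use. The point is only that the stack form needs four instructions where a three-address memory-to-memory form needs one.

```python
# Toy comparison: a = b + c on a hypothetical stack machine vs. a
# hypothetical memory-to-memory (three-address) machine.  All opcodes
# and encodings are invented for this sketch.

def run_stack(program, mem):
    """Interpret (opcode, operand) pairs on an operand stack."""
    stack = []
    for op, arg in program:
        if op == "PUSH":          # push a memory cell onto the stack
            stack.append(mem[arg])
        elif op == "ADD":         # pop two values, push their sum
            stack.append(stack.pop() + stack.pop())
        elif op == "POP":         # pop the top into a memory cell
            mem[arg] = stack.pop()

def run_mem2mem(program, mem):
    """Interpret (opcode, dst, src1, src2) three-address instructions."""
    for op, dst, s1, s2 in program:
        if op == "ADD3":          # dst = src1 + src2, all in memory
            mem[dst] = mem[s1] + mem[s2]

# a = b + c, stack style: 4 instructions
stack_code = [("PUSH", "b"), ("PUSH", "c"), ("ADD", None), ("POP", "a")]
# a = b + c, memory-to-memory style: 1 instruction
m2m_code = [("ADD3", "a", "b", "c")]

mem1 = {"a": 0, "b": 2, "c": 3}
mem2 = {"a": 0, "b": 2, "c": 3}
run_stack(stack_code, mem1)
run_mem2mem(m2m_code, mem2)
print(mem1["a"], mem2["a"], len(stack_code), len(m2m_code))  # 5 5 4 1
```

Whether the four short stack instructions actually encode into more bits than the one long three-address instruction depends on the operand encodings, which is exactly why the addressing-mode question above matters more than the instruction count alone.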
fpst@hubcap.UUCP (Steve Stevenson) (05/27/88)
> In article <1035@astroatc.UUCP> johnw@astroatc.UUCP (John F. Wardale) writes:
>>People claim stack machines can give you fast execution and dense code.
>>I have two arguments against this:
>>1: Code Size
>>  Several studies yield overwhelming evidence that almost all code
>>  takes one of these three forms:
>>  1: a=b   2: a=a+b   3: a=b+c   (+ is an operation)

It's also fair to point out that many of these old studies were done on
numerical codes, for compilers that optimized improperly.  Due to the
nature of numerical codes and numerical error, lots of people wanted to
make sure that things happened exactly the way the analysis went.

Non sequitur for the day: while speed is fine, correctness is more
important.
-- 
Steve Stevenson (aka D. E. Stevenson)     fpst@hubcap.clemson.edu
Department of Computer Science,           fpst@clemson.csnet
Clemson University, Clemson, SC 29634-1906    comp.parallel
(803)656-5880.mabell
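A brief illustration of why "improper" optimization worried numerical programmers: floating-point addition is not associative, so an optimizer that reassociates a sum (legal for integers) can change the computed result away from what the error analysis assumed. The specific constants below are just a convenient example.

```python
# Floating-point addition is not associative: reassociating a sum,
# as an aggressive optimizer might, can change the result.
lhs = (0.1 + 0.2) + 0.3   # evaluated left to right, as written
rhs = 0.1 + (0.2 + 0.3)   # the reassociated form

print(lhs == rhs)         # False on IEEE 754 doubles
print(lhs, rhs)
```

This is why people wanted compilers to evaluate numerical expressions exactly as written, even at some cost in speed.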