[comp.arch] "interprocedural analysis useless f

gwp@hcx3.SSD.HARRIS.COM (10/18/88)

Written 11:47 am  Oct 15, 1988 by mac3n@babbage.acc.virginia.edu
> Perhaps the conclusion here is that language design up front simplifies
> optimization in back.  What does this tell us about universal intermediate
> languages?

Not much, other than the fact that any language system containing as
one of its steps a universal intermediate form will not lend itself
to any sort of sophisticated optimization.  But you knew that.  I
thought the idea behind a machine-independent intermediate language
was to sacrifice efficiency for portability.  This idea makes sense
on a large scale because machine cycle costs are always decreasing
while human engineering (if you call porting engineering) costs are
always increasing.

Gil Pilz   -=|*|=-   Harris Computer Systems   -=|*|=-   gwp@ssd.harris.com

mac3n@babbage.acc.virginia.edu (Alex Colvin) (10/20/88)

> > Perhaps the conclusion here is that language design up front simplifies
> > optimization in back.  What does this tell us about universal intermediate
> > languages?
> 
> Not much,

Actually, I was thinking of machine-dependent, but language-independent ILs.
Does using a C back end mean losing a lot of aliasing knowledge?  How about
things like display pointers?
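
To make the aliasing question concrete, here is a sketch (the routine
and its Fortran original are hypothetical): a Fortran front end may
assume that dummy array arguments do not overlap, but once the code is
emitted as C, the back end sees only bare pointers and must assume a
store through one may change what the other points to, so it cannot
reorder loads of X past stores to Y or safely vectorize the loop.

    /* Hypothetical C emitted by a Fortran front end for
     *   SUBROUTINE AXPY(N, A, X, Y)
     * Fortran lets the compiler assume X and Y don't alias; expressed
     * as C pointers, that guarantee is lost, and the C back end must
     * keep the loads and stores in strict order. */
    void axpy(int n, double a, double *x, double *y)
    {
        int i;
        for (i = 0; i < n; i++)
            y[i] = y[i] + a * x[i];  /* possible x/y overlap */
    }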

bcase@cup.portal.com (Brian bcase Case) (10/21/88)

>> Perhaps the conclusion here is that language design up front simplifies
>> optimization in back.  What does this tell us about universal intermediate
>> languages?
>Not much, other than the fact that any language system containing as
>one of its steps a universal intermediate form will not lend itself
>to any sort of sophisticated optimization.  But you knew that.  I

Er, I totally disagree.  The U-code experience from Stanford and LLL
and the Mahler experiment at DEC prove otherwise.

>thought the idea behind a machine-independent intermediate language
>was to sacrifice efficiency for portability.  This idea makes sense

Well, that wasn't my reason for liking it.

>on a large scale because machine cycle costs are always decreasing
>while human engineering (if you call porting engineering) costs are
>always increasing.

The idea behind MIILs, at least as I conceived of them during my
experimentation with re-compiling a binary for machine X so that it
will run on machine Y, is to isolate the application from the processor
architecture!  Why don't we have RISC-based Macs and IBM PCs?  There
are at least a couple of RISC chips, the ARM for example, that blow away the
68020 and the '386 for less money!  The reason is that the software
won't run, of course.
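
As a sketch of what I mean (all names and mnemonics below are made up
for illustration, not any real product): the application ships as
machine-independent operations, and each machine installs a back end
that lowers them to its own instructions.  Moving to a faster machine
then means installing a new back end, not recompiling from source.

    /* Minimal MIIL sketch: one IL program, two interchangeable back
     * ends.  The emitted mnemonics are only illustrative. */
    #include <stdio.h>

    typedef enum { IL_LOAD, IL_ADD, IL_STORE } il_op;

    static void lower_68020(il_op op)      /* back end for a 68020 box */
    {
        switch (op) {
        case IL_LOAD:  puts("move.l (a0)+,d0"); break;
        case IL_ADD:   puts("add.l  d1,d0");    break;
        case IL_STORE: puts("move.l d0,(a1)+"); break;
        }
    }

    static void lower_arm(il_op op)        /* back end for an ARM box */
    {
        switch (op) {
        case IL_LOAD:  puts("ldr r0,[r1],#4"); break;
        case IL_ADD:   puts("add r0,r0,r2");   break;
        case IL_STORE: puts("str r0,[r3],#4"); break;
        }
    }

    int main(void)
    {
        il_op prog[] = { IL_LOAD, IL_ADD, IL_STORE };  /* the "binary" */
        void (*backend)(il_op) = lower_arm;  /* whichever is installed */
        int i;
        for (i = 0; i < (int)(sizeof prog / sizeof prog[0]); i++)
            backend(prog[i]);
        return 0;
    }

The same prog array runs on either machine; only the installed
lowering function differs.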

Once again:  We don't have to have only *one* MIIL.  We can easily
manage to have a few, say one for procedural languages like FORTRAN,
Pascal, and C, another for object-oriented languages, and, and, well,
that's all I can think of right now.  But the point is that it makes
more sense to have a few MIIL back ends on your machine if it allows
you to move to platform Z when you need more performance.  This can
be done, and I think end users (which I am now!  I have to actually
*buy* software!) are being shortchanged.  They are getting software
that won't run on the next generation of machines, which is going to
be incompatible, and less-than-state-of-the-art technology, because
Apple has to stick with the 68000 and the PC world must have its
segments.  We could pull the IBM/Amdahl trick: increase performance
simply by installing a better back end.  (IBM and Amdahl used to
increase performance by changing the clock frequency of the processor:
"well, it's done; that'll be $1.3 million, please."  I still remember
when someone at UofI increased the performance of our VAX 780s by 20%:
he enabled the margin-testing mode of the processor.  Unfortunately,
all the clocks ran 20% faster too!  "Jeez, it's midnight already?")