[comp.compilers] Compilers 2000

jsp@milton.u.washington.edu (Jeff Prothero) (09/29/90)

comp.arch has no shortage of prognostications as to what is Right and Wrong
with hardware today, and what hardware will be like in ten or twenty years.
Anyone (John? Preston? Peter? ...) want to offer comments on what is Right
and Wrong with the compiler field today, and what compilers will look like
ten or twenty years from now?

Possible threads:

Problems posed by coming architectures.

Opportunities posed by coming architectures.

How mature are the theoretical foundations of the compiler field?

Will compiler papers ever reference computer algebra papers?

Can declarative languages ever displace imperative ones?  Should they?  Does
this primarily depend on the presence or absence of the needed compiler
technology?

 - Jeff Prothero   (jsp@milton.u.washington.edu)

preston@titan.rice.edu (Preston Briggs) (10/02/90)

Jeff Prothero <jsp@milton.u.washington.edu> writes:
>Anyone want to offer comments on what is Right
>and Wrong with the compiler field today, and what compilers will look like
>ten or twenty years from now?

It's easy to name wrong things.  We can start with mistakes I make,
and assume they generalize.

- Don't know enough of the literature.  This comes from reading too
shallowly and missing important sources.  (Some are hard to find, like
early POPL proceedings, but should be pursued.)


- This leads to massive wheel re-invention.  Cockiness comes into play
here too, along with shallow reading.

	"Those idiots.  They missed this obvious improvement!."

Well, sometimes they did, but usually it's an old rejected idea, probably
already documented somewhere else.  With thought, or experimentation, or
further reading, the reasons behind particular decisions often become
clear.  If not, write it up!


- Ignoring algorithmic efficiency in favor of simple implementation.  This
is tough, and I suppose we can make good arguments for simple and clean;
but the reality is that compilers have to deal with large programs, large
routines, large basic blocks, and complex expressions.  Nowadays we tend
to have large memories and fast machines, but that just means the linear
parts get less important (but should not be ignored!).  O(n^2) or worse is
still scary and a good deal of research effort goes into reducing
complexity.

(Taken another way:  It's possible to imagine all sorts of optimizations.
We can dream up things all day.  It's much more interesting to actually
perform them.)


- Too little testing, particularly with large test cases that would expose
the algorithmic problems (performance bugs).


- Too little documentation, including publishing good and bad ideas of all
sorts (presumably labeled "good idea" and "mistake").  How can we expect
each other to be well read if there's nothing to read?
We should be as proud of publishing a "mistake" (with an explanation
of why it's wrong) as we are of our latest brainstorm.


- Failing to study a compiler, in depth, for a long time, and to fix all
its bugs (correctness, performance, code quality).

A story I like about IBM...  (Perhaps from Scarborough and Kolsky.)  As
part of a project to enhance the VS Fortran compiler, IBM collected a
large test suite.  They compiled the suite and examined the code for
*every* inner loop and (by hand) wrote the best assembly code they could
for each loop.  Then they worked on their optimizer until it produced code
of equal quality for each loop.

----------------------------------------

For the future...

(Disclaimer: This is my pretty low-level and pretty narrow view.
I don't know enough to comment in other areas, and perhaps not this one.)

Good optimizing compilers will do dependence analysis (even for scalar
machines).  Dependence analysis tells us how the array accesses in a loop
relate to one another.  With it, you can get speedups of a factor of 3 on
today's workstations by keeping portions of arrays in registers.
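(A sketch of the kind of transformation meant here, not from the post:
scalar replacement of array references in C.  The routine and variable
names are made up for illustration.)

/* A plain loop computing b[i] = a[i] + a[i+1] would load both a[i]
 * and a[i+1] from memory on every iteration.  Dependence analysis
 * shows that a[i+1] in one iteration is a[i] in the next, so the
 * value can be carried in a register and only one load remains.
 */
void sum_pairs(int n, double a[], double b[])
{
    int i;
    double t, s;

    if (n < 2)
        return;
    t = a[0];
    for (i = 0; i < n - 1; i++) {
        s = a[i+1];
        b[i] = t + s;           /* b[i] = a[i] + a[i+1], one load */
        t = s;
    }
}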

Good compilers will worry about the entire memory hierarchy: disk, main
memory, one or more levels of cache, and registers.  The main tool will be
dependence analysis.
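(Again a hand-written sketch, not from the post: loop blocking (tiling)
of matrix multiply so that the tiles being reused stay in the cache.
The size N, the block size B, and the routine name are made up.)

#define N 512           /* problem size (assumed divisible by B) */
#define B  32           /* block size, tuned to the cache */

/* Blocked matrix multiply, a += b * c.  For a fixed (jj, kk) pair the
 * B-by-B tile of c is reused across all N values of i while it is
 * still cache-resident, instead of being refetched from main memory
 * on every pass over j.
 */
void matmul_blocked(double a[N][N], double b[N][N], double c[N][N])
{
    int i, j, k, jj, kk;

    for (jj = 0; jj < N; jj += B)
        for (kk = 0; kk < N; kk += B)
            for (i = 0; i < N; i++)
                for (k = kk; k < kk + B; k++)
                    for (j = jj; j < jj + B; j++)
                        a[i][j] += b[i][k] * c[k][j];
}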

As compilers improve, architectures will evolve toward what the compiler
can handle.  RISC is the easy example and VLIW another, but dependence
analysis will change things again.

-- 
Preston Briggs				looking for the great leap forward
preston@titan.rice.edu

rfg@ncd.com (Ron Guilmette) (10/06/90)

In article <9009290105.AA20332@milton.u.washington.edu> Jeff Prothero <jsp@milton.u.washington.edu> writes:
>Anyone (John? Preston? Peter? ...) want to offer comments on what is Right
>and Wrong with the compiler field today, and what compilers will look like
>ten or twenty years from now?

My 2 cents worth of prognostication:

	VLIW, industrial-strength instruction scheduling, and industrial-
	strength alias analysis will be commonplace in 20 years.

	Incremental compilation `while-u-type' will be commonplace 20 years
	from now.  Thus, the issue of compilation speed will be mostly
	moot because by the time you stop editing, your changes will have
	already been compiled.  Impact: fewer coffee breaks. :-)
-- 

// Ron Guilmette  -  C++ Entomologist
// Internet: rfg@ncd.com      uucp: ...uunet!lupine!rfg

stt@inmet.inmet.com (Tucker Taft) (10/07/90)

Re: Compilers in the year 2000

Better optimization would of course be nice, but I think the real leverage
will come from more kinds of static analysis performed at compile time.
Programmers should be able to augment their programs with both global and
local assertions/constraints of various sorts, and the compiler should do
its best to identify all violations of these assertions/constraints at
compile time.  This should be true whether the compiler operates
incrementally or in batch mode.

I think we are still a long way from being able to feasibly prove a large
program to be "correct," but it is still extremely useful and instructive
to be able to establish program-wide statically-checkable constraints.
Having the compiler check these constraints can go a long way toward early
detection of errors.
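(A crude approximation of the idea in today's C, not anything from the
post: a macro that turns a statically checkable constraint into a
compile-time error via an illegal negative array size.  The macro name
and the example struct are invented for illustration.)

/* Force a compile-time error when a constant condition is false:
 * a false condition produces a typedef of an array of size -1,
 * which the compiler must reject.
 */
#define COMPILE_TIME_ASSERT(name, cond) \
        typedef char name[(cond) ? 1 : -1]

struct packet_header {
    unsigned char  type;
    unsigned char  flags;
    unsigned short length;
};

/* The constraint: the wire-format header must stay exactly 4 bytes. */
COMPILE_TIME_ASSERT(header_is_4_bytes, sizeof(struct packet_header) == 4);

A real facility would let such constraints range over the whole program,
not just over what a single translation unit can see.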

An even more general approach is to have an entire sub-language for
compile-time analysis.  It would be extremely powerful to be able to write
essentially an arbitrary amount of code which would be executed
(interpreted) at compile time as the compiler scans the program tree.
"Constraints" can then be seen as a special case of this, corresponding to
compile-time code which is the equivalent of: "if not <boolean-expression>
then Report_Error."

Obviously, compile-time code must depend on and manipulate only attributes
known at compile time.  These would generally include the values of
compile-time constants, the types of expressions, and so on, but could be
generalized to arbitrary user-defined compile-time attributes.
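(A hypothetical sketch of what such user-written compile-time code might
look like; the tree-node type, the node kinds, and Report_Error are
invented here and are not part of any existing compiler interface.)

#include <string.h>

enum node_kind { N_CALL, N_ASSIGN, N_LOOP /* ... */ };

struct node {
    enum node_kind  kind;
    const char     *callee;     /* name of called routine, for N_CALL */
    int             line;
};

/* Supplied by the compiler to the user's compile-time code. */
extern void Report_Error(int line, const char *msg);

/* A user-written constraint, interpreted by the compiler at each node
 * as it scans the program tree: outside the "allocator" module, no
 * routine may call malloc directly.
 */
void check_node(const struct node *n, const char *module_name)
{
    if (n->kind == N_CALL
        && strcmp(n->callee, "malloc") == 0
        && strcmp(module_name, "allocator") != 0)
        Report_Error(n->line,
                     "direct call to malloc outside the allocator module");
}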

S. Tucker Taft     stt@inmet.inmet.com    uunet!inmet!stt
Intermetrics, Inc.
Cambridge, MA  02138
-- 
Send compilers articles to compilers@esegue.segue.boston.ma.us
{ima | spdcc | world}!esegue.  Meta-mail to compilers-request@esegue.