[comp.compilers] Incremental Compilers

charlie@genrad.com (Charlie D. Havener) (01/25/89)

I would like to stimulate some discussion in this group about incremental
compilers and fine-grained interpreters for block structured languages. Fine
grained means it incrementally changes the compiled or interpreted code ( like
reverse polish ) on a line-by-line basis instead of a file basis.
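A minimal sketch of what such line-by-line recompilation might look like (the
names here are illustrative, not from any real product): each source line
compiles to its own reverse-polish sequence, and editing a line recompiles
only that line's entry, leaving the rest of the program's code untouched.

```python
# Illustrative sketch: per-line incremental compilation to reverse polish.
# Handles plain binary operators without parentheses, for brevity.

def to_rpn(tokens):
    """Shunting-yard: infix token list -> reverse-polish token list."""
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    out, ops = [], []
    for t in tokens:
        if t in prec:
            # pop operators of equal or higher precedence (left-assoc)
            while ops and prec.get(ops[-1], 0) >= prec[t]:
                out.append(ops.pop())
            ops.append(t)
        else:
            out.append(t)          # operand goes straight to output
    while ops:
        out.append(ops.pop())
    return out

class LineStore:
    """Maps line number -> compiled RPN; an edit touches one entry only."""
    def __init__(self):
        self.code = {}
    def edit(self, lineno, source):
        self.code[lineno] = to_rpn(source.split())

store = LineStore()
store.edit(10, "a + b * c")
store.edit(20, "a - b")
store.edit(10, "a * b + c")        # only line 10 is recompiled
```

The point of the sketch is the data structure, not the parser: because
compiled code is keyed per line, the edit-recompile cycle is proportional to
the size of the change, not the size of the file.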

I have just acquired the new book "The Synthesizer Generator" by T. Reps and
T. Teitelbaum from Springer-Verlag ( 1-800-SPRINGER $39 ) which describes a
tool that uses attribute grammar algorithms to maintain an abstract syntax
tree as changes are made to the source code. We have the tool on order ( $200
from Cornell Univ. 607-255-7573 ) and I am looking forward to trying it out.
There is a new release due out in February. It would be most interesting to
hear of any experiences pro or con in using this tool for developing an
incremental interpreter.

There seems to be a general lack of good literature on the issues that one
must solve when developing an incremental compiler. Most texts are very much
batch-compiler oriented. I enjoyed "Writing Interactive Compilers and
Interpreters" by Brown ( Wiley 1979 ) but it was mostly common sense. "The
Synthesizer Generator" book has a good bibliography but doesn't list any texts
specifically on interactive compilers, just some papers.

I know the Lisp compiler on the Symbolics workstation is incremental. I have
heard that Microsoft's QuickBasic is also really an incremental compiler to
machine code ( is this true? ), and I have seen ads for one MS-DOS C compiler
that is fine-grained incremental but I forget the name. The Safe-C Interpreter
from Catalytix is really a file-oriented interpreter. You have to edit the
file and read it in again. I think the Saber-C environment for Sun
workstations is the same. Does anyone know of other examples of fine-grained
incremental compilers or interpreters?
--
Charlie Havener - GenRad Inc. 300 Baker Ave, MS 1A, Concord Mass. 01742
..genrad!condor!charlie     charlie@condor.genrad.COM
--
Send compilers articles to ima!compilers or, in a pinch, to Levine@YALE.EDU
Plausible paths are { decvax | harvard | yale | bbn}!ima
Please send responses to the originator of the message -- I cannot forward
mail accidentally sent back to compilers.  Meta-mail to ima!compilers-request

pf@cs.utexas.edu (Paul Fuqua) (01/25/89)

    Date: Tuesday, January 24, 1989  4:12pm (CST)
    From: charlie at genrad.com (Charlie D. Havener)
    Subject: Incremental compilers
    
    I know the Lisp compiler on the Symbolics work station is incremental ...

Incremental compilation on a Symbolics (or TI Explorer, or LMI Lambda)
is made easier by using dynamic linking all the time, and by the fact
that a compiled function is a first-class Lisp object like any other.

To omit lots of details, the name of a function contains a pointer to
the function itself.  When you (re)compile a function, just replace the
pointer with the new one.  Other functions that call this one indirect
through the name, so nobody has to be relinked to see the new
definition.
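A rough sketch of that indirection, in Python rather than Lisp (the table and
function names are invented for illustration): every call goes through the
name table, so replacing one entry effectively relinks every caller at once.

```python
# Sketch of call-through-the-name indirection. In a Lisp system the
# "cell" lives inside the symbol; here a plain dict stands in for it.

function_cell = {}                     # name -> current compiled function

def defun(name, fn):
    function_cell[name] = fn           # (re)compiling just swaps the pointer

def call(name, *args):
    return function_cell[name](*args)  # callers indirect through the name

defun('double', lambda x: x + x)
defun('quad', lambda x: call('double', call('double', x)))

# Redefine 'double': 'quad' was never relinked, yet it sees the new
# definition immediately, because it calls through the name.
defun('double', lambda x: 2 * x + 1)
```

The cost is one extra indirection per call; the payoff is that recompiling a
function is a single pointer store, with no relinking pass over its callers.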

This handles function-by-function incremental compilation, not
line-by-line, but I don't think you can, or would want to, compile at
any finer grain than functions.

I think that Saber does the same sort of name-to-function association
and indirection in their interpreter, but I don't know for sure.

Paul Fuqua                     pf@csc.ti.com
                               {smu,texsun,cs.utexas.edu,rice}!ti-csl!pf
Texas Instruments Computer Science Center
PO Box 655474 MS 238, Dallas, Texas 75265

rogerson@PEDEV.Columbia.NCR.COM (Dale Rogerson) (01/27/89)

I believe that in QuickBasic the incremental compiler compiles to p-code which
is then interpreted. From what I understand (I have an old version which is
not really incremental) the compiler checks the lines as you type them.
The p-code is based on token-threaded byte codes.
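The internal details of QuickBasic's p-code aren't given here, but
token-threaded code in general can be sketched as follows (a Python stand-in
with invented opcode names): each token in the code stream *is* its handler,
so dispatch is nothing more than an indirect call through the token.

```python
# Sketch of a token-threaded p-code interpreter: opcodes in the code
# stream are the handler functions themselves, interleaved with operands.

def op_push(vm):
    vm.stack.append(vm.code[vm.pc])    # next slot is an inline operand
    vm.pc += 1

def op_add(vm):
    b, a = vm.stack.pop(), vm.stack.pop()
    vm.stack.append(a + b)

def op_mul(vm):
    b, a = vm.stack.pop(), vm.stack.pop()
    vm.stack.append(a * b)

def op_halt(vm):
    vm.running = False

class VM:
    def __init__(self, code):
        self.code, self.pc = code, 0
        self.stack, self.running = [], True
    def run(self):
        while self.running:
            token = self.code[self.pc]
            self.pc += 1
            token(self)                # threaded dispatch: call the token
        return self.stack[-1]

# p-code for (2 + 3) * 4
program = [op_push, 2, op_push, 3, op_add, op_push, 4, op_mul, op_halt]
```

For incremental use, the attraction is that a recompiled line is just a fresh
slice of tokens spliced into the stream; the dispatch loop never changes.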

The "perfect" package would allow integration of pre-compiled modules, and the
environment would consist of a Turbo/CodeView-type debugging environment from
which you not only debug, but you CHANGE variables and functions at the source
level and continue to run. Then after you have finished debugging you perform
a final compile to machine code. The neat thing about this method is that the
final compile only needs to be done once; therefore, it does not have to be
fast, i.e. it can really optimize!

Austin Code Works has an interactive C interpreter with source code. This
might be a good place to start some projects.

-----Dale Rogerson-----

nick@lfcs.ed.ac.uk (Nick Rothwell) (01/27/89)

Charlie D. Havener (charlie@genrad.com) writes:
>I would like to stimulate some discussion in this group about incremental
>compilers and fine grained interpreters for block structured languages. Fine
>grain means it incrementally changes the compiled or interpretive code ( like
>reverse polish ) on a line by line basis instead of a file basis.
>...
>Does anyone know of other examples of fine grained
>incremental compilers or interpreters?

I presume that an interpreter for a (block structured) expression-based
language would fit your criterion - or are you thinking of a situation
where *a part* of a complete object (say, one line of a procedure) can
be compiled and replaced in isolation?
   Standard ML does incremental compilation (as do many interactive
systems) - expressions/declarations are compiled into functions which
the compiler can then call like any other function, to evaluate them.
Interestingly, the code generator itself does *not* require any
nonstandard features of the language or host system: it is simply a
function from lambda-expressions (an intermediate representation) to
strings, the string containing the correct machine opcodes. There's a
simple bootstrap call to turn a string into a function with that
string's contents as code. ML is statically scoped, so there isn't
really a concept of "replacement" of code - recompiled
functions/modules supersede existing ones for subsequent declarations;
the old declarations vanish by garbage collection as they disappear
from the symbol table.
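A loose sketch of that superseding behaviour (Python closures standing in for
compiled machine-code strings; the helper names are invented): each top-level
declaration is compiled against a snapshot of the current environment, so a
later rebinding shadows the name without disturbing code already compiled.

```python
# Sketch of ML-style static scoping at top level: "compiling" a
# declaration captures the environment as it stands at that moment.

env = {}                               # top level: name -> compiled value

def compile_decl(name, ir):
    """'Compile' an IR expression against a snapshot of the environment."""
    env[name] = ir(dict(env))          # the snapshot gives static scoping

# val inc = fn x => x + 1
compile_decl('inc', lambda e: lambda x: x + 1)
# val twice = fn x => inc (inc x)   -- captures the *current* 'inc'
compile_decl('twice', lambda e: lambda x: e['inc'](e['inc'](x)))

# Rebinding 'inc' supersedes it for subsequent declarations only;
# 'twice' keeps the old binding until nothing references it, at which
# point the garbage collector reclaims it.
compile_decl('inc', lambda e: lambda x: x + 100)
```

Note the contrast with the Lisp-style indirection discussed earlier in the
thread: there, existing callers see a redefinition immediately; here, they
deliberately do not.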
   Perhaps this is veering from your original subject area. I'll gladly
go into further details if there's interest.

		Nick.
--
Nick Rothwell,	Laboratory for Foundations of Computer Science, Edinburgh.
		nick@lfcs.ed.ac.uk    <Atlantic Ocean>!mcvax!ukc!lfcs!nick