harrison@uicsrd.CSRD.UIUC.EDU (11/05/86)
Thanks, Sam, but unfortunately ram@spice is quite correct: I'm totally wrong. Sort of. Allow me to explain my perspective. (Part of the problem is that our site seems to receive and send notes two or three days behind the network at large - I'm forever responding to yesterday's news.)

I'm at work on a research compiler for Scheme, for a large multiprocessor. Early in the compilation process, the compiler performs an interprocedural analysis which, among other things, reveals which stack frames must be heap allocated and which can be stack allocated. This analysis is very costly, and is not, of course, something that a practical compiler could necessarily afford to do. But the result of the analysis is that, at run time, closures and continuation objects can be formed without any copying (most frames requiring only dynamic extent, the others being allocated on the heap in the first place). This is a very natural solution to the problem of forming continuations and closures, given the detailed interprocedural information available.

Not being familiar with other compilers, and in particular compilers for PCs, it never occurred to me that they would copy the entire stack to form continuations! (I assumed that quick compilers simply allocated stack frames on the heap.)

The next step in my reasoning was simple: since our compiler spends so much time analyzing recursive functions, and since a lot of effort is required to optimize a tail-recursive function (remember that, since we never copy bindings, we can't ignore the problems of re-using the stack frame; and after the function has been made into an explicit loop, the work has just begun), how is it that these PC implementations can do it so easily?! It sounded miraculous (and impossible). In reality it is simple: make copies of the stack, after making provision (such as a heap-allocated value cell) for variables which don't obey single assignment, or something to that effect.

Thanks to Dr. Baldwin and Dr. Clinger for hitting the nail right on the head. No, our compiler is not 'properly' tail recursive (I think that bindings must be copied to do that in general - am I right?)

/* Written 1:27 am Nov 1, 1986 by ram@spice.cs.cmu.edu in uicsrd:net.lang.lisp */
As was earlier mentioned by someone who knew what they were talking about, "call with current continuation" is implemented in a somewhat expensive way so that tail recursion still does work. Basically what you do is copy the entire stack onto the heap.
/* End of text from uicsrd:net.lang.lisp */

When I read this, my heart was filled with glee: one can do a lot of interprocedural analysis in the time that it takes to copy the entire stack into the heap (multiplied by the number of invocations of call/cc times the number of times the program is run). ;-)

Luddy Harrison / 217 333 4168 / harrison@uicsrd.csrd.uiuc.edu
Center for Supercomputing R&D / 302D Talbot Laboratory
104 South Wright Street / Urbana IL 61801-2932

p.s. What about this (for implementations which don't have the luxury of interprocedural analysis): provide the user with a variant of call/cc, say, call-with-dynamic-continuation, which amounts to a promise that the continuation need have only dynamic extent, so that no stack copying will be warranted by its creation?
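The heap-allocated value-cell trick mentioned above is, as I understand it, just assignment conversion: any variable that is the target of a set! is boxed in a heap cell, so every copy of the stack shares the one cell and copying cannot duplicate a mutable binding. A rough sketch of the transformation (the names here are mine, not those of any particular implementation):

```scheme
;; Source:
;;   (lambda (x) (set! x (+ x 1)) x)
;;
;; After assignment conversion, roughly:
(lambda (x)
  (let ((x-cell (cons x '())))          ; box the binding in the heap
    (set-car! x-cell (+ (car x-cell) 1)) ; set! becomes a write to the cell
    (car x-cell)))                       ; reads go through the cell
```

Since all reads and writes go through the cell, the stack frames themselves obey single assignment and may be copied freely by call/cc.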
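The call-with-dynamic-continuation suggested in the postscript amounts to an escape-only (upward) continuation. Its intended semantics can be sketched in terms of ordinary call/cc - though the whole point is that an implementation which trusted the promise would represent the continuation as a mark into the live stack rather than a stack copy. The names here are hypothetical:

```scheme
;; Hypothetical call-with-dynamic-continuation: the caller promises
;; that the continuation k is invoked, if at all, before f returns,
;; so no stack copying is warranted.  Semantically it behaves like
;; call/cc restricted to dynamic-extent (escaping) use:
(define (call-with-dynamic-continuation f)
  (call-with-current-continuation f))

;; Typical dynamic-extent use: early exit from a traversal.
(define (find-first pred lst)
  (call-with-dynamic-continuation
    (lambda (return)
      (for-each (lambda (x) (if (pred x) (return x))) lst)
      #f)))
```

Of course, nothing in this sketch enforces the promise; invoking such a continuation after its extent has ended would have to be an error.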