krulwich@ils.nwu.edu (Bruce Krulwich) (02/19/91)
[A discussion was started about tail call optimization affecting the
"correctness" of a program.  It was pointed out that tail call
optimization doesn't affect what programs will run, only how efficient
they will be.  In response to this...]

hanche@imf (Harald Hanche-Olsen) writes:
>On a machine with infinite memory, optimizing tail recursion does not
>change the meaning of programs.  However, in Scheme you can write a
>loop using a tail call and the loop can execute millions of times
>without running out of memory if it can do it once.  I'd say that
>whether or not the program runs out of memory changes its meaning,
>whether the semantics say so or not ...

In Scheme you can write a program that uses CONS that will run out of
memory on a machine with a 5-byte heap.  Does that mean that no program
that uses CONS can be assumed to work across Scheme implementations?
Also, in Scheme it is possible to write a recursive (non-tail) function
that will run out of memory on some implementations.  Does that mean
that any recursive (non-tail) function in Scheme doesn't have the same
meaning across implementations?

The R3RS semantics (and presumably the more recent ones) include the
notion that CONS may run out of memory, and it seems that the other
relevant areas do too (e.g., tievals), although I'm not comfortable
enough with the semantics notation to say for sure.  Given these
semantics, the "meaning" of programs seems to be constant across
machines with varying amounts of memory.  The same notion should apply
to implementations that allocate a stack frame for every call.

The conclusion I draw from this is that the handling of tail recursion
is an efficiency issue, not a program correctness/validity issue.  This
is not to say, of course, that Scheme implementations aren't required to
be properly tail-recursive, or that tail-call optimization isn't a
virtually necessary aspect of a Scheme or Lisp system from a pragmatic
point of view.
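[The contrast Krulwich draws can be made concrete with a sketch (an
editorial addition, not from the original post; the names count-down
and count-up are invented for illustration):]

```scheme
;; Tail-recursive loop: the self-call is the last thing the procedure
;; does, so a properly tail-recursive implementation runs it in
;; constant stack space no matter how large n is.
(define (count-down n)
  (if (zero? n)
      'done
      (count-down (- n 1))))

;; Non-tail-recursive loop: the pending (+ 1 ...) must be remembered
;; for every call, so space grows with n on ANY implementation --
;; analogous to a program whose CONSes outgrow the heap.
(define (count-up n)
  (if (zero? n)
      0
      (+ 1 (count-up (- n 1)))))
```

[Both work for small n anywhere; only count-down is guaranteed by the
report to survive arbitrarily large n in bounded space.]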
Bruce Krulwich Institute for the Learning Sciences
guttman@mitre.org (Joshua D. Guttman) (02/19/91)
If I'm reading the Scheme formal semantics aright, one unappealing,
maybe downright wrong, aspect is that it definitely asserts that
procedure call (in and of itself) can never lead to memory exhaustion.
Thus, a program like:

    (define (laughable) (1+ (laughable)))

should run forever without error on any "correct" implementation, even
tho' it is *not* tail-recursive.

I myself would advocate taking out all of the memory exhaustion tests
from where they are in the current formal semantics, and treating the
whole issue of the finiteness of memory at a lower level in modeling
the formal behavior of implementations.

So I suppose I would advocate saying that proper tail recursion should
not change the mathematical *semantics* of a program, tho' it certainly
changes something else: call it, perhaps, the pragmatics of the
program.

	Josh
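[For contrast, an editorial sketch not from Guttman's post: the pending
1+ is exactly what keeps laughable from being tail-recursive, while a
variant with no pending work is one the report requires to loop
indefinitely:]

```scheme
;; Not tail-recursive: each call leaves a pending (1+ ...) frame, so a
;; real implementation must eventually exhaust memory -- yet the formal
;; semantics, as read above, says it diverges without error.
(define (laughable) (1+ (laughable)))

;; Properly tail-recursive: no work is pending after the self-call, so
;; the report requires this to run forever in bounded space.
(define (not-laughable) (not-laughable))
```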
carlton@husc10.harvard.edu (david carlton) (02/19/91)
In article <908@anaxagoras.ils.nwu.edu> krulwich@ils.nwu.edu (Bruce Krulwich) writes:

> In Scheme you can write a program that uses CONS that will run out of
> memory on a machine with a 5-byte heap.  Does that mean that no
> program that uses CONS can be assumed to work across Scheme
> implementations?  Also, in Scheme it is possible to write a recursive
> (non-tail) function that will run out of memory on some
> implementations.  Does that mean that any recursive (non-tail)
> function in Scheme doesn't have the same meaning across
> implementations?

[ he then goes on to talk more about how tail recursion isn't a
semantic difference. ]

Hmmm.  I'm not sure that i agree with this, even though i made a post a
few days ago saying something like it.  After some discussion in
e-mail, i think that there is a semantic difference that it is
profitable to make: there exist programs that will run on Scheme
implementations with sufficiently large amounts of memory that will not
run on an implementation that does not optimize tail calls into jumps.
While i agree with bruce that merely lowering the amount of memory
necessary for a program to run correctly isn't semantically important,
i do think that changing a program from something that will always run
out of memory to something that may not run out of memory is important.
The simplest example of such a program is:

    (define forever
      (lambda () (forever)))

Of course, this isn't very interesting.  Another example might be

    (define wait-until-eof
      (lambda ()
        (if (eof-object? (read-char))
            #t
            (wait-until-eof))))

And while this function itself doesn't do anything useful, with only
slight modification you can turn it into a read-eval-print loop, say.
Note that for any given (finite) input, this program will run correctly
under either a tail-recursive or non-tail-recursive implementation; the
difference is that under a tail-recursive implementation there exists
an amount of memory which will be enough for any input, whereas under a
non-tail-recursive implementation for any input there exists an amount
of memory which will be enough, but that amount can be different for
different inputs.  Another example might be

    (define even?
      (lambda (num)
        (cond ((zero? num) #t)
              ((= num 1) #f)
              (else (even? (- num 2))))))

(and no, i wouldn't write even? that way myself, but again there are
useful programs which look like that.)

david carlton
carlton@husc9.harvard.edu

       "Parentheses in Scheme are like the petals of a cherry blossom;
        they come for a time, evaluate themselves, and are gone."
              -- Greg "Moonbeam Thru Prune" Travis
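[The "slight modification" carlton alludes to might look like the
following editorial sketch.  The prompt string is invented, error
handling is omitted, and the two-argument eval with
interaction-environment follows the later reports rather than R3RS, in
which eval's interface was implementation-dependent:]

```scheme
;; A minimal read-eval-print loop in the same tail-recursive shape as
;; wait-until-eof: the self-call is in tail position, so the loop can
;; process any amount of input in bounded stack space.
(define (repl)
  (display "> ")
  (let ((expr (read)))
    (if (eof-object? expr)
        'done
        (begin
          (write (eval expr (interaction-environment)))
          (newline)
          (repl)))))
```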
hanche@imf.unit.no (Harald Hanche-Olsen) (02/20/91)
In article <GUTTMAN.91Feb18154856@darjeeling.mitre.org> guttman@mitre.org (Joshua D. Guttman) writes:
So I suppose I would advocate saying that proper tail recursion should not
change the mathematical *semantics* of a program, tho' it certainly changes
something else: call it, perhaps, the pragmatics of the program.
So now I am sure we are all looking forward to seeing a formal
pragmatics section in R^4RS :-) Sure, I used the word semantics in an
informal way in my posting. It sparked an interesting discussion,
though.
- Harald Hanche-Olsen <hanche@imf.unit.no>
Division of Mathematical Sciences
The Norwegian Institute of Technology
N-7034 Trondheim, NORWAY