gat@robotics.jpl (Erann Gat) (02/13/91)
This is a (probably) vain plea to see call-with-current-continuation
included in Common Lisp. There are a number of reasons that call/cc
should be included.

1. It cannot be implemented in terms of existing CL constructs, so you
can't build it even if you want to. (You can build a downward-only
call/cc, but see the note below.)

2. It is not that hard to implement. Including call/cc would not make
compilers that much more complex. All that call/cc does is copy the
stack onto the heap.

3. Call/cc makes it INFINITELY less painful to write coroutines. For
example, here's a short piece of code that allows you to interleave the
execution of an arbitrary number of arbitrary functions. (This is
written in T, which allows continuations to take an arbitrary number of
arguments and return them as multiple values.)

(define (scheduler . fns)
  (set *queue* (append *queue* fns))
  (if (null? *queue*)
      (escape)
      (let ((fn (pop *queue*)))
        (fn)
        (scheduler))))

(define-syntax (define-coroutine name&args . body)
  `(define ,name&args
     (call/cc scheduler)
     ,@body))

(call/cc (lambda (esc) (define escape esc)))

There. Coroutines in ten lines of fairly transparent code, but it
requires upward continuations, so you can't do this sort of thing in CL
as it stands now.

BTW: T is NOT just another dialect of Scheme. It provides an object
primitive which cannot be implemented in pure Scheme. (You can come
very close, but you can't implement it exactly.) The T object
construct, besides providing an elegant vehicle for doing
object-oriented programming, also has the pleasant side effect of
elegantly solving the SETQ/SETF problem. It also lets you do things
like redefine a function to give it a different name and still have
(SET (new-name ...) ...) work. T also has a structure primitive which
lets you access all the slot names and accessor functions in a very
neat way. And it extends call/cc as described above.
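[For illustration, here is one way the scheduler above might be used.
The coroutine names and bodies are hypothetical, not part of the
original post; the mechanism shown (an explicit (call/cc scheduler) to
yield mid-body) follows from the definitions above.]

```lisp
;; Hypothetical usage of DEFINE-COROUTINE (T-style code).  Each
;; (call/cc scheduler) enqueues the current continuation and passes
;; control to the next function waiting in *queue*.
(define-coroutine (ping)
  (print 'ping)
  (call/cc scheduler)       ; yield; resumes here on a later turn
  (print 'ping-again))

(define-coroutine (pong)
  (print 'pong))

;; Interleaves the two bodies until the queue empties, then returns
;; through ESCAPE.
(scheduler ping pong)
```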
Unfortunately, no one is commercializing T, so it's hard to get support
or good development environments. So, if we can't have T, can't we at
least get call/cc with our CL? (And STYPES? And Objects? And... :-)

E.
barmar@think.com (Barry Margolin) (02/13/91)
In article <1991Feb12.233157.20820@elroy.jpl.nasa.gov> gat@robotics.jpl (Erann Gat) writes:
>This is a (probably) vain plea to see call-with-current-continuation
>included in common lisp. There are a number of reasons that call/cc
>should be included.
>
>1. It cannot be implemented in terms of existing CL constructs, so you
>can't build it even if you want to. You can build a downward-only call/cc,
>but see the note below.

I wasn't in the original Common Lisp design group. However, it seems
obvious that they explicitly decided against allowing upward closures
to capture control state. Had they not made this decision, call/cc
could be implemented in terms of closures or vice versa. I haven't
read the archives of the CL design email for about six years, so I
don't remember their justification, but I expect it was mostly along
the lines of "there isn't enough experience with it, and it doesn't
really seem necessary."

>2. It is not that hard to implement. Including call/cc would not make
>compilers that much more complex. All that call/cc does is copy the
>stack onto the heap.

There are some architectures where operations on the stack pointer
register are restricted, so you can't just set the stack pointer to
point to a heap-allocated activation record.

And what about the fact that call/cc makes unwind-protect more
complicated; don't you have to provide code that runs when re-entering
the environment? This means that the addition of upward control
closures would not be transparent to implementors of many macros (e.g.
WITH-OPEN-FILE would have to be changed to save its state in the
closure and reopen the file and seek to the saved place upon re-entry).

>3. Call/cc makes it INFINITELY less painful to write coroutines.

So does the Lisp Machine's stack group facility, and I expect some
other Common Lisp implementations provide similar mechanisms (Lucid
and Franz support multitasking within Lisp, in a manner very similar
to Symbolics's).
What if we were to add a coroutine facility, rather than a primitive
for building coroutines? See my previous posting, where I mentioned
that we prefer to define user-oriented facilities, rather than
implementor-oriented primitives. Additionally, by defining the
high-level facility rather than the low-level mechanism, we allow
implementors to tailor the implementation to the environment (e.g.
Lisp Machine coroutines would use the hardware-supported stack group
mechanism), rather than forcing the user to implement the facility
using suboptimal primitives.
--
Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar
jeff@aiai.ed.ac.uk (Jeff Dalton) (02/13/91)
In article <1991Feb12.233157.20820@elroy.jpl.nasa.gov> gat@robotics.jpl (Erann Gat) writes:
>3. Call/cc makes it INFINITELY less painful to write coroutines.

I would rather have coroutines (a.k.a. stack groups) directly, as
several CL implementations have done. I think call/cc is too general
and powerful (kind of like goto, only more so). It's right for a
language such as Scheme that tried to have a small number of powerful
constructs, but that doesn't make it right for other languages.

BTW, T is excellent.

-- jd
pcg@cs.aber.ac.uk (Piercarlo Grandi) (02/15/91)
On 13 Feb 91 05:59:38 GMT, barmar@think.com (Barry Margolin) said:

barmar> I wasn't in the original Common Lisp design group. However, it
barmar> seems obvious that they explicitly decided against allowing
barmar> upward closures to capture control state. Had they not made
barmar> this decision, call/cc could be implemented in terms of closures
barmar> or vice versa.

This would be a misdesign. The point about call/cc is that it allows
you to actualize an arbitrary control graph, instead of a control tree
linearized onto a stack (the usual case) or onto multiple stacks
(coroutines as stack groups).

Closures are completely orthogonal to continuations; closures allow
you to actualize an arbitrary context graph, instead of a context tree
linearized onto a stack or multiple stacks. Incidentally, fully
general closures also obviate the need for packages.

barmar> I haven't read the archives of the CL design email for about six
barmar> years, so I don't remember their justification, but I expect it
barmar> was mostly along the lines of "there isn't enough experience
barmar> with it, and it doesn't really seem necessary."

This would be really surprising. The technology for arbitrary control
and context graphs has been known for a long time (twenty-five years
at least, and it has been fifteen years since the more or less
definitive works of Baker and Steele). Closures and continuations
have also been extensively used in the AI community, under various
guises. It can be argued that objects and actors and frames are just
some of these guises.

As to being necessary, both are simply indispensable; you need both
arbitrary context and control graphs in all algorithms that deal with
non-trivial graph walking, or equivalents. Many programmers do not
realize this; they keep reimplementing closures and continuations
"manually", simulating closures usually with bunches of global
variables and continuations with state machines or "markers".
barmar> And what about the fact that call/cc makes unwind-protect more
barmar> complicated; don't you have to provide code that runs when
barmar> re-entering the environment?

Of course! When you have a control graph and a context graph, you
want fully general entry and exit functions, not just exit functions.
Actually you want them in general, except that they are hidden as
"initialization" code. Fully general entry and exit functions are
incredibly useful abstractions, because they allow clean and flexible
modularization and "advising" of code. I remember having read (either
in Allen's book or in Baker's paper on shallow binding) that the only
known language with fully general entry and exit functions is TECO, of
all things :-).

barmar> What if we were to add a coroutine facility, rather than a
barmar> primitive for building coroutines? See my previous posting,
barmar> where I mentioned that we prefer to define user-oriented
barmar> facilities, rather than implementor-oriented primitives.

In other words you are saying that CL is by design a "programming"
language, like PL/1 or PERL, not an algorithmic language like Scheme
or C or the Bourne shell. This philosophy is one of the reasons why
CL is perceived as a jumble sale of features, a giant and monolithic
entity. Couple this with the weak or otherwise vastly underexploited
library facilities, and with the tradition of implementing Lisp (and,
even more regrettably, Scheme) in a workspace-based way, with little
interaction with other host system modules, and we have a fatal
combination.
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk
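[Editor's sketch, not from the thread: paired entry/exit functions of
exactly this kind are what Scheme systems provide as DYNAMIC-WIND. The
WITH-INPUT-FILE wrapper below is hypothetical; note that simply
reopening the file loses the read position, which is the complication
Barry raised about WITH-OPEN-FILE.]

```lisp
;; Scheme sketch.  DYNAMIC-WIND runs the first thunk every time
;; control enters the extent of the second thunk (including re-entry
;; via a continuation), and the third thunk every time control leaves
;; it -- i.e., fully general entry and exit functions.
(define (with-input-file path proc)
  (let ((port #f))
    (dynamic-wind
      (lambda () (set! port (open-input-file path)))  ; entry: (re)open
      (lambda () (proc port))                         ; body
      (lambda () (close-input-port port)))))          ; exit: close
```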
gat@forsight.jpl.nasa.gov (Erann Gat) (02/15/91)
In article <1991Feb13.055938.22853@Think.COM> barmar@think.com (Barry Margolin) writes:
>What if we were to add a coroutine facility, rather than a primitive for
>building coroutines? See my previous posting, where I mentioned that we
>prefer to define user-oriented facilities, rather than implementor-oriented
>primitives.

Common Lisp tries to be 1) powerful, 2) portable, and 3) efficient.
The trouble is that these three goals are often in conflict.
Portability is usually served best by a small language with a few
well-defined primitives that everything else is built on top of.
Efficiency is usually served best by having a large number of less
general constructs which can make lots of assumptions about the way
they will be used. Power (e.g. lots of data types and built-in
procedures to compute with them) is often at odds with efficiency.
(Canonical examples of languages which cater to portability,
efficiency, and power are Scheme, Fortran, and Ada respectively.)

CL has actually gone above and beyond the call for being all things to
all people. However, its philosophy of catering to users rather than
implementors leads to one extremely annoying side effect. If the
thing you want to do just happens not to be one of the ten zillion
things that CL does, then you are just screwed. Examples of such
things appear in this group regularly: things like coroutines or
accessing the slot names of a structure. More often than not people
just write implementation-dependent hacks to do what they want.
("Well," says Joe Programmer, "type-of returns the right thing on this
machine, and so I'll go ahead and use it in this non-portable way and
fix it up later.")

What I do not understand is why you can't do both: first start with a
set of primitives. Define the hell out of them so everyone agrees
what they do. Then build all the user features on top of the
primitives. It seems to me that this gives you the best of all
possible worlds.
Because the whole language is based on a small set of primitives, you
can have complete implementations that consist of a small kernel and a
huge (maybe PD) library of the user features (in source-code form).
On the other hand, nothing prevents a manufacturer from implementing a
compiler to optimize all the user features into tight code.
Portability is enhanced because if some implementation has a bug, that
feature can simply be redefined (albeit less efficiently) in terms of
the primitives.

Therefore, the answer to the original question is that I would rather
have call/cc than coroutines. Tomorrow I might want to build engines
(a really cool construct that somehow got lost in the shuffle). If I
have call/cc I can do it myself. If I don't have call/cc then I have
no recourse other than to go begging to LISP manufacturers or write my
own compiler. (Ah! Maybe I have answered my own question there!)

E.
barmar@think.com (Barry Margolin) (02/15/91)
In article <PCG.91Feb14184024@odin.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
>Closures are completely orthogonal to continuations; closures allow you
>to actualize an arbitrary context graph, instead of a context tree
>linearized onto a stack or multiple stacks.

All I know is that I've implemented call/cc using Common Lisp
closures. It looks something like this:

(defun call/cc (function)
  (funcall function
           #'(lambda (x) (return-from call/cc x))))

It would even work if Common Lisp allowed returning from a block
multiple times; the implementation of closures would have to include
continuations.

>                             Incidentally, fully general
>closures also obviate the need for packages.

I've seen all the arguments to this effect, and remain unconvinced.
Common Lisp packages can be used for other things besides bindings.
Symbols are often used as data objects, e.g. as indicators on property
lists, and packages can be used to prevent these from colliding.

>barmar> I haven't read the archives of the CL design email for about six
>barmar> years, so I don't remember their justification, but I expect it
>barmar> was mostly along the lines of "there isn't enough experience
>barmar> with it, and it doesn't really seem necessary."
>
>This would be really surprising. The technology for arbitrary control
>and context graphs has been known for a long time (twenty five years at
>least, and it has been fifteen years since the more or less definitive
>works of Baker and Steele). Closures and continuations have also been
>extensively used in the AI community, under various guises. It can be
>argued that objects and actors and frames are just some of these guises.

And in the mid-80's, when Common Lisp was being designed, all these
were still hot *research* topics. They hadn't yet passed into common
day-to-day use by Lisp programmers. None of MacLisp, Lisp Machine
Lisp, nor Interlisp had them.
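[Editor's demonstration, not Barry's, of why this version is
downward-only: the escape procedure is valid only during the dynamic
extent of the CALL/CC call, because RETURN-FROM cannot re-enter a
block that has already been exited.]

```lisp
;; Common Lisp, using the CALL/CC definition above.  Invoking the
;; escape procedure while the block is still active behaves like
;; Scheme's call/cc:
(call/cc #'(lambda (k) (+ 1 (funcall k 42))))   ; returns 42, not 43

;; But saving the escape procedure and invoking it after CALL/CC has
;; returned is an "upward" use, and it signals an error: the block's
;; dynamic extent has ended.
(let ((saved nil))
  (call/cc #'(lambda (k) (setq saved k)))
  (funcall saved 'too-late))                    ; error: exited block
```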
Yes, I realize that this is also true of lexical closures, but I guess
these were felt to be so obviously right and relatively easy to
implement that they were added.

>In other words you are saying that CL is by design a "programming"
>language, like PL/1 or PERL, not an algorithmic language like Scheme or
>C or the Bourne shell.

You've got it. The charter of X3J13 is to produce a specification for
an industrial-strength Lisp dialect. Usability for research into
programming paradigms is not a goal of the Common Lisp project; it
*is* a high priority of the Scheme designers. Different strokes for
different folks.
--
Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar
barmar@think.com (Barry Margolin) (02/15/91)
In article <1991Feb14.223202.11475@elroy.jpl.nasa.gov> gat@forsight.jpl.nasa.gov (Erann Gat) writes:
>What I do not understand is why you can't do both: first start with a set
>of primitives. Define the hell out of them so everyone agrees what they do.

The Common Lisp designers didn't have the luxury of starting from
first principles. Read the history chapter of CLtL. The goal of
Common Lisp was to unify many of the Lisp dialects that were
proliferating by the early 80's. They needed to produce something
relatively quickly, or users would get too dependent on their
incompatible dialects. A reasonable amount of compatibility with
existing dialects, or at least the ability to coexist with them and/or
automatically translate from them to CL, was also important. These
goals all had higher priority than producing a conceptually beautiful
language design. The Scheme people were doing the "clean" Lisp; there
was no need to duplicate their work, since the goals were completely
different.
--
Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar
sfk@otter.hpl.hp.com (Steve Knight) (02/15/91)
It's interesting to relate the discussion of call/cc to the experience
of using "processes" in Poplog. (The "process" facilities in Poplog
are a dual of the Scheme call/cc facilities in that each can be used
to (fairly) directly implement the other.) Since the two facilities
are very closely related -- I used Poplog "processes" to implement
call/cc in my Scheme compiler -- the comparison is useful.

From the viewpoint of implementation, the key problem is dealing with
what in Poplog are called dynamically-localised constructs, which are
a small generalisation of specials and sys-unwind-protect. As Barry
points out in an earlier message:

    "And what about the fact that call/cc makes unwind-protect more
    complicated; don't you have to provide code that runs when
    re-entering the environment?"

The interaction between thread switching and dynamic localisation
really does complicate implementation, at least it does in Poplog.
The issue to be decided is that when a process-switch occurs, the
dynamic environment is exited temporarily (or permanently), and
dynamic localisations may or may not have to be reversed. Special
variables typically should be restored, and sys-unwind-protect forms
typically should be left alone.

The solution adopted in Poplog is that by default, dynamic
localisations are restored (i.e. put back to the state on entry) at
thread-switch. However, there is a simple, rarely used mechanism for
localisations to inspect the kind of switching going on and to make a
decision on the basis of that data. This is packaged up for
sys-unwind-protect.

A third strand to the overall solution that Poplog employs is "destroy
actions". In Poplog, it is possible to associate an action with an
object that is triggered when that object becomes eligible for
garbage-collection.
In particular, system resources such as file-handles or X-windows have
associated clean-up actions so that if they are "lost" they are
quietly shut down at the next garbage-collection (and actually
collected next time around). Furthermore, file-opening code can cause
a garbage-collection to happen if file-opening fails, in case the GC
can reclaim the existing open files. Destroy actions are a slightly
complicated topic -- which I've rather skimped on here to keep things
short -- but they play an important role in reducing the necessity for
sys-unwind-protect and other protective macros.

Barry continues:
> What if we were to add a coroutine facility, rather than a primitive for
> building coroutines? See my previous posting, where I mentioned that we
> prefer to define user-oriented facilities, rather than implementor-oriented
> primitives.

I certainly find programming using the Poplog "process" facilities to
be a great deal more intuitive than call/cc. I instinctively find
myself using call/cc to build a process-like layer. Since the two are
interchangeable in expressive power, I'm inclined to think that the
higher-level solution is more suitable for CL.

Steve Knight
HP Labs, Bristol
jeff@aiai.ed.ac.uk (Jeff Dalton) (02/15/91)
In article <PCG.91Feb14184024@odin.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
>barmar> I wasn't in the original Common Lisp design group. However, it
>barmar> seems obvious that they explicitly decided against allowing
>barmar> upward closures to capture control state. Had they not made
>barmar> this decision, call/cc could be implemented in terms of closures
>barmar> or vice versa.
>
>This would a be a misdesign. The point about call/cc is that allows you
>to actualize an arbitrary control graph, instead of a control tree
>linearized onto a stack (usual case) or on multiple stacks (coroutines
>as stack groups).
>
>Closures are completely orthogonal to continuations; closures allow you
>to actualize an arbitrary context graph, instead of a context tree
>linearized onto a stack or multiple stacks. Incidentally, fully general
>closures also obviate the need for packages.

I agree that closures and continuations should be regarded as
distinct. The spaghetti-stack view of things, where the access link
and the control link seem very similar, may be one reason for the
"closures subsume continuations" view.

However, fully general closures do not obviate the need for packages,
because (1) packages are also used for data structuring (i.e.,
controlling the potentially huge number of symbols), and (2) it's much
easier to make macros work reasonably with packages than with modules
that are based on closures / environments.

And let's not pretend that there are no costs to call/cc, only
benefits. That is simply false. I think the right question to ask is
whether a Lisp that has lexical closures and coroutines needs to go
further and add call/cc (and take out coroutines, or reimplement them
with call/cc). My view is that we have by then reached the point of
substantially diminishing returns. Moreover, the coroutines that one
can implement oneself with call/cc tend to be less usable than those
that are provided with the language.
>As to being necessary, both are simply indispensable; you need both
>arbitrary context and control graphs in all algorithms that deal with
>non trivial graph walking, or equivalents.
>
>Many programmers do not realize this, they keep reimplementing closures
>and continuations "manually", simulating closures usually with bunches
>of global variables and continuations with state machines or "markers".

There are much better "simulations", sometimes of both together. For
example, agendas are often used to implement control, and a typical
agenda would have entries that each contain a function and some
arguments to apply it to (a simulated closure, if you will). Other
better simulations for non-trivial graph-walking are streams (i.e.,
lazy lists) and explicit continuation functions (e.g., the caller
passes a function to be called on success).

>barmar> What if we were to add a coroutine facility, rather than a
>barmar> primitive for building coroutines? ... prefer to define
>barmar> user-oriented facilities, rather than implementor-oriented
>barmar> primitives.
>
>In other words you are saying that CL is by design a "programming"
>language, like PL/1 or PERL, not an algorithmic language like Scheme or
>C or the Bourne shell.

This is pretty amusing. CL is like PL/1 because it aims to be useful
for programming. I agree with Barmar about coroutines (though not
quite about the user/implementor distinction), and this is in part
because I think different things are appropriate for different
languages, and that more than one of these combinations can be good.

Some people seem to feel that there is essentially only one good way
to do things and that any language that does something else must
therefore be worse. These people are invited to define the One True
Language and then set about convincing everyone to use it.
The slightly more generous view, in which there is a class of good
languages (the "algorithmic languages", say) and a class of bad ones
(those that can be claimed to be like PL/1), has similar problems. In
particular it still fails to recognize that there can be more than one
good way of doing things, and so ends up regarding everyone who does
things the "bad" way as misinformed, misguided, or worse. Let several
language-design-philosophy flowers bloom, at least.

>This philosphy is one of the reasons why CL is perceived as a jumble
>sale of features, a giant and monolithic entity.

A giant and monolithic entity and a jumble sale of features seem
rather different to me, but I would agree that such perceptions often
go together.

I think that one of the most important reasons why CL is sometimes
perceived as a jumble sale is that it is mistakenly supposed that CL
was designed by taking every feature from every Lisp and putting it in
Common Lisp. But that isn't true. CL is a cleaner and simpler design
in many ways than, say, InterLisp; and many of the features that
appear most in complaints (loop, format, dotimes, multiple values)
were already used, in one form or another, in MacLisp.

Indeed, a fairly common view (at least in the UK) is that the scoping
rules for CL (where lexical/static and special/dynamic are both
supported) are as they are because of some union-of-features approach.
However, what they actually are is a cleanup and generalization of the
rules in MacLisp.

-- jd
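[Editor's sketch of the "explicit continuation functions" simulation
mentioned above: the caller passes a SUCCEED procedure and a FAIL
thunk, so backtracking search needs no call/cc at all. NEIGHBORS is
an assumed accessor, and the code assumes an acyclic graph; none of
this is from the posts themselves.]

```lisp
;; Scheme sketch: depth-first path search in explicit
;; continuation-passing style.  SUCCEED receives the path found plus a
;; RETRY thunk for asking for another answer; FAIL is called on dead
;; ends, which is what drives the backtracking.
(define (find-path graph from to succeed fail)
  (if (equal? from to)
      (succeed (list to) fail)
      (let try ((nbrs (neighbors graph from)))
        (if (null? nbrs)
            (fail)                          ; dead end: backtrack
            (find-path graph (car nbrs) to
                       (lambda (path retry)
                         (succeed (cons from path) retry))
                       (lambda () (try (cdr nbrs))))))))
```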
jeff@aiai.ed.ac.uk (Jeff Dalton) (02/15/91)
In article <1991Feb14.223202.11475@elroy.jpl.nasa.gov> gat@forsight.jpl.nasa.gov (Erann Gat) writes:
>CL has actually gone above and beyond the call for being all things to all
>people. However, its philosophy of catering to users rather than implementors
>leads to one extremely annoying side effect. If the thing you want to do
>just happens not to be one the ten zillion things that CL does, then you
>are just screwed. Examples of such things appear in this group regularly -
>things like coroutines or accessing the slot names of a structure.

But this is always a problem. There are things I want to do in Scheme
that I can't do (e.g., define a distinct data type). But the
situation is usually much worse in languages outside the Lisp family.
Common Lisp is already better than most languages in this respect, and
I don't think it's actually much worse than Scheme. Moreover, the
examples that appear in this group regularly seem fairly few in
number.

>What I do not understand is why you can't do both: first start with a set
>of primitives. Define the hell out of them so everyone agrees what they do.
>Then build all the user features on top of the primitives. It seems to me
>that this gives you the best of all possible worlds.

I agree that this is a good approach. But (as I've been saying with
tiresome regularity), Common Lisp can be seen that way to a surprising
extent. It's just that lots of user features are presented,
implemented, etc. as "part of the language".

>Therefore, the answer to the original question is that I would rather
>have call/cc than coroutines. Tomorrow I might want to build engines
>(a really cool construct that somehow got lost in the shuffle). If I
>have call/cc I can do it myself.

You can? That's not what happens in Kent Dybvig's book. He needs an
additional primitive, timer interrupts. However, I agree that it's
important to have powerful primitives on which other abstractions can
be built.
This is one reason I don't quite agree with the user/implementor distinction. In Lisp, users often _are_ implementors. -- jd
sfk@otter.hpl.hp.com (Steve Knight) (02/16/91)
> If I don't have call/cc then I have
> no recourse other than to go begging to LISP manufacturers or write my own
> compiler.

As I mentioned in a previous posting, call/cc and various other
coroutine models (Poplog processes are just one example) are
equivalent and can be used to implement each other. The difference
lies in the implementation technology. (Of course, one may still
prefer call/cc over other multi-threading models.) So, provided
language designers have anticipated the need for multi-threading in
some shape or form, there is a good chance the begging bowl can stay
at home. It's a bit of a shame CL provides no such facilities -- but
it gives those of us who find CL the least acceptable Lisp around a
really big stick to beat up our managers with :-)

Steve

PS. My own personal gripes with CL are:
[1] the type system is far too loose;
[2] user-defined types aren't guaranteed to be distinct from previous
    types;
[3] equality doesn't work the way it does in other Lisps and is a
    nasty source of problems (e.g. portability);
[4] there's no multi-threading;
[5] I don't like keyword parameters/optional parameters etc. and
    dislike paying a cost for a feature I don't use;
[6] empty list and false still aren't distinct;
[7] the lack of temporary (transparent) hash-tables (c.f. Poplog CL,
    T);
[8] the lack of destroy actions (c.f. Poplog CL);
[9] the poor choice of initial syntax features (defun, do);
[10] the dual name spaces for functions and values (bletch!) (use type
     hints to avoid the usual complaints of efficiency (see Pop11));
[11] type assertions are unchecked and are effectively promises rather
     than hints (I'd like both, thanks);
[12] the lack of updaters (see Pop11) for an update model that can
     support abstraction (SETF is a compile-time thing).
On the other hand, CL gets my thumbs up for:
{1} putting lexical binding on the map once and for all;
{2} its excellent number model;
{3} the size of its library, which I suppose could be better thought
    out (implementations are all too often sloppy in delivering this;
    see Poplog CL for an implementation that avoids dragging in every
    bit of code -- even the different cases of FORMAT are separately
    loaded);
{4} (mostly) successfully uniting the Lisp community behind a single
    Lisp.
matthias@leto.rice.edu (Matthias Felleisen) (02/17/91)
Here is a semi-formal treatment of the puzzle that was posed to
comp.lang.lisp by Steve Knight.

GIVEN: An Indiana-style definition of coroutines in Scheme, using
call/cc.

(extend-syntax (COROUTINE RESUME)
  [(COROUTINE x body)
   (letrec ([lcs (lambda (x) body)]
            [RESUME (lambda (dest val)
                      (call/cc (sigma (lcs) (dest val))))])
     (lambda (x) (lcs x)))])

LOOKING FOR: A definition of call/cc in terms of the COROUTINE syntax.

SOLUTION:

(define callcc
  (lambda (f)
    (letrec ([c (COROUTINE x (RESUME f c))])
      (c 13))))

If you are not interested in a semi-formal proof that callcc is really
related to call/cc, skip the rest. The proof informally uses the
extended lambda calculus for Scheme that several co-workers and myself
have developed over the last few years.

-- Matthias Felleisen

P.S. A more readable version is contained in public/puzzle.dvi @
titan.rice.edu, which you may obtain via anonymous ftp.

-------------------------- cut here -----------------------------------

PROOF: The following is a proof in (a variant of) the Felleisen-Hieb(*)
calculus of Scheme.

[1] Replace the use of the extended syntax by its definition.

(define callcc
  (lambda (f)
    (letrec ([c (letrec ([lcs (lambda (x) (RESUME f c))]
                         [RESUME (lambda (dest val)
                                   (call/cc (sigma (lcs) (dest val))))])
                  (lambda (x) (lcs x)))])
      (c 13))))

[2] Flatten the letrec definitions. (Expand the non-rho use of letrec
to see that this is correct.)

(define callcc
  (lambda (f)
    (letrec ([c (lambda (x) (lcs x))]
             [lcs (lambda (x) (RESUME f c))]
             [RESUME (lambda (dest val)
                       (call/cc (sigma (lcs) (dest val))))])
      (c 13))))

[3] Dereference c and use a beta-value step to discharge the resulting
beta-value redex.

(define callcc
  (lambda (f)
    (letrec ([c (lambda (x) (lcs x))]
             [lcs (lambda (x) (RESUME f c))]
             [RESUME (lambda (dest val)
                       (call/cc (sigma (lcs) (dest val))))])
      (lcs 13))))

[4] Dereference lcs and use a beta-value step to discharge the
resulting beta-value redex. 13 disappears: lucky us :-).
(define callcc
  (lambda (f)
    (letrec ([c (lambda (x) (lcs x))]
             [lcs (lambda (x) (RESUME f c))]
             [RESUME (lambda (dest val)
                       (call/cc (sigma (lcs) (dest val))))])
      (RESUME f c))))

[5] Dereference RESUME and use a multi-beta-value step to discharge
the resulting beta-value redex.

(define callcc
  (lambda (f)
    (letrec ([c (lambda (x) (lcs x))]
             [lcs (lambda (x) (RESUME f c))]
             [RESUME (lambda (dest val)
                       (call/cc (sigma (lcs) (dest val))))])
      (call/cc (sigma (lcs) (f c))))))

[6] Expand call/cc such that it takes a lambda-argument: this is one
form of continuation-eta rule.

(define callcc
  (lambda (f)
    (letrec ([c (lambda (x) (lcs x))]
             [lcs (lambda (x) (RESUME f c))]
             [RESUME (lambda (dest val)
                       (call/cc (sigma (lcs) (dest val))))])
      (call/cc (lambda (k) ((sigma (lcs) (f c)) k))))))

[7] Swap letrec and (call/cc (lambda (k) ***)).

(define callcc
  (lambda (f)
    (call/cc
      (lambda (k)
        (letrec ([c (lambda (x) (lcs x))]
                 [lcs (lambda (x) (RESUME f c))]
                 [RESUME (lambda (dest val)
                           (call/cc (sigma (lcs) (dest val))))])
          ((sigma (lcs) (f c)) k))))))

[8] Perform the sigma-assignment.

(define callcc
  (lambda (f)
    (call/cc
      (lambda (k)
        (letrec ([c (lambda (x) (lcs x))]
                 [lcs k]
                 [RESUME (lambda (dest val)
                           (call/cc (sigma (lcs) (dest val))))])
          (f c))))))

[9] Garbage collect RESUME.

(define callcc
  (lambda (f)
    (call/cc
      (lambda (k)
        (letrec ([c (lambda (x) (lcs x))]
                 [lcs k])
          (f c))))))

[10] Dereference c and garbage collect. (Since f is non-assignable it
is in an evaluation context.)

(define callcc
  (lambda (f)
    (call/cc
      (lambda (k)
        (letrec ([lcs k])
          (f (lambda (x) (lcs x))))))))

[11] Dereference lcs via "fancy garbage collection". (It is no longer
assignable, so we can substitute its uses via (potentially recursive)
values.)

(define callcc
  (lambda (f)
    (call/cc
      (lambda (k)
        (f (lambda (x) (k x)))))))

[12] "Expand" call/cc lift in empty context.
    (define callcc
      (lambda (f)
        (call/cc f)))

And this is as close as we can get, since call/cc is an assignable identifier in the global environment; i.e., replacing the right-hand side with call/cc itself would not be correct. In summary, this derivation shows that callcc will always have the same value as call/cc in the global environment.

(*) Felleisen, M. and R. Hieb. The revised report on the syntactic theories of sequential control and state. Technical Report 100, Rice University, June 1989.
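The distinction this thread keeps circling — downward-only escapes versus full call/cc — can be illustrated outside Scheme. Here is a hedged sketch in Python (the names `call_ec` and `_EscapeBase` are invented for illustration) of the downward-only subset, which is roughly what CATCH/THROW or BLOCK/RETURN-FROM already give Common Lisp. Full call/cc, the thing the derivation above produces, additionally lets the continuation be invoked after `f` has returned, and cannot be built this way.

```python
class _EscapeBase(Exception):
    """Carrier for the value thrown to an escape continuation."""
    def __init__(self, value):
        self.value = value

def call_ec(f):
    """call-with-escape-continuation: the downward-only subset of call/cc.
    The continuation k is only valid while f is still on the stack."""
    class Tag(_EscapeBase):      # fresh class per call, so nested
        pass                     # escapes don't catch each other's throws
    def k(value):
        raise Tag(value)
    try:
        return f(k)
    except Tag as thrown:
        return thrown.value
```

For example, `call_ec(lambda k: 1 + k(42))` returns 42; the pending addition is simply discarded, and there is no way to resume it later — which is exactly the limitation an upward continuation removes.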
gat@forsight.jpl.nasa.gov (Erann Gat) (02/18/91)
In article <4140@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:

>>Tomorrow I might want to build engines (a really cool construct that
>>somehow got lost in the shuffle). If I have call/cc I can do it myself.
>
>You can? That's not what happens in Kent Dybvig's book. He needs
>an additional primitive, timer interrupts.

Granted, you need timer interrupts to do it properly, but you can fake it to a pretty high degree of fidelity using call/cc alone.

I think it's worth relating a theory once told to me by Jim Firby when everyone was griping about a graphics package he had written. He said that the more people complain about a piece of development software, the better it is, because if it were really bad people just wouldn't use it, and so there would be no complaints. I have been a very vocal critic of CL, but that's only because I use it a lot, and the fact that it's as good as it is makes me wish even more that it could be just that much better.

That said, CL really needs call/cc, and engines, and stypes, and... :-)

E.
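The "fake it" point can be made concrete in another language. Below is a hedged Python sketch (the names `make_engine` and `worker` are invented here) of an engine built on generators: instead of timer interrupts, each yield point counts as one tick of "fuel," which is the same trick as redefining lambda and set! to decrement a counter.

```python
def make_engine(gen):
    """Wrap a generator as a crude 'engine': run it for a bounded number
    of yield-steps ('fuel'), the counted steps standing in for timer
    interrupts. Returns ('done', value) or ('expired', resumable engine)."""
    def engine(fuel):
        try:
            for _ in range(fuel):
                next(gen)               # one yield == one tick of fuel
        except StopIteration as done:
            return ('done', done.value)
        return ('expired', engine)      # same closure resumes where it left off
    return engine

def worker():
    total = 0
    for i in range(5):
        total += i
        yield                           # a preemption point
    return total

e = make_engine(worker())
status, rest = e(3)                     # fuel runs out: ('expired', <engine>)
```

Calling `rest(10)` then resumes the computation where it stopped and yields `('done', 10)` once the worker finishes. The fidelity limit is exactly the one Dybvig's timer primitive addresses: code between yield points runs uninterrupted.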
matthias@leto.rice.edu (Matthias Felleisen) (02/18/91)
Let me try to reformulate Erann's plug for call/cc.

During the design of a programming language, it is impossible to anticipate all needs for functions. Therefore we build in LAMBDA so that programmers can define their own functions. It is also impossible to foresee the kinds of data programmers will have to deal with. Languages therefore include tools for structuring data in some form or another. And, in the same vein, it is just as impossible to project the sort of control abstractions a programmer would like to have around. The most natural solution is the inclusion of a powerful control construct such that programmers can build a set of control abstractions for specific problems with a set of macros.

With call/cc and first-class continuations (for example), it is easy to implement

<> generators,
<> Prolog-style backtracking,
<> non-blind backtracking,
<> all kinds of coroutine models,
<> system trap doors (see the SML/NJ compiler),
<> engines (with a timer or by redefining lambda and set!),
<> operating systems (in conjunction with engines),

and a few more. And that on top of all the control constructs in Common Lisp.

The disadvantage of building a specific high-level control construct into a language is two-fold. First, and again, there are too many. Every recent language from Icon via ML to Prolog has its own (or several). Clearly, all of them were thought to be useful; otherwise they wouldn't be in these languages. Second, a specific control construct comes with a specific programming paradigm, and when thinking about something else it is difficult to think of using it in non-standard ways. Yes, FIRST-CLASS coroutines can be used to define call/cc and therefore other control constructs. BUT, who would think of doing so? Schemers, on the other hand, are trained to think of call/cc as the tool to implement the control construct that is best for a given problem and their way of thinking about it.
Restricting the programmer's tool for thinking about problems is wrong: life is too short to waste hours and days just because some committee wanted to save some cycles and some pain for implementors. I'd love to see the CL committee correct this mistake by the time CLtL3 comes around. Perhaps we could get call/cc PLUS a set of high-level control constructs, just like we got lambda PLUS a huge library of functions, def-macro PLUS a number of standard macros, cons and vectors and types PLUS CLOS, etc. Guy, any chances?

-- Matthias Felleisen
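The coroutine-model item on Felleisen's list is exactly what Gat's ten-line T scheduler (quoted at the top of the thread) implements. As a hedged sketch, here is a Python transliteration (function names invented here) in which `yield` stands in for the call/cc that re-enters the queue:

```python
from collections import deque

def scheduler(*tasks):
    """Round-robin interleaving of generator-based coroutines, mirroring
    the T scheduler from Gat's post."""
    queue = deque(tasks)
    results = []
    while queue:
        task = queue.popleft()
        try:
            next(task)                  # run the task up to its next yield
            queue.append(task)          # still alive: back of the line
        except StopIteration as done:
            results.append(done.value)  # finished: record its return value
    return results

trace = []

def count(name, n):
    for i in range(n):
        trace.append((name, i))
        yield                           # hand control back to the scheduler
    return name

finished = scheduler(count('a', 2), count('b', 2))
```

The two tasks interleave strictly, one step each per turn. The difference from the T original is Felleisen's point: Python's generators are a single baked-in control construct, whereas call/cc lets the programmer build this scheduler, and every variation of it, as a library.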
jeff@aiai.ed.ac.uk (Jeff Dalton) (02/18/91)
In article <1350036@otter.hpl.hp.com> sfk@otter.hpl.hp.com (Steve Knight) writes:

> It's a bit of a shame CL provides no such facilities -- but it gives
> those of us who find CL the least acceptable Lisp around a really big
> stick to beat up our managers with :-)
> PS. My own personal gripes with CL are

CL is not without its problems, but it's here now and it's usable. Scheme is better for some things, not for others (though it is being improved). The idea that CL is especially bad as Lisps go seems to me to be wrong. It's better in many ways than quite a few other Lisps. Really. In any case, most of your gripes are about things that don't particularly bother me, and here's why.

>[1] the type system is far too loose

In what way? So general a complaint tells me almost nothing. BTW, my experience has been that when two people make the same complaint about CL they often mean different things by it (especially if they strongly dislike CL).

>[2] user-defined types aren't guaranteed to be distinct from
>previous types

They aren't? (Of course, it's trivially true that they aren't, because they will be a subtype of T.) Some implementations have made defstructs subtypes of vector. Maybe that was a bad thing. In any case, X3J13 decided to make array and defstruct types disjoint.

>[3] equality doesn't work the way it does in other lisps & is
>a nasty source of problems (e.g. portability)

What? Perhaps you mean not the same as in _some_ other Lisps. In any case, I don't know what problems of equality you have in mind. If you mean EQ on numbers, that's a problem in every Lisp I know anything about. The big difference in CL is the addition of EQL.

>[4] there's no multi-threading

True.

>[5] I don't like keyword parameters/optional parameters etc and dislike
>paying a cost for a feature I don't use

What cost? Especially if you don't use them. There's some cost in implementation size, but for most calls to built-in functions that use keywords the keywords can be compiled away.
BTW, I partially disagree with ram+@cs.cmu.edu (Rob MacLachlan), who wrote:

   Well, I agree that optionals are mostly a loss, but you shouldn't
   be paying any penalty other than compiler complexity. I am
   convinced keyword args are a big win for complex interfaces.
   Positional arguments start to lose it above 5 or 6 arguments.

One way to handle the many-argument problem would be to make keywords part of the programming environment (e.g., the editor would let you specify parameters that way) rather than having them exist at run-time. All parameters could then be specified by keyword rather than positionally. However, there would be difficulties with APPLY (either there are keywords in the list of arguments or else we're back to positional), and we'd be moving towards a situation where source code wasn't readable without the aid of some clever environment. So in this (as in many issues), I think CL has made a reasonable compromise.

>[6] empty list and false still aren't distinct

True, but not as much a problem in practice as theory might indicate.

>[7] the lack of temporary (transparent) hash-tables (c.f. Poplog CL, T)

I agree, but there wasn't enough agreement on what the user-level facilities should be.

>[8] the lack of destroy actions (c.f. Poplog CL)

I think it's reasonable not to have them, although they might be useful from time to time. The model in Lisp is often that objects exist forever, so they'd never be destroyed. This model is probably too simple, but then Lisp (even Common Lisp) is basically a simple language.

>[9] the poor choice of initial syntax features (defun, do)

This is largely a matter of taste. DEFUN and DO are in CL because they were in MacLisp and because there wasn't a strong enough reason to get rid of them. Many Lisps have something like DEFUN (e.g., DE) which I don't think is a big improvement. Some Lisps have only things that are much worse than DEFUN.

>[10] the dual name spaces for functions and values (bletch!)
>(use type hints to avoid the usual complaints of efficiency
>(see Pop11))

One thing that I've often found strange is that some people seem to think it's more important to have a single name space than to have lexical scoping. I would have thought the opposite was true.

Dual name spaces is not an efficiency issue as far as calling is concerned, if you're willing to use up a little space and have assignment to globals be a bit slower. There are other reasons to put functions in a separate name space. I think that, on balance, it's better to have one name space for both functions and variables, but there is nonetheless something to be said for the other side. But having to put in a "hint" that something was a function would be a pain.

>[11] type assertions are unchecked & are effectively promises
>rather than hints (I'd like both, thanks)

I agree that CL declarations are a problem, because in practice one may want them for two different purposes (to check and to avoid checking). Some implementations (e.g., CMU CL) seem to handle this well. But how is a "hint", if it does anything, different from a "promise"? (It's certainly not obvious just from the name "hint", which is all I know about them.)

>[12] the lack of updaters (see Pop11) for an update model that
>can support abstraction (SETF is a compile-time thing).

I agree that something like this is a good idea. (See previous messages.)

-- jeff
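For readers unfamiliar with Pop11's updaters, the idea in point [12] is that an abstract accessor carries its own update method, resolved at run time, rather than being rewritten by a compile-time SETF-style macro expansion. A rough, hedged analogue using Python properties (the class and names are invented for illustration):

```python
class Point:
    """The accessor `x` looks like plain data to callers, but reads and
    writes both dispatch through methods at run time -- the accessor
    carries its own 'updater', with no compile-time expansion needed."""
    def __init__(self, x):
        self._x = x

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):          # the run-time updater for the accessor
        self._x = value

p = Point(1)
p.x = 5                          # the update goes through the setter
```

Because the update path is looked up at run time, the accessor can later be reimplemented (e.g., computing `x` from other slots) without touching any call site, which is the abstraction SETF's compile-time model makes awkward.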
alms@cambridge.apple.com (Andrew L. M. Shalit) (02/20/91)
In article <4154@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
> BTW, I partially disagree with ram+@cs.cmu.edu (Rob MacLachlan), who
> wrote:
>
>    Well, I agree that optionals are mostly a loss, but you shouldn't
>    be paying any penalty other than compiler complexity. I am
>    convinced keyword args are a big win for complex interfaces.
>    Positional arguments start to lose it above 5 or 6 arguments.
>
> One way to handle the many argument problem would be to make
> keywords part of the programming environment (eg, the editor
> would let you specify parameters that way) rather than having
> them exist at run-time. All parameters could then be specified
> by keyword rather than positionally. However, there would be
> difficulties with APPLY (either there are keywords in the list
> of arguments or else we're back to positional) and we'd be moving
> towards a situation where source code wasn't readable without
> the aid of some clever environment. So in this (as in many issues),
> I think CL has made a reasonable compromise.
In addition to the problems you mention, this would introduce
order-of-evaluation problems.
Common Lisp specifies left-to-right evaluation, so the following
two calls are different:
(frob :a foo :b (incf foo))
(frob :b (incf foo) :a foo)
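The same left-to-right effect appears in other languages that evaluate keyword arguments in written order. A hedged Python transliteration of the two calls above (`frob` and `incf` are stand-ins for the hypothetical CL code):

```python
foo = 0

def incf():
    """Python stand-in for CL's (incf foo)."""
    global foo
    foo += 1
    return foo

def frob(a, b):
    return (a, b)

# Arguments are evaluated left to right in written order, as in CL:
foo = 0
r1 = frob(a=foo, b=incf())   # foo is read before the increment -> (0, 1)
foo = 0
r2 = frob(b=incf(), a=foo)   # the increment happens first -> (1, 1)
```

Moving keyword binding into the editor, as proposed, would have to pick one canonical argument order, silently changing the meaning of calls like these.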
-andrew
--