acha@CS.CMU.EDU (Anurag Acharya) (03/22/91)
I am curious about the kind of applications that lazy functional languages
are suitable for.  What can you do with such languages (in a natural and
elegant fashion) that you cannot similarly do with a strict language
augmented by a lazy type constructor?  For the sake of concreteness, what
can be done in these languages that cannot be done naturally and elegantly
in SML augmented by a "lazytype" construct that allows declarations like
the following:

lazytype 'a stream = LAZY_CONS of 'a * 'a stream | NULL

with appropriate lazy semantics.

anurag
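[The proposed construct amounts to a cons cell whose tail is a memoized
thunk, which can be hand-simulated even in C.  A minimal sketch -- all
names here (`cons_lazy', `force_tail', `nth') are invented for
illustration, and the thunk environment is simplified to a single int:]

```c
#include <stdlib.h>

/* A hand-rolled analogue of the proposed `lazytype 'a stream`: the
   tail of a cons cell is a delayed, memoized computation, so the
   stream is only unrolled as far as it is demanded. */
typedef struct stream stream;
typedef stream *(*tail_fn)(int from);

struct stream {
    int head;
    int forced;        /* has the tail been computed yet? */
    int from;          /* environment captured for the thunk */
    tail_fn make_tail; /* the delayed computation */
    stream *tail;      /* memoized result */
};

stream *cons_lazy(int head, int from, tail_fn f) {
    stream *s = malloc(sizeof *s);
    s->head = head;
    s->forced = 0;
    s->from = from;
    s->make_tail = f;
    s->tail = NULL;
    return s;
}

/* Compute the tail on first demand, then cache it. */
stream *force_tail(stream *s) {
    if (!s->forced) {
        s->tail = s->make_tail(s->from);
        s->forced = 1;
    }
    return s->tail;
}

/* An infinite stream of naturals -- impossible with a strict cons. */
stream *nats_from(int n) {
    return cons_lazy(n, n + 1, nats_from);
}

/* Walk n tails and return the head found there. */
int nth(stream *s, int n) {
    while (n-- > 0)
        s = force_tail(s);
    return s->head;
}
```

[Only the cells actually demanded are ever built; `nth(nats_from(0), 10)`
allocates eleven cells and returns 10.]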
jgk@osc.COM (Joe Keane) (03/23/91)
In article <ACHA.91Mar21112142@DRAVIDO.CS.CMU.EDU> acha@CS.CMU.EDU (Anurag Acharya) writes:
>I am curious about the kind of applications that lazy functional languages
>are suitable for. What can you do with such languages (in a natural and
>elegant fashion) that you cannot similarly do with a strict language augmented
>by a lazy type constructor ?

Nothing.  Besides the obvious argument of Turing equivalence, given any
language i can think of you can show how to implement lazy closures.  In C
you can use structs and function pointers.  In Scheme you can implement
`delay' and `force' as lambda expressions.

>For the sake of concreteness, what
>can be done in these languages that cannot be done naturally and elegantly in
>SML augmented by a "lazytype" construct that allows declarations like the
>following:
>
>lazytype 'a stream = LAZY_CONS of 'a * 'a stream | NULL
>
>with appropriate lazy semantics.

Again, you can do anything you could do in a lazy language.  If you know
when you want something to be lazy, you can use one of these lazy streams
and it works fine.  But since you said ``naturally and elegantly'', that
gives me some more space...

First of all, laziness is not an `extra'.  It exists in every language; it
just may not be apparent from the terminology.  Laziness is necessary to
protect yourself from code being executed when you don't want it to be.
It's just that in most languages lazy functions are kept separate from
normal functions.

For example, the C operator `&&' is defined to be lazy in its second
argument.  If it suddenly became strict, then most of the C programs in
the world would stop working.  In fact, the if-then-else construct can be
considered a lazy function.  Clearly this can't be made strict either.

In Lisp, there are a lot of laziness rules.  Normal functions always
evaluate their arguments, macros never evaluate their arguments, and
special forms do whatever they feel like.  Unlike C, this is made somewhat
more transparent by the fact that the same syntax is used for each.

Now, to the point: strictness is only a matter of implementation.  In
terms of correctness, making a strict function lazy never hurts.  On the
other hand, making a function strict can easily break something.  This
says that, conceptually at least, we only need lazy functions.  Lambda
calculus and combinators both work this way; they have no strict
functions.

So it makes a lot of sense to have a language where every function is
lazy.  This eliminates an artificial distinction, and allows you to get
the benefits of laziness everywhere, not just where someone thought it
might be useful.

So why don't people do this?  The answer is always the same: ``It's too
slow.''  Some people claim this is an inherent problem with lazy
languages.  I strongly disagree with this position.  Naive implementations
do a lot of unnecessary testing and jumping around, and i'm afraid that
this has given lazy languages a bad reputation for inefficiency.

With the proper techniques, a lazy language can be faster than a
traditional one.  I don't feel like explaining this in depth, since you
could easily write a book on it.  Also, it's better to just do something,
instead of describing how you think it could be done.  So for now, i'll
just leave this as an unsubstantiated claim.

--
Joe Keane, supercombinator hacker
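[Keane's `&&' example can be made concrete: an ordinary C function with
the same short-circuit behaviour must take its second argument as a thunk
-- a function pointer plus an environment -- because a normal call would
evaluate the argument before the function ever ran.  A sketch, with all
names (`and_then', `force', `div_by') invented for illustration:]

```c
/* A thunk: a delayed int-valued computation plus its environment. */
typedef int (*thunk_fn)(void *env);

struct thunk {
    thunk_fn fn;
    void *env;
};

/* Run the delayed computation. */
int force(struct thunk t) {
    return t.fn(t.env);
}

/* and_then: strict in `first', lazy in `rest' -- the hand-written
   analogue of C's built-in `&&'. */
int and_then(int first, struct thunk rest) {
    return first ? force(rest) : 0;
}

/* A delayed test that is only safe to run when the divisor is
   nonzero -- exactly the situation `&&' protects against. */
int div_by(void *env) {
    int *d = env;
    return (100 / *d) != 0;   /* would trap if *d == 0 */
}
```

[`and_then(0, divide_by_zero_thunk)` returns 0 without ever touching the
divisor, just as `d != 0 && 100 / d != 0` would.]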
cs450a03@uc780.umd.edu (03/23/91)
Joe Keane writes:
>First of all, laziness is not an `extra'.  It exists in every language, it
>just may not be apparent from the terminology.  ...

An interesting point, and a good one.

[but later]

>With the proper techniques, a lazy language can be faster than a
>traditional one.

Er.. by lazy do you mean a totally lazy language that never evaluates
anything?  Those are pretty fast, but um.... not incredibly useful.

>I don't feel like explaining this in depth, since you could easily
>write a book on it.

Now THAT's what I call lazy!  ;-)

>Also, it's better to just do something, instead of describing how
>you think it could be done.

But that's not very lazy ?!

>So for now, i'll just leave this as an unsubstantiated claim.

Get some sleep, Joe.

--
Raul Rockwell
spot@CS.CMU.EDU (Scott Draves) (03/24/91)
In article <4682@osc.COM> jgk@osc.COM (Joe Keane) writes:
>In article <ACHA.91Mar21112142@DRAVIDO.CS.CMU.EDU> acha@CS.CMU.EDU
>(Anurag Acharya) writes:
>>I am curious about the kind of applications that lazy functional languages
>>are suitable for. What can you do with such languages (in a natural and
>>elegant fashion) that you cannot similarly do with a strict language
>>augmented by a lazy type constructor ?
>
>Nothing.  Besides the obvious argument of Turing equivalence,

this isn't at all what the question was.

>given any language i can think of you can show how to implement lazy
>closures.  In C you can use structs and function pointers.

you can store away a function that already existed at compile time, but
you can't create new functions (unless you count forking off cc and
dynamic loading :).  Please show us how, if you think it can be done.

>In Scheme you can implement `delay' and `force' as lambda expressions.

Not really.  They have to be special forms.  Or macros.  In either case,
they are not functions.

>For example, the C operator `&&' is defined to be lazy in its second
>argument.
<< more of the same, ie non-strict = lazy >>

strict/non-strict is a different issue, although they are related.  At
best, I think you are stretching the definition of lazy much too far.
non-strict can be implemented with lazy, but lazy implies far more than
non-strict.

>In terms of correctness, making a strict function lazy never hurts.

not true, side effects.

>So it makes a lot of sense to have a language where every function is
>lazy.

he he

>This eliminates an artificial distinction, and allows you to get the
>benefits of laziness everywhere, not just where someone thought it might
>be useful.  So why don't people do this?  The answer is always the same:
>``It's too slow.''

damn right

>Some people claim this is an inherent problem with lazy languages.  I
>strongly disagree with this position.  Naive implementations do a lot of
>unnecessary testing and jumping around, and i'm afraid that this has
>given lazy languages a bad reputation for inefficiency.

laziness has significant cost.  laziness is occasionally useful.  It only
really pays off when you don't evaluate an argument, but this doesn't
happen very often.  So you rarely benefit from it.  But you always pay
for it.  It's fine in an interpreter.  It's a big lose in a compiler.
not to mention that it plays hell with side effects.

>With the proper techniques, a lazy language can be faster than a
>traditional one.  So for now, i'll just leave this as an unsubstantiated
>claim.

no kidding

--
christianity is stupid          Scott Draves
communism is good               spot@cs.cmu.edu
give up
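[Draves's "not true, side effects" objection can be shown in a few lines:
when an argument has a side effect, a strict call performs it exactly
once, up front, while a lazy call that never demands the value skips it
entirely -- an observable difference.  A sketch, with hypothetical names:]

```c
/* A thunk: a delayed int-valued computation with no environment. */
typedef int (*thunk)(void);

int counter = 0;

/* An argument expression with a side effect. */
int bump(void) {
    counter++;
    return 42;
}

/* Strict: the argument is evaluated at the call site, so the
   effect always happens, whether or not y is used. */
int first_strict(int x, int y) {
    (void)y;
    return x;
}

/* "Lazy": takes a thunk and never forces it, so the effect
   never happens.  Same result, different behaviour. */
int first_lazy(int x, thunk y) {
    (void)y;
    return x;
}
```

[Making `first_strict' lazy here changes what the program does, which is
exactly why "making a strict function lazy never hurts" only holds for
pure functions.]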
jpiitulainen@cc.helsinki.fi (03/24/91)
In article <SPOT.91Mar23115151@WOOZLE.GRAPHICS.CS.CMU.EDU>,
spot@CS.CMU.EDU (Scott Draves) writes:
> In article <4682@osc.COM> jgk@osc.COM (Joe Keane) writes:
> > In Scheme you can implement `delay' and `force' as lambda
> > expressions.
> Not really.  They have to be special forms.  Or macros.  In either
> case, they are not functions.

This is true for delay.  force is a procedure.
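[The asymmetry holds even in C: `force' can be an ordinary function,
because it receives an already-built promise, but `delay' cannot be,
because an ordinary call would evaluate its argument first.  A sketch
using C99 (`DELAY' here only delays expressions already packaged as
functions; a Scheme `delay' wraps an arbitrary expression, which is
exactly why it must be a special form or macro):]

```c
/* A promise: a delayed computation with a memoized result. */
typedef struct promise {
    int (*fn)(void);
    int forced;
    int value;
} promise;

/* force IS an ordinary function: run once, cache, return. */
int force(promise *p) {
    if (!p->forced) {
        p->value = p->fn();
        p->forced = 1;
    }
    return p->value;
}

/* delay CANNOT be an ordinary function: `delay(f())' would call f
   before delay ever ran.  It has to be a macro that builds the
   promise without evaluating anything. */
#define DELAY(f) ((promise){ (f), 0, 0 })

int calls = 0;

/* Stands in for an expensive computation we may never need. */
int expensive(void) {
    calls++;
    return 7;
}
```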
db@cs.ed.ac.uk (Dave Berry) (03/29/91)
In article <4682@osc.COM> jgk@osc.COM (Joe Keane) writes:
>conceptually at least, we only need lazy functions.  Lambda calculus and
>combinators both work this way; they have no strict functions.

There is no intrinsic order of evaluation in the lambda calculus.

--
Dave Berry, LFCS, Edinburgh Uni.      db%lfcs.ed.ac.uk@nsfnet-relay.ac.uk
"So they gave him a general anaesthetic and cleaned him with Swarfega."
fs@caesar.cs.tulane.edu (Frank Silbermann) (04/01/91)
In article <4682@osc.COM> jgk@osc.COM (Joe Keane) writes:
>> conceptually at least, we only need lazy functions.
>> Lambda calculus and combinators both work this way;
>> they have no strict functions.

In article <8413@skye.cs.ed.ac.uk> db@lfcs.ed.ac.uk (Dave Berry) writes:
>There is no intrinsic order of evaluation in the lambda calculus.

In discussing a language (e.g. lambda calculus), I think we must not
confuse the absence of an evaluation-order specification with the belief
that all evaluation orders are equally valid.  A strict language would be
one in which strict evaluation is no less effective than any other order.
With this interpretation, lambda calculus would have to be viewed as
nonstrict.

By the way, does the _formal_ theory of typed lambda calculus accept
constructor-based primitive functions defined via rewrite rules?  For
example, in the lambda notation of denotational semantics, one can _force_
any primitive function `f: A->B' to be strict simply by defining
f(BOT_A) = BOT_B.  One can even use this method to define a strict version
of function application.

--
Frank Silbermann	fs@cs.tulane.edu
Tulane University	New Orleans, Louisiana  USA
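[Silbermann's device is the standard one from denotational semantics;
written out, with `strictapply' a name coined here for the strict
application operator:]

```latex
% f : A \to B is strict iff it maps bottom to bottom:
f(\bot_A) \;=\; \bot_B
% and a strict application operator can be defined pointwise:
\mathrm{strictapply}(f,\,x) \;=\;
  \begin{cases}
    \bot_B & \text{if } x = \bot_A \\
    f(x)   & \text{otherwise}
  \end{cases}
```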
jgk@osc.COM (Joe Keane) (04/03/91)
In article <6853@rex.cs.tulane.edu> fs@caesar.cs.tulane.edu (Frank
Silbermann) writes:
>By the way, does the _formal_ theory of typed lambda calculus accept
>constructor-based primitive functions defined via rewrite-rules?
>For example, in the lambda-notation of denotational semantics,
>one can _force_ any primitive function `f: A->B' to be strict
>simply by defining f(BOT_A) = BOT_B.

It's not that simple.  In general you can't define f(BOT_A).  In a fully
lazy system, its value is determined by the values at other points.  If
you don't want to be fully lazy, which is what you're asking for, you can
make it less determined.  You can never make it more determined.  Really
this is a semantic argument, rather than something provided in your type
system.

I've posted about this subject before, so i won't repeat what i said
then.  Bottom is the least determined value, and it must be consistent
with other values which are more determined.  Basically, i'd say you
shouldn't think of bottom as `fails to terminate', but rather as `status
unknown'.  In fact you can detect some cases of failure to terminate, and
give them new values in your type system.  Then if you know that some
expression fails to terminate, it's not bottom.  Of course you can never
hope to catch all cases.

>One can even use this method
>to define a strict version of function application.

I would say that strictness is a property of specific primitive
combinators, rather than something you can specify in your type system.
You can use strict primitive combinators to force evaluation of some
expression, even if you don't use its value.

Suppose we define a combinator with HOLD x y = x, but we say that it's
strict in y.  In other words, it computes x and y in some order or in
parallel, and when both values are available it returns the first.  There
are many ways to simulate this, depending on what primitives you have
available.  For example, if x and y are numbers you could define it as
HOLD x y = 0*x+y.
This assumes that the multiplication primitive is strict in both
arguments.  If it's too smart, this won't work.  So let's just assume
HOLD is a primitive.

This combinator makes a lot more sense when you consider it temporally.
It `holds up' the value x until y is ready.  This provides a way to
synchronize two values, which is useful if we want to write them out to
disk together or something like that.  The idea is that we only want to
write them both out if they're both ready, otherwise we don't want to
write anything.  You can see the similarity to database transactions.

If we extend this to include side-effects (and switch the parameters), we
get normal sequential execution.  This combinator would be exactly the
same as the comma operator in C or the semicolon `operator' in Pascal.
If you want to execute a C program with combinators, this is exactly what
you do.

But then you can change the semantics to be more lazy, by removing the
strictness.  This gives you dead code elimination and common
subexpression elimination, the usual C compiler stuff.  The interesting
thing is that this makes a `new' C where more things terminate.  For
example, the following function returns 0:

int foo()
{
  while (1);
  return 0;
}

--
Joe Keane, supercombinator hacker
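[With thunked arguments, Keane's HOLD is one line of C; the lazy variant
that drops the strictness shows the "dead code elimination" he describes.
A sketch -- `hold', `hold_lazy', and the helpers are names invented here:]

```c
/* A thunk: a delayed int-valued computation. */
typedef int (*thunk)(void);

/* HOLD x y = x, strict in y: y is forced (so its effects happen
   and its value is ready) but the result is thrown away.  This is
   essentially C's comma operator with the arguments swapped. */
int hold(thunk x, thunk y) {
    int xv = x();
    (void)y();      /* strict in y: forced, value discarded */
    return xv;
}

/* Removing the strictness: y is never run.  If y had an effect
   (or failed to terminate), the two combinators differ. */
int hold_lazy(thunk x, thunk y) {
    (void)y;
    return x();
}

int effects = 0;

/* Stands in for a computation with an observable effect. */
int noisy(void) {
    effects++;
    return 99;
}

int one(void) {
    return 1;
}
```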
fs@rex.cs.tulane.edu (Frank Silbermann) (04/08/91)
In article <6853@rex.cs.tulane.edu> fs@caesar.cs.tulane.edu (Frank
Silbermann) writes:
>>By the way, does the _formal_ theory of typed lambda calculus accept
>>constructor-based primitive functions defined via rewrite-rules?
>>For example, in the lambda-notation of denotational semantics,
>>one can _force_ any primitive function `f: A->B' to be strict
>>simply by defining f(BOT_A) = BOT_B.

In article <4705@osc.COM> jgk@osc.COM (Joe Keane) writes:
> It's not that simple.  In general you can't define f(BOT_A).

David Schmidt seems to do this in his book, _Denotational Semantics_
(of course, any such definition must maintain continuity).

> If you don't want to be fully lazy you can make it less determined.
> You can never make it more determined.
> Really this is a semantic argument.
> I would say that strictness is a property
> of specific strict primitive combinators
> that force evaluation of some expression,
> (even if you don't use its value),
> rather than something you can specify in your type system.

Strictness means only that bottom is mapped to bottom, nothing more.
I did not ask whether this could be made part of the _type system_.
I asked whether this can be taken care of within the context of typed
lambda _calculus_.  Certainly there must be more to typed lambda calculus
than type theory alone.

If typed lambda calculus is to be the basis of functional programming,
then its expressions must be executable/simplifiable.  If the lambda
calculus permits primitive combinators (e.g. strict ones), then the
calculus must take into account some sort of rewriting mechanism to
simplify expressions using the primitive combinators.  How does the
_theory_ of typed lambda calculus deal with this issue?

Frank Silbermann	fs@cs.tulane.edu
Tulane University	New Orleans, Louisiana  USA
farrell@cs.uq.oz.au (Friendless) (04/08/91)
In <6922@rex.cs.tulane.edu> fs@rex.cs.tulane.edu (Frank Silbermann) writes:
>How does the _theory_ of typed lambda calculus deal with this issue?

Typed or not, lambda calculus expressions are strict in at most one
argument, which you can identify by calculating the head normal form,
the answer being the head variable.  For typed lambda calculus, there is
always a head normal form, proved by Barendregt in his book.  For more
information on strictness of the pure lambda calculus, see the paper by
Prateek Mishra in Information Processing Letters in 1988.

John