kiran@copper.ucs.indiana.edu (Kiran Wagle) (03/27/91)
quale@saavik.cs.wisc.edu (Douglas E. Quale) writes:

>Of course going too far in this direction might lead to specifying the
>evaluation order of arguments to procedures, and I don't think that would be
>good.
>-- Doug Quale
>quale@picard.cs.wisc.edu

Why not?

--
...kiran  __________kiran@copper.ucs.indiana.edu________(812) 331-1710

From the corrections column in a July Fresno, CA _Bee_: "An item in
Thursday's [issue] about the Massachusetts budget crisis made reference to
new taxes that will help put Massachusetts 'back in the African-American.'
The item should have said 'back in the black.'"
manis@cs.ubc.ca (Vincent Manis) (03/27/91)
In article <kiran.670017354@copper> kiran@copper.ucs.indiana.edu (Kiran
Wagle) writes:
>>Of course going too far in this direction might lead to specifying the
>>evaluation order of arguments to procedures, and I don't think that would be
>>good.
>Why not?

Because it can substantially impair the ability of a compiler to produce
high-quality code. If a compiler is required to honour a particular order of
evaluation, then compiling expressions which require most of the registers
available will almost inevitably result in register spills, which then slow
the code down. If a compiler can choose the order of evaluation, it can
evaluate arguments in whatever order requires the fewest number of
registers.

As an example, suppose that we want to evaluate

  (baz (foo 1) 2 3 (+ 4 (bar 5)))

on a machine which has four available registers. If the language insists we
evaluate from left to right, then about the best we can do is

  load r0,1
  call foo       ;; Assume args are passed in lowest numbered regs.
  load r1,2
  load r2,3
  load r3,4
  store r0,temp  ;; Register spill.
  load r0,5
  call bar
  add r0,4       ;; Very smart compiler notices it can cheat here.
  load r4,r0     ;; Assume result is returned in r0.
  load r0,temp
  call baz

If, on the other hand, the compiler is free to evaluate arguments in any
order, it can generate

  load r0,5
  call bar
  load r3,r0
  load r0,1
  call foo
  load r1,2
  load r2,3
  call baz

Notice that the compiler had special knowledge that + is commutative and
associative (of course, whoever told the compiler that was wrong: Scheme was
designed by people who know something about numerical analysis!). But, in
the absence of information from the programmer, the compiler had no way of
knowing that foo, bar, and baz didn't have side-effects, ill-conditioned
numerical properties, or other factors which would depend upon the order of
evaluation. Of course, Scheme could be extended with declarations to help
the compiler, but the history of such declarations is that they are not used
except in desperation. There is almost nothing to be gained, and a lot to be
lost, by prescribing order of evaluation.

On the subject of what should be returned by set!, define, etc., let me put
in a plug for a void object. PC Scheme has something called
*THE-NON-PRINTING-OBJECT*, and the next edition of Chez Scheme is planned to
have an object returned by (void). These are well-defined objects, except
that they have no uses, other than a procedure which generates one, (void),
and a type predicate (void?). Top levels don't have to print void results,
and set!, define, etc., could return them as values. Note that void in this
case is not the same as in C or ALGOL-68, where the implication is that the
object requires 0 bits to be stored. Scheme void objects would be the same
size as any other object; they would just be useful in indicating that no
useful value is given.

--
\  Vincent Manis <manis@cs.ubc.ca>    "There is no law that vulgarity and
\  Department of Computer Science     literary excellence cannot coexist."
/\ University of British Columbia        -- A. Trevor Hodge
/  \ Vancouver, BC, Canada V6T 1W5 (604) 228-2394
harlan@copper.ucs.indiana.edu (Pete Harlan) (03/27/91)
manis@cs.ubc.ca (Vincent Manis) writes:

>In article <kiran.670017354@copper> kiran@copper.ucs.indiana.edu (Kiran
>Wagle) writes:
>>>Of course going too far in this direction might lead to specifying the
>>>evaluation order of arguments to procedures, and I don't think that would be
>>>good.
>>Why not?
>Because it can substantially impair the ability of a compiler to produce
>high-quality code.
[---demonstration of this omitted---]

Clearly Scheme was designed from the start to be compiled efficiently :-)

But another point: if I'm reading your code and come across
((foo) (a b) (c d)), I know that the three applications are completely
independent (almost---Scheme doesn't allow them to be evaluated in parallel,
presumably because of the nightmare of reasoning about call/cc and set! in
such contexts). If subexpressions of a combination were necessarily
evaluated in some order, then I wouldn't have any way of knowing, and you
wouldn't have any way of expressing (short of a comment), that the
computations didn't *need* to be evaluated in that order.

It is simple enough to force order of evaluation when it is needed, and it
is surprising how rarely that occurs.

--Pete Harlan
harlan@copper.ucs.indiana.edu
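
[To make the last point concrete, here is a minimal sketch of forcing an
order with LET* when one is actually needed. The names (*counter*, next!,
two-in-order) are hypothetical, not taken from the posts above.]

  ;; A stateful generator: returns 1, 2, 3, ... on successive calls.
  (define *counter* 0)
  (define (next!)
    (set! *counter* (+ *counter* 1))
    *counter*)

  ;; (cons (next!) (next!)) could yield (1 . 2) or (2 . 1), depending on the
  ;; implementation's argument evaluation order.  Hoisting the calls with
  ;; LET* pins the order down portably:
  (define (two-in-order)
    (let* ((first  (next!))
           (second (next!)))
      (cons first second)))    ; always (1 . 2) on the first call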
hieb@heston.cs.indiana.edu (Robert Hieb) (03/27/91)
manis@cs.ubc.ca (Vincent Manis) writes:

>Because it can substantially impair the ability of a compiler to produce
>high-quality code. If a compiler is required to honour a particular
>order of evaluation, then compiling expressions which require most of
>the registers available will almost inevitably result in register
>spills, which then slow the code down.

Saving registers around a call to an unknown procedure is not likely to
substantially slow down code, due to the call overhead. In practice, I don't
think an optimizing compiler is going to be significantly handicapped. The
compiler can always reorder arguments it can prove are independent of
evaluation order, and since these are likely to be the simplest arguments
they are likely to gain the most from such reordering.

harlan@copper.ucs.indiana.edu (Pete Harlan) writes:

>If I'm reading your code and come across ((foo) (a b) (c d)), I know
>that the three applications are completely independent

This argument is stronger, but unfortunately also of little use in practice.
All you know is that the programmer PROBABLY THOUGHT that the three
applications are independent. In fact they may not be, and until the program
is tested under all possible evaluation orders, or completely analyzed using
all possible evaluation orders, there is no way to be sure. If arguments can
be evaluated in different orders on different calls (I know the formal
semantics in R*S says otherwise, but that is how the game is usually
played), neither approach to establishing program correctness is practical.

Scheme is hiding its head in the sand with respect to argument evaluation
order and unspecified return values. Arguments do get evaluated in some
order and values are returned. Wishing that it weren't so is of little use
to the producers and consumers of supposedly portable programs.
bobg+@andrew.cmu.edu (Robert Steven Glickstein) (03/27/91)
Vincent Manis has pointed out to me that I am wrong (cf. my earlier post
about the order of evaluation of N operands being the same for a given N).
My misunderstanding arose from the explanation of the semantics of "permute"
and "unpermute" in R3.99, section 7.2 (formal semantics). "Permute" and
"unpermute" *do* have the behavior I described, but it is also noted that
this is only an approximation to the intended semantics of combinations.

______________                  _____________________________
Bob Glickstein                | Internet: bobg@andrew.cmu.edu
Information Technology Center | Bitnet: bobg%andrew@cmuccvma.bitnet
Carnegie Mellon University    | UUCP: ...!harvard!andrew.cmu.edu!bobg
Pittsburgh, PA 15213-3890     | (412) 268-6743
                              | Sinners can repent, but stupid is forever
daniel@mojave.stanford.EDU (Daniel Weise) (03/28/91)
    As an example, suppose that we want to evaluate

      (baz (foo 1) 2 3 (+ 4 (bar 5)))

    on a machine which has four available registers. If the language
    insists we evaluate from left to right, then about the best we can do is

      load r0,1
      call foo       ;; Assume args are passed in lowest numbered regs.
      load r1,2
      load r2,3
      load r3,4
      store r0,temp  ;; Register spill.
      load r0,5
      call bar
      add r0,4       ;; Very smart compiler notices it can cheat here.
      load r4,r0     ;; Assume result is returned in r0.
      load r0,temp
      call baz

    If, on the other hand, the compiler is free to evaluate arguments in
    any order, it can generate

      load r0,5
      call bar
      load r3,r0
      load r0,1
      call foo
      load r1,2
      load r2,3
      call baz

Um, you are restricting your compiler far too much. If left to right order
of evaluation were mandated, the compiler would merely have to give the
illusion of doing things left to right. It doesn't have to do them left to
right if the programmer can't tell. In this case, the loads of constants
could be performed while evaluating the other arguments.

Daniel
jinx@zurich.ai.mit.edu (Guillermo J. Rozas) (03/28/91)
    Saving registers around a call to an unknown procedure is not likely
    to substantially slow down code, due to the call overhead. In
    practice, I don't think an optimizing compiler is going to be
    significantly handicapped. The compiler can always reorder arguments
    it can prove are independent of evaluation order, and since these
    are likely to be the simplest arguments they are likely to gain the
    most from such reordering.

You've made this argument before, and my usual reply is: write such a
theorem prover, and after examining it, and how well it performs, I may
finally agree with you, but will not a priori. I don't believe that your
intuition is any better than mine, but I admit the possibility that you may
be right, and will let you prove it to me. :-)

    Scheme is hiding its head in the sand with respect to argument
    evaluation order and unspecified return values. Arguments do get
    evaluated in some order and values are returned. Wishing that it
    weren't so is of little use to the producers and consumers of
    supposedly portable programs.

I think you are getting upset over a relatively minor problem. C doesn't
specify the order of argument evaluation, and the writing style is very
imperative, and yet this hardly ever affects people porting programs. In
Scheme, where the writing style is much more functional, it is likely to
affect them even less. Therefore I don't buy your argument on portability.

I think that you are trying to prove programs correct and that this gets in
your way, so you claim it is a problem for everyone else. You should not use
unsubstantiated arguments to get me to agree with you. You should convince
me that proving programs correct is important and that this really gets in
your way when doing so.

BTW, no one has mentioned that implementations disagree on the order of
argument evaluation and the value returned by SET!, and this is mostly why
there was no agreement at the first R2RS meeting, where the concern was NOT
porting programs, but reading each other's code. As far as I understand it,
the reports are still NOT concerned with portability, but mutual legibility.
The IEEE standard is concerned with portability, but that's a whole
different story.

FYI, MIT Scheme returns the OLD contents of the location on an assignment,
and the MIT Scheme interpreter evaluates arguments right-to-left. The MIT
Scheme compiler chooses whatever order it finds convenient and makes use of
this freedom quite frequently. This trips someone up about once every 18
months because they naively expect left-to-right to work. Once they
understand what their problem was, they don't get bitten again.
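
[A small editorial sketch, not from the post, of code that leans on the two
unspecified behaviours jinx describes. The swap below "works" in MIT Scheme
because SET! returns the OLD contents of the location; under the report the
value of a SET! expression is unspecified, so portable code must not use it.]

  (define (swap-demo)
    (let ((a 1) (b 2))
      (set! b (set! a b))   ; MIT Scheme: a becomes 2, b gets a's old value 1
      (list a b)))          ; => (2 1) there; elsewhere b gets whatever
                            ;    unspecified value SET! happens to return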
kers@hplb.hpl.hp.com (Chris Dollin) (03/28/91)
Vincent Manis says:

    In article <kiran.670017354@copper> kiran@copper.ucs.indiana.edu (Kiran
    Wagle) writes:
    >>Of course going too far in this direction might lead to specifying the
    >>evaluation order of arguments to procedures, and I don't think that
    >>would be good.
    >Why not?

    Because it can substantially impair the ability of a compiler to
    produce high-quality code. If a compiler is required to honour a
    particular order of evaluation, then compiling expressions which
    require most of the registers available will almost inevitably result
    in register spills, which then slow the code down. If a compiler can
    choose the order of evaluation, it can evaluate arguments in whatever
    order requires the fewest number of registers.

I used to believe this argument; now I'm not so sure. Vincent posted more
details, with an example, but the question is *how often* do such cases
arise, and *how much* efficiency is wasted, if we insist on prescribed
orders of evaluation? Are we talking measurements and results here, or
conviction from a small number of examples?

And how do we measure the time wasted because a programmer was misled by an
implementation into expecting one order of evaluation, and then ported code
to an implementation with a different order? Or the time wasted because,
without a prescribed (and simple) order of evaluation, certain programming
idioms become unavailable? [The argument that such idioms can lead to bad
code is not persuasive; languages cannot prevent the writing of bad code;
discouragement is better done socially rather than at the language level
[he said, having himself designed a language designed to prevent certain
``bad styles''!]].

As you can guess, I'm moving toward the prescribe-order-of-evaluation camp.
Declarations and compiler inferences can deal with whanging the efficiency
up for delivery.

--
Regards, Kers.   | "You're better off not dreaming of the things to come;
Caravan:         | Dreams are always ending far too soon."
freeman@argosy.UUCP (Jay R. Freeman) (03/28/91)
Possibly one reason for not specifying the order of evaluation of arguments
to a procedure is to make one step toward an implementation that can run on
an asynchronous MIMD processor -- the idea is to farm out each argument
evaluation to its own processor and collect the results as they come in.
Such an implementation would quite likely evaluate the arguments in
different orders each time the same code was run.

(I'm *not* saying that that's *all* you have to do to make a Scheme that can
run on an asynchronous MIMD processor -- but "a journey of 1000 MIPS, er,
miles, begins with a single step.")

                                        -- Jay Freeman
          <canonical disclaimer -- I speak only for myself>
jinx@zurich.ai.mit.edu (Guillermo J. Rozas) (03/28/91)
In article <1055@argosy.UUCP> freeman@argosy.UUCP (Jay R. Freeman) writes:
    Possibly one reason for not specifying the order of evaluation of
    arguments to a procedure is to make one step toward an implementation
    that can run on an asynchronous MIMD processor -- the idea is to farm
    out each argument evaluation to its own processor and collect the
    results as they come in. Such an implementation would quite likely
    evaluate the arguments in different orders each time the same code
    was run.

    (I'm *not* saying that that's *all* you have to do to make a Scheme
    that can run on an asynchronous MIMD processor -- but "a journey of
    1000 MIPS, er, miles, begins with a single step.")
Unfortunately this is not allowed. The intent of the current wording
is to allow arbitrary sequential orders, but not interleaved (or
parallel) evaluation. In other words, the arguments are evaluated
sequentially in an unspecified order.
The reason for this decision is that unless a way of synchronizing
stores (and the corresponding reads) is provided, interleaved
evaluation will generate incorrect results for programs for which any
sequential order will give correct results.
The prototypical example is a structure-copying routine. When copying
a pair it may do something like
  (define (hash-copy-pair pair)
    (let ((the-car (hash-copy (car pair)))
          (the-cdr (hash-copy (cdr pair))))
      (hash-cons the-car the-cdr)))
where hash-copy and hash-cons manage a table of associations to
guarantee isomorphism.
Now consider the case of HASH-COPYing a pair with eq? car and cdr.
Any sequential order of execution of the arguments to the LET
procedure will generate the correct answer, but interleaved evaluation
may not since the operation of looking up an object in the table and
generating a copied ID for it should be indivisible, but interleaved
evaluation may break this assumption.
In the presence of a standard test-and-set operation, allowing
parallel evaluation would be reasonable.
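
[To make the hazard concrete, here is a minimal editorial sketch -- not MIT
Scheme's actual code -- of the table discipline hash-copy relies on.
*copy-table*, make-fresh-copy, and the association-list representation are
all hypothetical.]

  (define *copy-table* '())                  ; entries are (original . copy)

  (define (make-fresh-copy obj)              ; stand-in: copy pairs, share the rest
    (if (pair? obj)
        (cons (hash-copy (car obj)) (hash-copy (cdr obj)))
        obj))

  (define (hash-copy obj)
    (let ((entry (assq obj *copy-table*)))   ; step 1: look the object up
      (if entry
          (cdr entry)                        ;   reuse its copy if present
          (let ((copy (make-fresh-copy obj)))
            (set! *copy-table*               ; step 2: record the new copy
                  (cons (cons obj copy) *copy-table*))
            copy))))

Under any sequential order, copying a pair with eq? car and cdr does the
lookup-then-record for one field before the other field is examined, so the
second field reuses the first copy. Interleaved evaluation could perform
both lookups before either record, yielding two distinct copies.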
manis@cs.ubc.ca (Vincent Manis) (03/28/91)
I guess the reason I feel so strongly on this matter is that I once was a
humble peon involved in writing an ALGOL-68 compiler. ALGOL-68 was dedicated
to the proposition that no matter how strange a construct was, if some
meaning could be found for it, it should be allowed. The compiler my group
worked on (never completed) had an entire lexical pass devoted to matching
parentheses. Each time we thought we had it working, one of our team members
would ask what happened if we had a certain parenthesis, then 50 pages of
code, and then another symbol. Sure enough, we had another special case.

Languages should be designed in such a way that dubious practices are
allowed if they don't interfere with compilation or fast execution, but
prohibited if they do. I have to hunt very hard for a case where I'd want to
write code which is dependent upon order of evaluation;

  (+ (begin (display "Hello, ") 2) (begin (display "world!") 3))

is an interesting, if stupid, example of interleaving 2 separate code
streams. On the other hand, clearly not prescribing the order of execution
can make things easier for the compiler. There is little to be gained for
the programmer, and something to be lost in efficiency, by prescribing
order.

I might note that any dialect of Scheme which provides syntax extension
allows the programmer to write sequenced procedure call forms, e.g.,

  (+-sequentially (begin (display "Hello, ") 2)
                  (begin (display "world!") 3)).

I would encourage some of the pro-prescribers to tell us how frequently they
do this sort of thing.

--
\  Vincent Manis <manis@cs.ubc.ca>    "There is no law that vulgarity and
\  Department of Computer Science     literary excellence cannot coexist."
/\ University of British Columbia        -- A. Trevor Hodge
/  \ Vancouver, BC, Canada V6T 1W5 (604) 228-2394
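
[For concreteness, one way such a sequenced call form might be written,
using the pattern-language macros that appear as an appendix to the report;
the exact syntax-extension mechanism varies by implementation, and the name
+-sequentially is just the one used in the example above.]

  (define-syntax +-sequentially
    (syntax-rules ()
      ((_ e1 e2)
       (let* ((t1 e1)      ; LET* guarantees e1 is evaluated before e2
              (t2 e2))
         (+ t1 t2)))))

  ;; (+-sequentially (begin (display "Hello, ") 2)
  ;;                 (begin (display "world!") 3))
  ;; prints "Hello, world!" and returns 5 in every implementation,
  ;; whatever its argument evaluation order.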
jeff@aiai.ed.ac.uk (Jeff Dalton) (03/29/91)
What it comes down to is that Scheme is the sort of language that doesn't
specify such things.

The greater range of optimizations available when the order of argument
evaluation is not specified is not, in my view, a sufficient reason for not
specifying the order. Indeed, a compiler would often be able to reorder
anyway, because it could often tell that some expressions involve no side
effects. There are already compilers that perform that sort of analysis, at
least to some extent. So I do not think the efficiency gains, though real,
are sufficiently great.

As additional, though not necessarily convincing, evidence, I will note that
Common Lisp does specify the order of evaluation, yet I have not heard loud
complaints that it is a massive blow to efficiency there.

Here are 2 arguments in favor of specifying the order of argument
evaluation:

1. Programs will be more portable. Since bugs related to argument order can
   be difficult to find, this may represent a significant savings in
   programmers' time, which is more important than machine cycles.

2. The operation of the program will be more closely related to the
   structure of the source text. (Assuming, of course, that the specified
   order is left-to-right.) Moreover, LET* will be used to accomplish one
   thing (values of later variables depend on values of earlier ones) rather
   than two (as before plus sequential computation of the values).

These arguments are not, of course, conclusive; but then neither are the
counterarguments. In the end, it becomes a question of which factors are
considered most important. For Scheme, one of them was the desire to move in
the direction of the Algol/C family.

-- jd
sfk@otter.hpl.hp.com (Steve Knight) (03/29/91)
I'm glad to see that the issue of order of evaluation of call-arguments is
still an active issue. I argued with Jim Miller a while ago, in favour of a
fixed order of evaluation, without much success then I must admit!

There seem to be two arguments in favour of leaving evaluation order
unspecified. Firstly, it discourages programmers from exploiting it as a
feature, which Jim argued was undesirable. Secondly, there is an efficiency
issue concerning optimisation of register usage in compilers. There is also
a frequently raised concern about making progress towards parallel-processor
implementations -- but this disregards the serious issue of synchronisation
across shared structures, so I won't discuss it here.

The argument in favour of specifying an order is essentially one of
robustness. Unspecified order implies that a faulty program, which relies on
a particular order (by design or accident), may appear to work and may
become highly trusted as a result of its coincidental success. Shifting to a
new compiler may silently introduce intermittent problems. A fixed order
eliminates this undesirable behaviour. Another solution, which I won't
pursue further, is to provide a "lint" facility to diagnose such problems.
This would obviously have to work across module boundaries.

The structure of the debate resolves itself into a discussion of whether or
not a fixed order of evaluation causes unacceptable efficiency costs, and
whether or not the 'porting problem is serious. It should go without saying
that a free order of evaluation must be as-or-more efficient than a fixed
order. Neither is there any doubt that the 'porting problem is undesirable
in any language.

My belief is that a key design principle of Lisp-like languages is that they
provide "noisy" protection. A faulty program never executes the fault
without complaint. To ensure this principle, the preconditions of all
constructs must be checked either at run-time or compile-time. I also
believe that this principle always takes precedence over efficiency
considerations (where feasible). Free order of evaluation of call-arguments
contradicts this.

I've not seen any adequate statistics showing the relative costs of free vs.
fixed orders. Intuitively, the cost is likely to be of a small order,
perhaps 10%, and so generally acceptable at development time. So we are
mainly concerned with producing high-quality code at delivery time. Any
statistics should compare four systems of code generation:

(1) free order
(2) a couple of fixed-order variants
(3) the same fixed-order variants in the presence of intra-module analysis
    of side-effect-free functions
(4) as above, but with inter-module analysis

Since I've seen no work that comes remotely close to this -- though it may
exist -- I believe we are reduced to arguing on the basis of structural
understanding.

The seriousness of the 'porting problem is bound to be a matter of judgement
since no worthwhile statistics on this exist (to my knowledge). One comment
that is occasionally voiced is that imperative languages such as C have the
same 'minor' shortcoming and it is not a matter of complaint. This analogy
is flawed in two important ways. Firstly, the inability to return structured
results in the same free fashion as Scheme changes the coding style. It is
common to see sequences such as

  foo( x, y, &answer1, &answer2 );
  bar( answer1, answer2 );

in which the order of evaluation is incidentally defined. Secondly, the C
language suffers from such very serious shortcomings that this issue drops
below the horizon of concern! (e.g. no automatic store management policy.)

As a matter of record, I've seen this problem arise with very serious
consequences in the context of a 100,000 line FORTRAN program. So there is
no doubt in my mind that the consequences of this problem can be extremely
severe. What is more arguable is that this situation is unlikely to arise in
Scheme because of different design tradeoffs.

The most serious scenario, in my view, is where a reliance on order of
evaluation is accidentally introduced. Deliberate reliance, after all, can
be argued to be simply a matter of poor education. The scenario I have in
mind is in the maintenance of a large program, where it is not practical to
have a complete understanding of every component one is dealing with. In
this case, it is plausible to add a side-effect to a routine which is
self-evidently side-effecting, without noticing that the original
side-effect is *local*. The addition may create a non-local side-effect and
imply that some, but not all, calls of the routine are illegal.

There are several points which need to be expanded on here. A "local" side
effect is meant to be one which has no synchronisation consequences. A good
example would be a 1-value cache which optimises repeated calls with the
same arguments. It should also be noted that an ordinary cross-reference
listing would not be adequate to check that such a defect was not
introduced. Not only must all calls to the modified routine be checked, but
all calls to the callers must be checked, and all their callers' calls, etc.
It would be possible to construct such an analysis, of course, which would
lead to the "lint" style solution.

Generally, I believe that the imperative nature of Scheme is under-supported,
with undesirable consequences. The free order of evaluation is an example of
this.

Steve
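
[A minimal sketch of the sort of "local" side effect described above -- a
1-value cache. The names expensive-op and cached-op are hypothetical,
introduced only for illustration.]

  (define (expensive-op x) (* x x))        ; stand-in for a costly computation

  (define cached-op
    (let ((last-arg #f) (last-result #f))
      (lambda (x)
        (if (equal? x last-arg)
            last-result                    ; repeated call: reuse cached answer
            (let ((result (expensive-op x)))
              (set! last-arg x)            ; the mutation is purely internal, so
              (set! last-result result)    ; callers cannot observe call order
              result)))))

The cache mutates state, yet no caller can tell in which order two calls to
cached-op happened; quietly changing the routine to update shared, visible
state instead would turn it into the non-local side effect warned about
above, and only some call sites would be affected.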
carlton@husc10.harvard.edu (david carlton) (03/31/91)
In article <1991Mar27.101357.20383@news.cs.indiana.edu> hieb@heston.cs.indiana.edu (Robert Hieb) writes:
    Scheme is hiding its head in the sand with respect to argument
    evaluation order and unspecified return values. Arguments do get
    evaluated in some order and values are returned. Wishing that it
    weren't so is of little use to the producers and consumers of
    supposedly portable programs.
I simply do not understand what you are trying to say here.
"Producers and consumers of supposedly portable programs" certainly
can get what they want without having an order of evaluation specified
- they simply shouldn't write code which depends on such things. I
for one am perfectly capable of writing code which doesn't depend on
order of evaluation, as is any competent programmer, and Scheme is
hardly the only language (C comes to mind) to refuse to specify an
order of evaluation.
It would be useful sometimes, to be sure. Very rarely, at least in my
code, but such situations do arise. For example, consider the
following function:
  (define reverse!
    (lambda (list)
      (let loop ((acc '())
                 (list list))
        (if (not (pair? list)) acc
            (let ((end (cdr list)))
              (set-cdr! list acc)
              (loop list end))))))
What I might prefer to write would be something like
  (define reverse!
    (lambda (list)
      (let loop ((acc '())
                 (list list))
        (if (not (pair? list)) acc
            (loop (begin (set-cdr! list acc) list)
                  (cdr list))))))
(or perhaps even
  (define reverse!
    (lambda (list)
      (let loop ((acc '())
                 (list list))
        (if (not (pair? list)) acc
            (loop (set-cdr! list acc)
                  (cdr list))))))
if we are going to have set-cdr! return something useful), which would
of course depend on a certain order of evaluation. In particular, it
would depend on the second argument being evaluated before the first
one. And here we have a problem, because I think that almost all
proponents of a specified order of evaluation would want to specify a
left to right order of evaluation, and it is simply not the case that
your side effects are always going to want to be performed in that
order. Here, of course, the fix would be a trivial one, namely to
reverse the order that loop takes its arguments, but you can't always
do that - the function being called might be a library function, or
called a thousand other places in your code before you realize that in
one case you really want the arguments to be in the other order for
purposes of side effects, or might be a dotted lambda, or you might
even want the arguments to come in one order for purposes of side
effects in one call and in another order in another call. So what we
have is one order of evaluation arbitrarily specified, an order which
once in a blue moon will actually turn out to be useful because of
side-effects but will in most cases not be the least bit useful, even
if side effects are going on and another order of evaluation might
allow you to express your procedure with a little less typing.
Furthermore, a good compiler should be able to produce just as good
code either way in those cases where an explicit order of evaluation
will actually help, and, as many people have pointed out, can produce
better code in general.
Here's a comment on this that I found in the LaTeX source for the
R^{3.99}RS:
\todo{Freeman:
I think an explanation as to why evaluation order is not specified
should be included. It should not include any reference to parallel
evaluation. Does any existing compiler generate better code because
the evaluation order is unspecified? Clinger: yes: T3, MacScheme v2,
probably MIT Scheme and Chez Scheme. But that's not the main reason
for leaving the order unspecified.}
I am curious what their 'main reason' is - can anybody on the
committee enlighten us as to that? I would guess that they find a
specified order of evaluation aesthetically displeasing and that they
think it leads to unclear code, but of course I can't tell.
What do people think about the suggestion to have procedures such as
the mutation procedures that return an unspecified value in fact return
no value at all? This was mentioned as a possibility in the proposal
to add multiple return values for Scheme, and T seems to be leaning in
that direction. I'm not sure what I think about it or, for that
matter, about the whole multiple return value concept - while there
are often situations where I want to return multiple values, it is
hard for me to think of good notation for that sort of thing. Could
someone (if anyone has read this far in my posting) post a list of
pros and cons for multiple return values in general and for having
mutation procedures in particular return no value? And just what
would (or would be allowed to) happen if something returned no value
in a situation where a value was expected?
david carlton
carlton@husc10.harvard.edu
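
[For reference, one notation for multiple return values is the values /
call-with-values interface that was later standardized in R5RS; the proposal
david mentions may differ in detail, and quotient+remainder below is a
hypothetical example.]

  (define (quotient+remainder n d)
    (values (quotient n d) (remainder n d)))

  (call-with-values
    (lambda () (quotient+remainder 17 5))
    (lambda (q r) (list q r)))          ; => (3 2)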
sfk@otter.hpl.hp.com (Steve Knight) (04/02/91)
Daniel's example of code generation for

  (baz (foo 1) 2 3 (+ 4 (bar 5)))

assumes a very strong interpretation for "insists we evaluate from left to
right". As a consequence, even integers are evaluated from left to right. If
we take the more relaxed and, it seems to me, more typical view that the
compiler preserves apparent order of evaluation, then we only have one
constraint to worry about -- namely that "(foo 1)" is evaluated before
"(bar 5)". We could even dispose of this constraint given that either foo or
bar were known to be side-effect free (or, more generally, side-effect
independent). Under the worst assumption, that foo and bar might interfere
with one another, we get

  load r0,1
  call foo       ;; Assume args are passed in lowest numbered regs.
  load r2,r0     ;; SAVE
  load r0,5
  call bar
  load r1,4
  call +
  load r4,r0     ;; Assume result is returned in r0.
  load r0,r2     ;; RESTORE
  load r1,2
  load r2,3
  call baz

This causes two extra register-to-register instructions. The overall balance
here is

  4 calls
  4 load immediates
  4 load registers

Assuming that all instructions take one clock cycle (which is
ultra-pessimistic), this adds on a 20% slowdown (two extra instructions on
top of the ten needed under a free order). More typically, this would be a
10% slowdown, not counting the fact that the cost of this sequence is likely
to be swamped by the costs of full-arithmetic "+" and the contents of "foo"
and "bar".

Examples like this are helpful in that they help to gauge the likely penalty
of a fixed order of evaluation. Of course, we don't have statistics for the
likelihood of being able to prove that "foo" and "bar" are independent,
which eliminates the overhead entirely.

From looking at simple examples of this kind, I've come to the ROUGH
estimate that in development mode there might be an additional 10% slowdown
and code growth due to fixed order of evaluation (and other omitted
optimisations of the same type). In delivery mode, I reckon that the overall
cost will be on the order of 1%. This rough-and-ready estimate assumes that
some simple inter-module information is created and used, and modern
compiler techniques, too.

Because I've encountered problems due to free order of evaluation of
arguments in industrial programs and have found them unacceptable, and
because I find the efficiency arguments unconvincing, I believe that the
argument order should be fixed. An acceptable, though less attractive,
alternative would be a lint-like tool for detecting likely violations.

Steve
jinx@zurich.ai.mit.edu (Guillermo J. Rozas) (04/04/91)
    Because I've encountered problems due to free order of evaluation of
    arguments in industrial programs and have found them unacceptable,
    and because I find the efficiency arguments unconvincing, I believe
    that the argument order should be fixed. An acceptable, though less
    attractive, alternative would be a lint-like tool for detecting
    likely violations.

I'm just curious, but would all of you who advocate for fixing the order of
argument evaluation accept the following order:

All the even numbered arguments are evaluated first from right to left, and
then the odd numbered arguments are evaluated from left to right.

It would make all implementations consistent, it might make the task of
those who care about proving theorems simpler (they seem to claim that a
fixed order does), and would probably prevent people from abusing the order
of argument evaluation, so it seems to have all the properties that we all
want, except for freedom to the compiler.
jeff@aiai.edinburgh.ac.UK (Jeff Dalton) (04/05/91)
> Date: 4 Apr 91 04:11:49 GMT
> From: "Guillermo J. Rozas" <jinx@zurich.ai.mit.edu>
>
> I'm just curious, but would all of you who advocate for fixing the
> order of argument evaluation accept the following order:
>
> All the even numbered arguments are evaluated first from right to
> left, and then the odd numbered arguments are evaluated from left to right.

No. There's no reason not to pick a fixed order that's easier for human
readers to follow.
manis@cs.ubc.ca (Vincent Manis) (04/05/91)
In article <JINX.91Apr3231149@chamarti.ai.mit.edu> jinx@zurich.ai.mit.edu
writes:
>I'm just curious, but would all of you who advocate for fixing the
>order of argument evaluation accept the following order:
>
>All the even numbered arguments are evaluated first from right to
>left, and then the odd numbered arguments are evaluated from left to right.

I was going to post a suggestion that arguments be evaluated from right to
left consistently, but, on the whole, I prefer Jinx's suggestion. The only
improvement I would make is to replace 'even' with 'prime', and 'odd' with
'composite' (plus perhaps a footnote that all Mersenne numbers are to be
considered prime for the purposes of Scheme specification).

Rather more seriously, I would really like to see an example where insisting
on the order of evaluation IN AN APPLICATION results in demonstrably
improved readability. (Scheme isn't a purely functional language, and we all
love sequential evaluation in our special forms...) What I have generally
seen in the past is shorter code, not more readable code. What I would like
to see is a specific example, in two versions, one with and one without
sequential evaluation.

Since the trend in programming languages since at least the late 1960's[*]
has been not to specify order of evaluation, I think the onus is on those
who would change this practice to demonstrate compelling reasons for doing
so.

---
* ALGOL-68 was the first language of which I was aware which adopted this
rule. I remember a paper by Charles Lindsey in 1969 explaining why this was
a good thing.
---

I recently posted a peculiar example of the form

  (+ (begin (display "hello") 1) (begin (display "world!") 2))

and I do remember David Parnas once saying that the control system he
developed for the Tomcat [? Some US Navy plane, anyway] interleaved code
this way (they didn't want a multitasking kernel, because they were worried
about interrupt response), though that system was written in assembly
language. I do hope that advocates of sequential evaluation have a better
example than this.

--
\  Vincent Manis <manis@cs.ubc.ca>    "There is no law that vulgarity and
\  Department of Computer Science     literary excellence cannot coexist."
/\ University of British Columbia        -- A. Trevor Hodge
/  \ Vancouver, BC, Canada V6T 1W5 (604) 228-2394
oz@yunexus.yorku.ca (Ozan Yigit) (04/05/91)
Vincent Manis writes:
>... I think the onus is on those who
>would change this practice to demonstrate compelling reasons for doing
>so.

This is not a debating society, nor is Scheme trivial pursuit. The onus is
on all of us to bring forth conjectures on issues that clearly divide the
community, and see if they withstand critical examination.

Haven't we had enough of this silliness?

oz
---
What ought to disturb us are not mistakes   | Internet: oz@nexus.yorku.ca
in general, but only those of them that we  | Uucp: utzoo/utai!yunexus!oz
are powerless to correct. -- David Miller   | Phone: 1+416-736-5257-33976
hieb@heston.cs.indiana.edu (Robert Hieb) (04/06/91)
jinx@zurich.ai.mit.edu (Guillermo J. Rozas) writes:

>I'm just curious, but would all of you who advocate for fixing the
>order of argument evaluation accept the following order:
>All the even numbered arguments are evaluated first from right to
>left, and then the odd numbered arguments are evaluated from left to right.
>It would make all implementations consistent, it might make the task
>of those who care about proving theorems simpler (they seem to claim
>that a fixed order does), and would probably prevent people from
>abusing the order of argument evaluation, so it seems to have all the
>properties that we all want, except for freedom to the compiler.

I'll bite. In fact, I will take it as a serious suggestion. No. It would be
a perverse joke at the expense of those who must actually use or explain the
order of evaluation (implementors, provers, instructors, authors).

However, I would be happy with a right-to-left evaluation order. This would
suggest strongly that the (unavoidable!) order of evaluation of the
procedure and arguments on procedure call is not meant to be used for
sequencing. Only the perverse would do so.

This would also tend to catch reliance on argument evaluation order. It is
natural for a naive programmer (and who isn't at times?) to assume a
left-to-right order of evaluation. However, programs that actually depend on
such an order wouldn't work, and thus the dependency would be flushed out.
This reflects the trend that seems to be evolving with respect to
unspecified return values---returning some "useless," but consistent, value.
Although there is no way to prevent the use of such a value, again only the
perverse would do so.

As an example of the sort of program where evaluation order can be
important, consider a program that uses pseudo-random numbers. One might
innocently write:

  (alpha (random) (random))

Unfortunately the mysterious:

  (let ([r (random)])
    (alpha (random) r))

is necessary if one wishes to ensure that the program will be consistent
across different implementations or even different versions of the same
implementation.

If Olivier Danvy is listening out there, he might wish to relate his sad
tale of the difficulties he encountered due to the interaction between
"gensym" and variant evaluation orders using a single version of a single
implementation.
markf@zurich.ai.mit.edu (Mark Friedman) (04/06/91)
I believe that there is an expressiveness argument in favor of keeping the
order of evaluation of combinations unspecified.

Scheme has a number of mechanisms for indicating a specific order of
evaluation (e.g. LET* and BEGIN). Scheme also has some mechanisms for
indicating that the order of evaluation is unimportant (e.g. combinations
and LET). The language is more expressive currently than if order of
evaluation were always specified, because a programmer would have no way to
express the fact that order of evaluation was not important.

-Mark

--
Mark Friedman
MIT Artificial Intelligence Lab
545 Technology Sq.
Cambridge, Ma. 02139

markf@zurich.ai.mit.edu
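
[A minimal illustration of the contrast; compute-x and compute-y are
hypothetical stand-ins for arbitrary computations.]

  (define (compute-x) 1)
  (define (compute-y . args) 2)

  ;; LET says "these initializations may happen in either order":
  (let ((x (compute-x))
        (y (compute-y)))
    (list x y))

  ;; LET* says "evaluate strictly top to bottom"; y may even depend on x:
  (let* ((x (compute-x))
         (y (compute-y x)))
    (list x y))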
manis@cs.ubc.ca (Vincent Manis) (04/06/91)
In article <22253@yunexus.YorkU.CA> oz@yunexus.yorku.ca (Ozan Yigit) writes,
in response to me:
>>... I think the onus is on those who
>>would change this practice to demonstrate compelling reasons for doing
>>so.
>
>This is not a debating society, nor is Scheme trivial pursuit. The onus
>is on all of us to bring forth conjectures on issues that clearly divide
>the community, and see if they withstand critical examination.

Huh? Some people propose a fairly radical change to Scheme, moving its
behaviour away from the norm in modern programming languages. They do so on
the fairly vague basis that `it would be more consistent' or `it's more
natural' or some such thing. The change would legitimize a programming style
which is known to be problematic, and has been known to be so since the
mid-1960's, at the very least. If it has any effect on program performance,
it is likely to be a negative one.

I would say that indeed the proponents of any change ought to be responsible
for putting together firm reasons why that change is necessary. I see no
reason why I should make their case for them. If their response is `if you
don't make that change, I won't use Scheme!' I will certainly respect them
for it, but it doesn't help me see the force of their argument.

I certainly remain open to cogent arguments on this subject. I'd just like
to see their reasons. However, the arguments presented so far simply have
not been compelling.

--
\  Vincent Manis <manis@cs.ubc.ca>    "There is no law that vulgarity and
\  Department of Computer Science     literary excellence cannot coexist."
/\ University of British Columbia        -- A. Trevor Hodge
/  \ Vancouver, BC, Canada V6T 1W5 (604) 228-2394
sfk@otter.hpl.hp.com (Steve Knight) (04/08/91)
Guillermo J. Rozas asks:
> I'm just curious, but would all of you who advocate for fixing the
> order of argument evaluation accept the following order:
> All the even numbered arguments are evaluated first from right to
> left, and then the odd numbered arguments are evaluated from left to right.

I *know* I shouldn't bite but I can't resist .... I would prefer it to
unconstrained order, but would consider it a ludicrous choice of order.
Trying to "sell" this order to my colleagues would give me an ulcer!

Steve
andy@DEC-Lite.Stanford.EDU (Andy Freeman) (04/09/91)
In article <1991Apr5.110615.20429@news.cs.indiana.edu>
hieb@heston.cs.indiana.edu (Robert Hieb) writes:
>This would also tend to catch reliance on argument evaluation order.
>It is natural for a naive programmer (and who isn't at times?) to
>assume a left-to-right order of evaluation. However, programs that
>actually depend on such an order wouldn't work, and thus the dependency
>would be flushed out.

If the order is defined, then why is it worthwhile to "flush out" reliance
on that definition?

The difference that this illustrates is not unlike the difference between
someone who collects tools based on their beauty and someone who measures
"tool beauty" in terms of function. There are tools that satisfy both, but
most don't.

Yes, I know that the trend is towards unspecified argument order. That being
the case, the arguments for it should be especially compelling, certainly
more so than "everyone else is jumping off that bridge too".

-andy
--
UUCP:    {arpa gateways, sun, decwrl, uunet, rutgers}!neon.stanford.edu!andy
ARPA:    andy@neon.stanford.edu
BELLNET: (415) 723-3088