[net.lang] Gotos; tail-recursion

gumby@mit-eddie.UUCP (David Vinayak Wallace) (05/31/84)

    Date: Wed, 30-May-84 00:53:10 EDT
    From: nessus@mit-eddie.UUCP (Doug Alan)

    Actually, gotos in a programming language (with labels) are winning as
    long as the language only allows one to goto in a forward direction.
    They're even better if the language provides for the passing of
    arguments to the goto destination.  If you can only goto in a forward
    direction, then you can't get spaghetti.  Both Lisp and CLU have these
    gotos.  In Lisp, they are called "throw"s and in CLU they are called
    "exit"s.  Both allow you to pass arguments.

Quite the contrary; throw (*throw) UNWINDS the stack, jumping backwards.
The two big features of throw are 1) that it unwinds the stack (rather
than just setting the PC), and 2) that it has a scope -- it's only
applicable within the lexical (Scheme) or dynamic (Lisp) scope of the
corresponding catch.  In my opinion, the dynamic throw is almost as bad
as an ordinary GOTO -- you can't clearly see the flow of control just by
reading the code.
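[Editor's note: C's setjmp/longjmp pair is a rough analogue of the
dynamic catch/throw under discussion, and makes the stack-unwinding
visible.  This sketch is mine, not from the posts; the name
count_until_ten is arbitrary, and since longjmp can only carry an int
code, the "thrown" value is returned after the unwind instead.]

```c
#include <setjmp.h>
#include <stdio.h>

static jmp_buf foo_catch;                /* the dynamic "catch 'foo"   */

/* Count upward, "throwing" out of the infinite loop when x reaches 10.
   longjmp unwinds the stack back to the matching setjmp -- it does
   more than just set the PC. */
static double count_until_ten(void) {
    volatile int x = 1;                  /* volatile: modified between
                                            setjmp and longjmp         */
    if (setjmp(foo_catch) == 0) {        /* 0 means normal entry       */
        for (;;) {
            printf("%d\n", x);
            x = x + 1;
            if (x == 10)
                longjmp(foo_catch, 1);   /* the "throw"                */
        }
    }
    return 3.1416;                       /* reached only via the throw */
}
```

As in the Lisp, the jump goes forward in the text even though the
"label" (the setjmp) appears earlier.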

    What do people think about tail recursion?  ... A tail
    recursive procedure call is in effect a generalization of a goto (though
    it looks like a procedure call), because you never have to return from a
    tail recursive procedure call if you don't want to.  It is a goto that
    allows the passing of arguments.  Is this a good thing?  Or should it be
    shunned like gotos?

The big problem with tail-recursive code is that it's almost impossible
to debug, because new frames aren't pushed onto the stack.  Unless you
shun other structured iterative constructs (e.g. 'for' in C), I don't
see why you should shun tail recursion.  It's not a generalisation of
goto; it's only useful in one of the places goto is.

    If in an introductory programming course, you teach anything other than
    an object oriented programming language, such as Lisp, CLU, or Smalltalk
    (sorry folks, Pascal does not count), you are brain-damaging your
    students almost as much as if you taught them Basic.

It's not clear I'd call Lisp an object-oriented language.  The worst
confusion engendered in the Lisp class we're teaching at Stanford
right now is with object-oriented systems.  My complaint with teaching
languages like Pascal is that all the features designed to make them
easy to program in actually get in the way of the student's
understanding of the nature of programming, by presenting arbitrary
distinctions between "system" and "user" code (functions and operators,
and ...), and by presenting system-dependent abstractions (say, the I/O
system in Pascal).  Whatever else its failings, at least Lisp is more
consistent in its view of the world.

nessus@mit-eddie.UUCP (Doug Alan) (06/05/84)

	From: gumby@mit-eddie.UUCP (David Vinayak Wallace):

>	Quite the contrary; throw (*throw) UNWINDS the stack, jumping
>	backwards.  The two big features of throw are 1) that it unwinds
>	the stack (rather than just setting the PC), and 2) that it has
>	a scope -- it's only applicable within the lexical (Scheme) or
>	dynamic (Lisp) scope of the corresponding catch.  In my opinion,
>	the dynamic throw is almost as bad as an ordinary GOTO -- you
>	can't clearly see the flow of control just by reading the code.

It doesn't matter that the label comes before the jump.  The jump is
still in a forward direction.  (I'm using "forward" to mean the
direction that can't possibly cause you to loop.)  The following two
pieces of code do the same thing, even though one has the label at the
beginning, and one at the end:

(1)	(print (*catch 'foo
	          (let ((x 1))	
	            (do-forever (print x)
	     			(setq x (1+ x))
	     			(if (= x 10.)
				    (*throw 'foo 3.1416))))))


(2)	po: stream := stream$primary_output()
	x: int := 1
	while true do
	   stream$putl(po, int$unparse(x))
	   x := x + 1
	   if x = 10 then exit foo(3.1416) end
	end
	   except when foo(y: real):
	      stream$putl(po, real$unparse(y))
	   end

In both, the jump goes in the same direction.

And who says that GOTOs only set the PC?  If you have a language that
allows GOTOs, and it also lets you GOTO out of a block that has its own
local variables, then it had better do more than just set the PC (unless
the compiler has already accounted for this by preallocating the
variables).  Here part of the stack must be unwound.

And, not only that, but who says that GOTOs can't have a scope?  Lisp
has GOs, but you can't GO out of the PROG statement.  In some Pascals
there are GOTOs, but you must declare a label, and you can't jump to
that label if you are not in its scope.

It seems to me that a GOTO is any statement that involves disrupting the
natural flow of control of the program and transferring control to
someplace else that has been named.  Unrestricted GOTOs, if used
unrestrictedly, are definitely bad things.  But some types of GOTOs are
definitely useful (like loop exits, etc.).  We have to decide what
restrictions on GOTOs make them palatable.
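[Editor's note: the "loop exit" use is easy to show in C.  This example
is mine; the names find_pair, out_i, out_j are arbitrary.  It's the
restricted, "palatable" kind of GOTO: a forward jump to a declared
label, used only to escape a nested loop.]

```c
/* Find the first pair (i, j), scanning row by row, with i * j == target.
   The forward goto is a two-level loop exit: it only jumps forward,
   to a label that appears later in the text. */
static int find_pair(int target, int *out_i, int *out_j) {
    for (int i = 1; i <= 10; i++)
        for (int j = 1; j <= 10; j++)
            if (i * j == target) {
                *out_i = i;
                *out_j = j;
                goto found;          /* exit both loops at once */
            }
    return 0;                        /* no such pair */
found:
    return 1;
}
```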

I agree that dynamic throws are incredibly bad.  I should have been more
restrictive in my specification of good GOTOs.  Not only are the labels
for both THROW and CATCH computed at run-time, but even if they weren't,
you still wouldn't always be able to tell, just by looking at a program,
where a THROW is THROWing to.  Ada also has this problem with its error
system.  CLU does NOT have this problem with its EXIT or SIGNAL (for
signalling errors) system.  I haven't yet decided whether lexical THROWs
are very good things.  They're much better than dynamic THROWs, but you
can still do some pretty weird things.

>    Unless you shun other structured iterative constructs (e.g. 'for'
>    in C) I don't see why you should shun tail recursion.  It's not a
>    generalisation of goto; it's only useful in one place goto is.

No, you're wrong.  Tail recursive procedure calls are a generalization
of GOTO.  They are at least as powerful.  You could implement GOTO with
tail recursive procedure calls.  Following is a simple translation of a
BASIC program with GOTOs to a (Bad!) Lisp program with tail recursion:

(1)
	3 FO = 0
	5 GOTO 10
        7 END
	10 PRINT "Hello"
	20 FO = FO + 1
	30 IF FO = 10 THEN 50
        40 GOTO 10
        50 GOTO 7


(2)	(defun L3 () (setq fo 0) (L5))
	(defun L5 () (L10))
	(defun L7 () nil)
	(defun L10 () (print "Hello") (L20))
	(defun L20 () (setq fo (1+ fo)) (L30))
	(defun L30 () (if (= fo 10) (L50)
	 	      	(L40)))
	(defun L40 () (L10))
	(defun L50 () (L7))

Of course, you could do the same without tail recursion, but you
would probably get discouraged pretty fast when you started getting
stack overflows if your program ran for more than a few thousand steps.
With tail recursion, this would never discourage you, so you might be
tempted to do horrible things like this.  I'm not saying that tail
recursion is bad.  Actually I think it is a good idea.  But I think this
problem is something to think about.
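[Editor's note: the same transliteration works in C -- this sketch is
mine, not from the post.  C compilers are not required to turn tail
calls into jumps, though most do at higher optimization levels; without
that, each "GOTO" pushes a frame, which is exactly the stack-overflow
hazard described above.]

```c
#include <stdio.h>

/* Each BASIC line becomes a function; each GOTO becomes a tail call. */
static int fo;

static void L3(void);  static void L5(void);  static void L7(void);
static void L10(void); static void L20(void); static void L30(void);
static void L40(void); static void L50(void);

static void L3(void)  { fo = 0; L5(); }
static void L5(void)  { L10(); }
static void L7(void)  { }                         /* END */
static void L10(void) { printf("Hello\n"); L20(); }
static void L20(void) { fo = fo + 1; L30(); }
static void L30(void) { if (fo == 10) L50(); else L40(); }
static void L40(void) { L10(); }
static void L50(void) { L7(); }
```

Running L3() prints "Hello" ten times and halts -- the control flow is
the BASIC program's, line for line.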

>     It's not clear I'd call Lisp an object-oriented language.  The worst
>     confusion engendered in the Lisp class we're teaching at Stanford
>     right now is with object-oriented systems.  My complaint with teaching
>     languages like Pascal is that all the features designed to make them
>     easy to program in actually get in the way of the student's
>     understanding of the nature of programming, by presenting arbitrary
>     distinctions between "system" and "user" code (functions and
>     operators, and ...), and by presenting system-dependent abstractions
>     (say, the I/O system in Pascal).  Whatever else its failings, at
>     least Lisp is more consistent in its view of the world.

Well, I know that Smalltalk people use "object oriented" to mean that
operations are run-time polymorphic on the type of the argument.  But
CLU people generally use "object oriented" to mean something like this:
there are no second-class objects (though they've stopped using the
term as much ever since the Smalltalk people stole it out from under
them. . . .)  In Pascal you can create integers, reals, and booleans,
and return them to a higher scope than that in which they were created,
but you can't do that with arrays, records, etc.  In CLU, Lisp, and
Smalltalk you can do this with an object of any type.  I like this
definition of "object oriented" better, because it's clear why the name
is "object oriented", and I don't see what there is in the name that
implies generics.  Though I think generic operations are great, I think
that concept should be called something other than "object oriented".
-- 
				-Doug Alan
				 mit-eddie!nessus
				 Nessus@MIT-MC

				"What does 'I' mean"?

 

mwm@ea.UUCP (06/07/84)

#R:mit-eddi:-198600:ea:5400008:000:1655
ea!mwm    Jun  7 11:21:00 1984

/***** ea:net.lang / mit-eddi!nessus /  8:55 am  Jun  5, 1984 */
Of course, you could do the same without tail recursion, but you
would probably get discouraged pretty fast when you started getting
stack overflows if your program ran for more than a few thousand steps.
With tail recursion, this would never discourage you, so you might be
tempted to do horrible things like this.  I'm not saying that tail
recursion is bad.  Actually I think it is a good idea.  But I think this
problem is something to think about.

				-Doug Alan
 /* ---------- */

I'm completely confused. As I understand it, tail recursion is an
implementation technique that allows languages to generate faster
code. People may write code to take advantage of this, but it still
doesn't change the semantics of the language. Hence comparing it
to gotos, loops, etc. doesn't seem to make sense.

You also have a logical fallacy. You seem to think that being able to
do bad things with a language feature is a good reason to consider
taking it out of a language. Consider the following piece of ALGOL-W
that I've had handed in to me:

	IF x = y THEN ELSE
	    BEGIN
		<I forget what goes here>
	    END

Now, this is a pretty horrible thing. This doesn't make the if-then-else
construct bad, it makes the programmer bad. Let me go out on a limb by
extending this:

	There are *no* bad language constructs, merely bad uses for them.

There are, however, good language constructs: those that make it easier for
me to express my algorithms. If-then-else is better than if-goto for that
reason. Likewise, data structures and data abstraction are better than arrays.

	<mike

neal@denelcor.UUCP (Neal Weidenhofer) (06/13/84)

**************************************************************************

>	There are *no* bad language constructs, merely bad uses for them.
>
>There are, however, good language constructs: those that make it easier for
>me to express my algorithms. If-then-else is better than if-goto for that
>reason. Likewise, data structures and data abstraction are better than arrays.
>
>	<mike

	I have to take issue with this.  There appear to be certain
language constructs (e.g., GOTO) that make it easier to express an
algorithm but make it more difficult or time-consuming to maintain a
program.

	That, to me, is the most important lesson of the last 10-20 years
of software research and development--expressing the algorithm is only the
tip of the iceberg of software costs.  I'm not even considering the rest
of the initial development in this context, just that maintenance costs
(debugging and improving) in many cases completely swamp the initial
development costs.

	For this reason, I consider language constructs (or any other
environmental features) "bad", or at least "attractive nuisances", if they
make it easy to develop a program but hard to maintain it.

			Regards,
				Neal Weidenhofer
"The law is for protection	Denelcor, Inc.
	of the people"		<hao|csu-cs|brl-bmd>!denelcor!neal

nessus@mit-eddie.UUCP (Doug Alan) (06/14/84)

>	From: mwm@ea.UUCP

>	I'm completely confused. As I understand it, tail recursion is an
>	implementation technique that allows languages to generate faster
>	code. People may write code to take advantage of this, but it still
>	doesn't change the semantics of the language. Hence comparing it
>	to gotos, loops and etc. doesn't seem to make sense.

Well, it depends on what you mean by "the semantics of the language".
If you mean by that, some pure mathematical notion, then tail-recursion
doesn't change the semantics.  But in that case you don't need loops and
gotos: you can do gotos with procedure calls and loops (even infinite
loops) with recursion, because you don't care about efficiency and you
don't care about stack overflows -- when your stack is infinitely big,
you can't have an overflow.  But in real implementations, you can't
program like that because you have to worry about efficiency and stack
overflows (well you don't HAVE to worry about efficiency, but you do
HAVE to worry about stack overflows).

Tail-recursion allows you to pretend that your implementation is a pure
mathematical model, and still get efficiency and no stack overflows.
You can do infinite loops with recursion and gotos with procedure calls,
and have it all work fine in the real world.  In this sense,
tail-recursion does change the semantics of the language.
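[Editor's note: concretely, a tail-call-optimizing implementation turns
the recursive definition into the loop below.  The pair of functions is
my own illustration of the equivalence, not anything from the post.]

```c
/* Tail-recursive sum 1 + 2 + ... + n, accumulator style. */
static long sum_rec(long n, long acc) {
    if (n == 0)
        return acc;
    return sum_rec(n - 1, acc + n);   /* tail call: nothing left to do
                                         in this frame afterwards      */
}

/* What a tail-call-optimizing compiler effectively produces: the call
   becomes a goto back to the top with rebound arguments, so the stack
   never grows. */
static long sum_loop(long n) {
    long acc = 0;
    while (n != 0) {
        acc = acc + n;
        n = n - 1;
    }
    return acc;
}
```

Without the optimization, sum_rec needs a stack frame per step and a
long enough loop overflows; with it, the two functions are the same
code.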

>	You also have a logical fallacy. You seem to think that being
>	able to do bad things with a language feature is a good reason
>	to consider taking it out of a language.

You have a logical fallacy.  The alleged fallacy you're talking about
isn't in the domain of logic -- it's a matter of taste and practicality.
I'd like to see, though, a logical proof that I'm wrong.  Then I can
show you some proofs for the existence of God.  Aquinas had a couple
good ones.

Besides, I never said that just because you can do bad things with a
language feature, you should remove it from the language.  I just think
it's a factor to consider just like everything else.  When you design a
language, you don't want to put in everything in the world.  If you do,
you get PL/1.  You want to put in the minimum number of features that
integrate well together and let you do everything you want to easily.
You should consider the negative effects of a feature as well as its
benefits.  One of your design considerations may very well be that your
programming language is going to be used for large software projects by
programming teams, and in that case the language should do its best to
encourage good programming practices without being limiting.  There are
always trade-offs, and they should be considered.

And like I said before, I think that tail-recursion is a good feature.
In fact, I think it is a great feature.  I think its advantages far
outweigh its disadvantages.  I was just pointing out its major
disadvantage and asking if anyone thinks this disadvantage is important.
-- 
				-Doug Alan
				 mit-eddie!nessus
				 Nessus@MIT-MC

				"What does 'I' mean"?

 

ed@mtxinu.UUCP (06/18/84)

While I must agree that there are no bad language constructs,
only bad uses for them, I must also say that I can often find
a valid use for a "bad" construct.  Consider the following
implementation of a debugging printf macro:

	#define	Dprintf	if(!cond); else fprintf

It can be inserted anywhere in the code.  The obvious implementation,
namely

	#define	Dprintf	if(cond) fprintf

can't be put just anywhere without affecting other statements.

While I do not think that including this construct in the
middle of the text of a program is good, it seems reasonable
to encapsulate it in one place.  Then, although it may be
a confusing construct, the reader needs to understand it
only once and can then decide that it works, rather than trip
over it in several places.
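[Editor's note: a small demonstration of why the null-then version is
needed.  The functions and names here are my own, hypothetical ones;
the point is that the "obvious" macro swallows a following else.]

```c
#include <stdio.h>

static int debug = 0;                        /* debugging output off  */

#define Dprintf_naive if (debug) fprintf     /* the "obvious" macro   */
#define Dprintf if (!debug) ; else fprintf   /* the safe version      */

/* With the naive macro, the else below binds to the macro's hidden
   if (debug), not to if (x > 0) -- the classic dangling else. */
static int classify_naive(int x) {
    int r = 1;
    if (x > 0)
        Dprintf_naive(stderr, "positive\n");
    else
        r = -1;     /* intended for x <= 0; actually runs when x > 0
                       and debug is off, and never when x <= 0!       */
    return r;
}

/* The null-then trick leaves the outer if/else intact. */
static int classify_safe(int x) {
    int r = 1;
    if (x > 0)
        Dprintf(stderr, "positive\n");
    else
        r = -1;     /* correctly bound to if (x > 0) */
    return r;
}
```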

-- 
Ed Gould
ucbvax!mtxinu!ed

mwm@ea.UUCP (06/18/84)

#R:mit-eddi:-198600:ea:5400009:000:982
ea!mwm    Jun 17 20:59:00 1984

/***** ea:net.lang / denelcor!neal / 10:19 am  Jun 13, 1984 */
>	There are *no* bad language constructs, merely bad uses for them.

	For this reason, I consider language constructs (or any other
environmental features) "bad", or at least "attractive nuisances", if they
make it easy to develop a program but hard to maintain it.

				Neal Weidenhofer
/* ---------- */

I don't see any contradiction in this. I claim a construct is good if
it makes it easier for me to code an algorithm, meaning that it is close
to the way I think of the algorithm. You claim a construct is bad if
it makes a program easy to develop but hard to maintain. By my definition
of "easy to develop", there should be no such constructs. A construct
that is close to the way you think about an algorithm should be both
easy to develop and easy to maintain, and one that is hard to maintain
should be (in some sense) far from the way you think of the algorithm,
and hence should make developing the code harder.

	<mike

cjl@iuvax.UUCP (06/20/84)

> There are *no* bad language constructs, merely bad uses for them.

I shall give some counter examples :

   In Pascal the strict declaration order of LABEL, CONST, TYPE, VAR
essentially precludes the possibility of writing an abstract data type
in one page. Frequently, we see the type declaration on page 1, the
var declaration on page 9, and the related procedures on page 100!

   In a language like Modula-2, where goto is not supported, there is
no way to code exception-handling procedures nicely, as there is no
exception-handling construct either.

From the above examples, we can see some consequences of the premature
adoption of restrictive language constructs to enforce certain software
disciplines which were generally accepted at the time the language was
designed. They become BAD as the theory of software engineering evolves.

A language like C, with its emphasis on flexibility, has fewer problems.
Usually it won't hurt you the way the above languages do, but it won't
help you the way they do either.

C.J.Lo
cjl@Indiana@CSNet-Relay

mwm@ea.UUCP (06/25/84)

#R:mit-eddi:-198600:ea:5400010:000:831
ea!mwm    Jun 25 15:34:00 1984

/***** ea:net.lang / iuvax!cjl /  5:13 am  Jun 21, 1984 */
> There are *no* bad language constructs, merely bad uses for them.

I shall give some counter examples :

C.J.Lo
cjl@Indiana@CSNet-Relay
/* ---------- */

You gave some good counterexamples of bad language design: constructs were
taken out (goto from Modula-2, flexibility in data declarations in
Pascal), resulting in bad things happening.

These aren't cases of bad language constructs; these are cases of languages
forcing bad usage of language constructs.

I will retract my earlier statement, as I have since been reminded of a bad
language construct: [Those of you with weak stomachs should stop reading
now!!!] the thrice-damned FORTRAN arithmetic IF. Since any use of this
construct is a bad use, I can only conclude that it's a bad language
construct.

	<mike

mwm@ea.UUCP (06/29/84)

#R:mit-eddi:-198600:ea:5400012:000:1613
ea!mwm    Jun 29 15:09:00 1984

>About the Fortran arithmetic IF:
>
>I disagree that "any use of the arithmetic IF is a bad use".  I have seen
>several programs that used the three branches for three distinct actions,
>which I feel is as good a use for a three-way branch as can be made.

I agree, that is the best use for the silly thing.  I still think
breaking the test out into an else-if chain is both easier to read and
closer to what the implementor was thinking about. If the test is on
floating point numbers, you probably should use the else-if chain to
take care of fuzzy zeros. I'd be *very* interested in seeing a use for
the arithmetic IF that wasn't better written as the else-if chain.
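[Editor's note: an else-if rendering of the three-way branch, with the
fuzzy zero mentioned above.  The sketch is mine; sign3 and eps are
invented names, with eps the caller-chosen tolerance.]

```c
/* FORTRAN's arithmetic IF, e.g.  IF (X) 10, 20, 30,  dispatches on
   negative / zero / positive.  The else-if chain says the same thing
   readably, and lets a floating point test treat anything within eps
   of zero as "zero". */
static int sign3(double x, double eps) {
    if (x < -eps)
        return -1;          /* branch 10: negative    */
    else if (x <= eps)
        return 0;           /* branch 20: fuzzy zero  */
    else
        return 1;           /* branch 30: positive    */
}
```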

>Mind you, I'm not defending the arithmetic IF in particular, but I think the
>statement that all uses of a particular construct are bad is a bit rash.

You're right, but I'm prone to rashness :-).

>And yes, I know it's not self-documenting, but then again neither are many
>constructs in other languages.  For example, reading C programs can be a
>real headache sometimes.

Any programming language can be a headache to read sometimes. (See what I
mean about being prone to rashness? :-). Ever seen Pascal or ALGOL written
to look like FORTRAN - no indentation, if-then-else's with null then
statements and inverted tests, lots of GOTOs, etc.?

The point is that *any* programming construct can be uglified (gee, I seem
to be on a binge today :-). The question is whether there are programming
constructs that are always ugly. I think the arithmetic IF qualifies, and
the code would invariably be better if it were written out.

	<mike