lew (03/23/83)
A noun clause can be substituted for a noun. If the predicate nominative of the noun clause is replaced by another noun clause, we have an example of the recursive application of noun clauses.

    The cheese stands alone.
    The cheese the cat takes stands alone. (standard English)
    The cheese the cat the dog takes takes stands alone. (recursive, nonstandard)

These noun clauses are like Polish postfix expressions, with the verb as the operator. However, we can form the English analogue of infix while retaining the recursive property.

    The cheese which is taken by the cat which is taken by the dog stands alone.

(This does seem closer to standard English.) This is like:

    cheese + cat + dog = alone

The iterative example (the dog takes the cat etc.) is more like:

    dog += cat
    cat += cheese
    cheese = alone

The iterative version only uses "statement ::= variable op variable". The recursive versions use "statement ::= expr op expr" and "expr ::= expr op expr || variable". That is, they use the recursive definition of an expression (or noun clause).

Lew Mammel, Jr. ihuxr!lew
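Lew's recursive rule can be sketched as a small generator. This is a hedged illustration, not anything from the original post: the function name `center_embed` and the noun-list interface are invented here, but the output strings match Lew's "cheese" sentences.

```python
# A sketch of Lew's recursive rule: each call embeds one more noun clause
# inside the subject, with the verb "takes" acting as the postfix operator.

def center_embed(nouns):
    """Build 'the N1 the N2 ... takes ... takes stands alone' recursively."""
    def clause(ns):
        # Base case: a bare noun phrase ("the cheese").
        if len(ns) == 1:
            return "the " + ns[0]
        # Recursive case: noun + embedded clause + its verb.
        return "the " + ns[0] + " " + clause(ns[1:]) + " takes"
    return clause(nouns) + " stands alone"

print(center_embed(["cheese"]))
# the cheese stands alone
print(center_embed(["cheese", "cat", "dog"]))
# the cheese the cat the dog takes takes stands alone
```

Note that the recursive case is exactly "expr ::= expr op expr" and the base case is "expr ::= variable", so the generator is a direct transcription of the grammar above.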
donn (03/30/83)
References: ihuxr.{366,367,370,373}

Two comments about Lew Mammel's (ihuxr!lew) recursion discussion.

(1) Lew notes that the following sentence is hard to parse:

    The cheese the mouse the cat the dog the child the nurse the wife the farmer takes takes takes takes takes takes takes stands alone.

This sentence seems to have a recursive or embedding character to it. It might seem reasonable to say that the reason for its difficulty is that recursion is just intrinsically difficult. Lew made this kind of claim but retracted it in a later article:

    I sort of painted myself into a corner trying to defend my recursive
    "cheese stands alone" formulation. I put myself in the position of
    calling "a+b+c+d" recursive, since it can be recognized by a recursive
    syntactical rule.

But non-embedding ("regular") languages are a subset of embedding (e.g. context-free) languages, so you can always find recursive rules for non-embedding languages. The question is not whether you CAN use a recursive rule but whether you MUST. In terms of parsing, the difference is whether you need to use a stack to store a recursive environment.

It has been argued that the difficulty of sentences like Lew's (sometimes called "multiply center-embedded sentences") shows that human beings have trouble using stacks in parsing. When a sentence contains right- or left-embedded clauses it is presumably possible to omit the environment of the containing clause, so that a stack is not needed; for example the following variation of Lew's sentence uses only right-embedding and is easier to process:

    The farmer takes (the wife (that takes (the nurse (that takes (the child (that takes (the dog (that takes (the cat (that takes (the mouse (that takes (the cheese (that stands alone))))))))))))))

This effect has no direct relationship to "infix" vs. "postfix" structure.
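The stack argument can be made concrete with a toy tally. This is an invented sketch (the token scheme "NP"/"V" and the function name `max_pending` are assumptions, not anything from the thread): treat each subject noun phrase as a push and each verb as the pop that resolves it, and measure the deepest point.

```python
# Hedged sketch: how many subjects must be held pending at once ("stack
# depth") while reading a sentence left to right.

def max_pending(tokens):
    """Each 'NP' pushes a pending subject; each 'V' resolves (pops) one.
    Return the maximum number of unresolved subjects at any point."""
    depth = max_depth = 0
    for tok in tokens:
        if tok == "NP":
            depth += 1
            max_depth = max(max_depth, depth)
        elif tok == "V":
            depth -= 1
    return max_depth

# "The cheese the cat the dog takes takes stands alone": NP NP NP V V V
print(max_pending(["NP", "NP", "NP", "V", "V", "V"]))   # -> 3
# Right-embedded paraphrase resolves each subject immediately: NP V NP V NP V
print(max_pending(["NP", "V", "NP", "V", "NP", "V"]))   # -> 1
```

The center-embedded order forces the depth to grow with the number of clauses, while the right-embedded order keeps it constant, which is the sense in which the latter needs no stack.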
The problem is that in English relative clauses embed to the right of nouns but subjects appear to the left of verbs, allowing a clause to interpose between a subject and its verb. If relative clauses instead appeared to the left of the nouns they modify, we would have no problem; Lew's sentence would have the left-embedded structure:

    ((((((((((((((the farmer takes that) the wife) takes that) the nurse) takes that) the child) takes that) the dog) takes that) the cat) takes that) the mouse) takes that) the cheese) stands alone

Languages like Japanese and Tamil have relative clauses to the left of the nouns they modify and subjects that appear to the left of verbs, so translations of Lew's sentence sound perfectly okay to speakers of these languages... Instead they have problems with relative clauses which modify OBJECTS.

(2) As Lew Mammel observes, we must distinguish understanding the principle of recursion from being able to use it ourselves:

    ... [T]he difficulty of mentally parsing my "cheese" example which was
    formulated as a deeply nested noun clause, says nothing about the
    difficulty of understanding the principle by which it is generated. I
    think this was the point of those who criticized my example.

While this is a useful point, I think it misses something. People learn by example more efficiently than they learn by rule. Since recursion is so hard to "use" (as the examples above demonstrate), it can be very difficult to identify an application for recursion without the practice of many simple examples. Programming techniques like loops require less memory load to simulate and so they can be picked up by experimentation; recursion requires more memory load and hence the cost of experimentation is higher.
It is ironic that the factorial function is so often cited as a classic example of recursion when it is so easy to do with iteration; I think that the mapping between the iterative solution and the recursive solution for the factorial often serves as a mnemonic crutch for learning recursion.

Donn Seeley    UCSD Chemistry Dept. RRCF    ucbvax!sdcsvax!sdchema!donn
               UCSD Linguistics Dept.       ucbvax!sdcsvax!sdamos!donn

PS The opinions expressed in this article are not necessarily those of this site, or the USENET News Network... Comments are welcome.
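The mapping Donn mentions is easy to show side by side. A minimal sketch (the function names are my own, and Python is of course anachronistic for this thread): the recursive base case plays the role of the loop's starting accumulator, and the inductive case plays the role of one loop iteration.

```python
def fact_rec(n):
    # Base case: mirrors the loop's initial accumulator value.
    if n <= 1:
        return 1
    # Inductive case: n! = n * (n-1)!  -- one "iteration" per call.
    return n * fact_rec(n - 1)

def fact_iter(n):
    acc = 1                      # corresponds to the base case above
    for i in range(2, n + 1):
        acc *= i                 # corresponds to the inductive case
    return acc

print(fact_rec(6), fact_iter(6))   # 720 720
```

Seeing the two solutions as line-for-line translations of each other is exactly the "mnemonic crutch" in question.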
pavel (03/31/83)
Wait a minute, Donn. Lew's sentences are NOT an example of the "use" of recursion. When one programs a recursive procedure, one does NOT mentally run the program and try to deal with the stack. The whole point of using recursion is that you don't have to deal with the mechanics of the implementation. The use of recursion in Lew's example is in the rule of the grammar which allows the construction of the sentence, not in the sentence itself.

The nicest thing about recursion is that one can look at and understand the program \statically/, without mentally executing it. Such thought processes aren't possible with iteration unless one is provided with an invariant predicate for the loop. Recursion is the great simplifier, freeing the programmer from having to think about anything but a small number of base cases (usually one), whose implementation is usually trivial, and a single inductive case. This is much easier than trying to come up with the invariant for the iterative implementation.

NOTE: There are certain operations on certain data structures which are most easily and clearly done using iteration; I'm not advocating the exclusive use of recursion at all.

NOTE2: I should also point out that I make every student go through a small example or two of a call to a recursive procedure, keeping track of the various stack frames, etc. I do this for the same reason that people are encouraged to take a course in assembly language: they profit from being able, when necessary, to understand exactly what's going on. However, for the same reasons that most people rarely go back to writing assembler programs, programmers should avoid thinking in terms of the stack implementation of recursion and concentrate on the abstraction.

Comments?

Pavel Curtis
Cornell University
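Pavel's contrast between static base-plus-inductive-case reasoning and loop invariants can be illustrated with a trivial sum. This is my own hedged sketch, not Pavel's code; the names `total_rec` and `total_iter` are invented for the example.

```python
# Recursive version: readable "statically" -- check the base case,
# check the inductive case, and you are done. No mental execution needed.
def total_rec(xs):
    if not xs:                          # base case: sum of [] is 0
        return 0
    return xs[0] + total_rec(xs[1:])    # inductive case: head + sum of tail

# Iterative version: only as clear as its loop invariant, which the
# language gives you no way to state except as a comment.
def total_iter(xs):
    acc = 0
    for i in range(len(xs)):
        # invariant: acc == sum(xs[:i]) at the top of each iteration
        acc += xs[i]
    return acc
```

The point is not that the loop is wrong, but that the recursive version carries its correctness argument in its structure, while the loop's argument lives outside the code.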
donn (04/01/83)
References: sdchema.468 cornell.4198

Pavel's remarks about my article show that I failed to make clear that I was trying to distinguish between "use of recursion" meaning "application of a certain problem-solving strategy" and "use of recursion" meaning "execution of a certain kind of algorithm". My point was just that the difficulty of the second kind of "use" could lead to delays in learning the first kind of "use"; in other words, the effort of mentally simulating a recursive algorithm could lead to trouble applying a recursive strategy to a problem.

This was just an attempt to reflect my experiences in learning recursion. I have vivid memories of being unable to factor a problem recursively, and then trying to hand simulate a recursion based on algorithms I had seen earlier. This was just applying another strategy (a meta-strategy?) which says to try transforming previous problems to see if they work on the current one (sometimes called "hit and miss"!). The idea was that in the context of using an old algorithm I might see a new algorithm that led to an elegant solution of the problem. After much agony and many punched cards I found an algorithm that worked, and I was able to abstract from the algorithm and see the elegance of the recursive solution (and I applied this new view and made the program elegant too). The main difficulty I found was that it was hard to keep everything in mind when hand-simulating a recursive algorithm, and this made it hard to see what changes in the algorithm would make it work.

Factoring a problem for recursion is really not too different from finding an invariant so that an iterative solution strategy can be applied: in the case of recursion it is the induction step which acts as the "invariant". The main difference between using the two strategies is knowing when to take advantage of the implied context stacking of recursion.
The principle may not be hard to apply when you have had practice with it, but the memory overhead in hand-simulation of context stacking is enough to make it hard to learn. Notice that an "implementation" of the stack need not be explicit in the hand-simulation, but you will still need one to run the simulation even if you call it something else...

Beating a dead horse,

Donn Seeley    UCSD Chemistry Dept. RRCF    ucbvax!sdcsvax!sdchema!donn
mark (04/01/83)
Donn makes the point well. I think the key is that, in order to teach students to program using technique X, you have to get the message across of just how X works. This normally means a hand simulation at the blackboard, and the students have to follow it. (Even teaching induction will involve a brief hand simulation of the first few steps of an inductive proof.)

Also, the poor student then has to go out and write and debug a program using X, and of course it won't work the first time. At that level, hand simulation is a valuable tool for debugging, and they can't hand simulate without understanding about the stack. If you try to hide the stack, I fail to see how you can explain to a student how it works, other than to pass it off as "magic". And this will lead to other "magic" solutions that won't work (e.g. forgetting the initialization step, or not reducing the problem in the recursion step).

Another gripe is that I have not yet seen a language that makes recursion as convenient as most languages make iteration. I can put a for loop in the middle of a function, with code on either side; I can even nest for loops. But to use recursion, I have to introduce a function that I didn't really want to, and think of a name for it. While I can imagine a "recursive block" structure, I fail to see how this could be more straightforward than a for loop.
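Mark's gripe about the extra named function can be made concrete. A hedged sketch (the task and the names `reverse_digits` and `rev` are invented for illustration; languages with nested functions at least keep the unwanted name local):

```python
# To recurse over just the middle of a computation, we must introduce and
# name a helper -- the function we "didn't really want" -- whereas a for
# loop could sit inline between the surrounding statements.

def reverse_digits(n):
    prefix = "digits reversed: "        # code "before the loop"
    def rev(m, acc):                    # the extra helper, named and nested
        if m == 0:
            return acc                  # base case: no digits left
        return rev(m // 10, acc * 10 + m % 10)   # peel off the last digit
    return prefix + str(rev(n, 0))      # code "after the loop"

print(reverse_digits(1234))             # digits reversed: 4321
```

The nested definition is about as close to mark's imagined "recursive block" as most languages get: the helper's scope is confined to the enclosing function, but it still needs a name and a separate body.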