cgs@umd5.UUCP (12/03/87)
Would someone please explain to me what it is good for/what it models?
Besides testing how long a function call takes, since it is nested very
deeply in some parts of the evaluation...
I keep getting 'introduced' to it, but never any history of it is provided.

adthanksvance
--
--==---==---==--
.. O frabjous day! Callooh! Callay! ..
ARPA: cgs@umd5.UMD.EDU  BITNET: cgs%umd5@umd2
UUCP: ..!uunet!umd5.umd.edu!cgs
twr@otter.UUCP (12/04/87)
I guess the Ackermann function appears because it is an example of a computable function which is not primitive recursive.
kers@otter.UUCP (12/04/87)
Ackermann's function is also notable, I seem to remember, for GROWING faster than any primitive recursive function. Ackermann(i,i) gets VERY big VERY fast as i grows to largish values like 4 or 5, etc ...
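The thread never pins down a particular definition, so for concreteness here is a minimal sketch of the usual two-argument (Ackermann-Peter) formulation; the function name `ack` is my own choice:

```python
def ack(m, n):
    """Ackermann-Peter function: total and computable, but it grows
    faster than any primitive recursive function as m increases."""
    if m == 0:
        return n + 1                      # base case: successor
    if n == 0:
        return ack(m - 1, 1)              # recurse on the first argument
    return ack(m - 1, ack(m, n - 1))      # doubly nested recursion

# Even tiny arguments explode: ack(3, 3) is already 61, and
# ack(4, 2) = 2**65536 - 3 has 19,729 decimal digits.
```

Note that the explosion in the number of recursive calls (not just in the result) is exactly why, as mentioned earlier in the thread, it gets used to stress-test function-call overhead.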
ekwok@cadev4.UUCP (12/05/87)
In article <2093@umd5.umd.edu> cgs@umd5.umd.edu (Chris Sylvain) writes:
>
> Would someone please explain to me what it is good for/what it models?
> Besides testing how long a function call takes, since it is nested very
> deeply in some parts of the evaluation...
> I keep getting 'introduced' to it, but never any history of it is provided.
>
It is good for generating overflow/underflow exceptions on machines where the representation of numbers is of finite length. Other than that, I don't know.
brent@maine.bitnet.UUCP (12/05/87)
In article <2093@umd5.umd.edu>, cgs@umd5.umd.edu (Chris Sylvain) writes:
>
> Would someone please explain to me what it (Ackermann's function)
> is good for/what it models?
> ...
> I keep getting 'introduced' to it, but never any history of it is provided.

Ackermann's function was one of the first functions developed which is not a Primitive Recursive function. These are the initial functions:

a) The 0-place zero function, given by: Z() = 0
b) The i'th k-place projection function, given by: P^k_i(n_1, ..., n_k) = n_i
c) The successor function, given by: S(n) = n+1

A function is a Primitive Recursive function if it is one of the initial functions above, or if it can be generated from the initial functions by composition and Primitive Recursion. (If you want to know what Primitive Recursion is, I'd be happy to show you.)

ANYHOW, Ackermann's Function is not a Primitive Recursive function, and I believe it was the first one ever shown not to be. This makes Ackermann's function special, because most everything else -- essentially all useful, computable functions -- is Primitive Recursive.

HOWEVER, you probably keep getting "introduced" to Ackermann's function because of the interesting way it behaves when its first parameter is large. It's a good way to show CS students how recursion can cause problems.

-------
.. Sine here...
.. Brent C.J. Britton <Brent@Maine.bitnet>
.. Computer and Data Processing Services
.. University of Maine System, Orono, ME 04469
.. 207/581-3557
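Brent's three initial functions and the primitive-recursion scheme can be sketched directly in code; the names `Z`, `S`, `P`, and `prim_rec` below are my own, and the step function convention f(n+1, xs) = h(n, f(n, xs), xs) is one standard formulation. Addition and multiplication then fall out as primitive recursive:

```python
def Z(*args):
    """The zero function: ignores its arguments and returns 0."""
    return 0

def S(n):
    """The successor function."""
    return n + 1

def P(k, i):
    """The i'th k-place projection: returns its i'th argument."""
    return lambda *ns: ns[i - 1]

def prim_rec(g, h):
    """Primitive recursion: build f from base g and step h, where
    f(0, xs) = g(xs) and f(n+1, xs) = h(n, f(n, xs), xs)."""
    def f(n, *xs):
        acc = g(*xs)
        for i in range(n):     # iterative, to avoid deep Python recursion
            acc = h(i, acc, *xs)
        return acc
    return f

# Addition: add(0, x) = x;  add(n+1, x) = S(add(n, x))
add = prim_rec(P(1, 1), lambda i, acc, x: S(acc))

# Multiplication: mul(0, x) = 0;  mul(n+1, x) = add(x, mul(n, x))
mul = prim_rec(Z, lambda i, acc, x: add(x, acc))
```

Ackermann's function cannot be built this way: intuitively, each fixed first argument gives a primitive recursive function, but the diagonal outgrows them all.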
reid@decwrl.dec.com (Brian Reid) (12/06/87)
The easiest way to *understand* Ackermann's function (rather than defining it or characterizing it) is as follows.

0: If you take the number "N" and add 1 to it "K" times, you do an operation called "addition" of N plus K.
1: If you take the number "N" and add it to itself "K" times, you do an operation called "multiplication" of N times K.
2: If you take the number "N" and multiply it by itself "K" times, you do an operation called "exponentiation" of N to the power K.
3: If you take the number "N" and raise it to its own power "K" times, you get some operation whose name I can't remember. Let's call it "hyperexponentiation" of N to the power K.
4: and so forth.

If you define "operator number" to be 0 for addition, 1 for multiplication, 2 for exponentiation, 3 for ?hyperexponentiation, and so forth, then you can imagine that the higher-order operators are pretty fierce. Ackermann's function A(N, J) means to perform operation number J on the number N, N times.

Other respondents have explained admirably why Ackermann's function is interesting. It is not at all useful.
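Reid's operator hierarchy can be written down almost verbatim. A minimal sketch, where the name `hyper` and the `k == 1` base case for the higher operators are my own choices (variants differ on edge cases like k = 0):

```python
def hyper(j, n, k):
    """Apply operator number j to n, k times, following the numbering
    above: j=0 addition, j=1 multiplication, j=2 exponentiation,
    j=3 the "hyperexponentiation" (now usually called tetration)."""
    if j == 0:
        return n + k      # level 0 is ordinary addition
    if k == 1:
        return n          # applying a higher operator once just yields n
    # each level is defined by iterating the level below it
    return hyper(j - 1, n, hyper(j, n, k - 1))

# hyper(1, 5, 3) == 15 (5*3), hyper(2, 2, 10) == 1024 (2**10),
# hyper(3, 2, 3) == 16 (2**(2**2))
```

Under this reading, A(N, J) = hyper(J, N, N), which makes it plain why fixing even a modest J and N produces astronomical values.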
agw@eastend.columbia.edu (Art Werschulz) (12/08/87)
Hi. Pardon me, but this is a little sketchy. I have part of a photocopied version of Aho, Hopcroft and Ullman's book "The Design and Analysis of Computer Algorithms" easily accessible, and I don't really have time to look this all up properly.

Consider the problem of doing n operations of the form:

(*) UNION(S,T,V): replace the sets S and T by a new set V, which is the union of the old sets S and T.
(*) FIND(x): determine the set to which some element x belongs.

(Motivation? Think about FORTRAN EQUIVALENCE statements.) The catch is that UNION and FIND operations could conceivably be in any order.

In the Bibliographic Notes of Chapter 4, Aho et al. state that a fast algorithm using path compression on trees used to represent the sets was "apparently" first used by McIlroy and Morris. In a 1974 [?] article in J. ACM, Robert Endre Tarjan showed that this algorithm had running time Theta(n*G(n)), where G(n) is an "inverse Ackermann function". If memory serves me correctly (and it probably doesn't),

    G(n) = inf { m : A(m, m DIV 4) >= n }

or some such. Now, since A(m, m DIV 4) grows like crazy with m (as pointed out, faster than any primitive recursive function), it's clear that its inverse function G grows *extremely* slowly. In fact, one could almost say that G(n) is a constant for all reasonable values of n. Hence, Tarjan showed that the running time of this algorithm is almost linear. Furthermore, this is a "Theta" result, and not a "big-Oh" result, meaning that he derived both upper and lower bounds of the form constant*n*G(n) for this algorithm.

I don't recall whether or not anybody's beaten Tarjan's result.

Art Werschulz
ARPAnet: agw@columbia.edu  USEnet: ...seismo!columbia!agw
BITnet: agw@columbia.edu   CCNET: agw@garfield
ATTnet: Columbia University (212) 280-3610, 280-2736
        Fordham University (212) 841-5323, 841-5396
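The structure Art describes can be sketched compactly. A minimal disjoint-set forest with path compression (here the one-pass "path halving" variant) and union by rank; class and method names are illustrative, not from AHU or Tarjan:

```python
class UnionFind:
    """Disjoint-set forest supporting the UNION/FIND operations above.
    With path compression plus union by rank, a sequence of n mixed
    operations runs in O(n * G(n)) time, G an inverse Ackermann function,
    so in practice it is effectively linear."""

    def __init__(self, size):
        self.parent = list(range(size))  # each element starts as its own set
        self.rank = [0] * size           # upper bound on tree height

    def find(self, x):
        # Path halving: point every other node on the path at its
        # grandparent, flattening the tree as we walk to the root.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:  # union by rank: hang the
            ra, rb = rb, ra                # shorter tree under the taller
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
```

Either heuristic alone already helps; Tarjan's tight Theta(n*G(n)) bound is for the combination.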
aglew@ccvaxa.UUCP (12/09/87)
I hate to say something like this, especially after somebody as eminent as Reid has posted, but I vaguely remember that a function related to Ackermann's was used as a complexity bound in a data structures course I sat in on. Somehow munged to grow super slowly instead of super fast. Thanks to Brian for the intuitive explanation of A(n,j).