cgs@umd5.umd.edu (Chris Sylvain) (12/14/87)
[] A hearty thanks to all who replied. First a recap, then the summary:

In article [..] cgs@umd5.umd.edu (me) writes:
>
>Would someone please explain to me what it is good for/what it models ?
>Besides testing how long a function call takes, since it is nested very
>deeply in some parts of the evaluation...
>I keep getting 'introduced' to it, but never any history of it is provided.

============================================================================
From: Alex Colvin <mac3n@babbage.acc.virginia.edu>

Ackermann's function is of interest to function & computability theorists
because it grows (as a function of its input) faster than any primitive
recursive function. Primitive recursive functions are well-behaved and use
only bounded (downward) recursion.

"Ackermann's function is one of the simplest examples of a recursively
defined (and effectively computable) total function that is not primitive
recursive."
        -- F. Hennie, "Introduction to Computability"

Ackermann's function is used to introduce unbounded (upward) mu-recursion.

============================================================================
From: RAMontante <bobmon@iucs.cs.indiana.edu>
Organization: Indiana University, Bloomington

I believe it was created specifically to prove, by example, a theorem about
the computability of functions. It demonstrates that a function can be
total and recursive, yet not primitive recursive. The proof was published
in 1928.

(I'm not good at this, but I think it means that a function can be defined
for all its inputs and return a value in finite time, yet "grow" too fast
to belong to the class of primitive-recursive functions, which can be
defined in terms of a small base set of functions.)
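[For concreteness: none of the replies actually quote a definition at this
point, so here is a minimal sketch of the two-argument (Ackermann-Peter)
form that most modern texts mean when they say "Ackermann's function" --
an illustration, not any respondent's code:]

```python
def ackermann(m, n):
    """Two-argument (Ackermann-Peter) variant of Ackermann's function.
    Total and computable, but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    # the deeply nested inner call is what makes it a favorite
    # stress test for function-call overhead
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```

[Even ackermann(4, 2) is a number with over 19,000 digits, which is why
benchmarks stick to tiny arguments.]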
============================================================================
From: Moshe Cohen <@UMD2.UMD.EDU:moco@WISDOM.BITNET>

The Ackermann function is an example of a function which is recursive --
meaning it always halts after a finite amount of computation, for all
inputs -- but is NOT primitive recursive, meaning it cannot be coded in a
language in which, for every loop, a bound on the number of iterations
must be known (computable) before entering the loop: languages without
while loops or repeat loops or goto's, only 'for x from i to n' type
loops.

Essentially all functions (algorithms) we encounter in practice are
primitive recursive. Having an asymptotic complexity bound, even a super
super exponential one, implies we can write the program with 'for loops'
running for a number of iterations equal to the upper bound, possibly
doing nothing in the surplus iterations. Of course, in practice we may use
'whiles' to make the program more readable and/or efficient and/or
aesthetic. If we try to write a bounded-loop program for the Ackermann
function, it turns out that to compute the bounds themselves we must use
the Ackermann function.

As far as I remember (I couldn't check), a good reference on the subject
is Brainerd & Landweber's "Theory of Computation".

============================================================================
From: Kevin T. Likes <likes@silver.bacs.indiana.edu>
Organization: Indiana University BACS, Bloomington

The main importance of Ackermann's function in mathematics (as opposed to
CS) is to show a function that grows even faster than exponential
functions. In fact, that is how it was developed originally. Its major
use is of course to test programming languages, and it really doesn't
model anything in particular.
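[Cohen's point about for-loop-only languages can be illustrated with a
sketch (the function names here are mine, chosen for the illustration):
each function below uses only loops whose iteration count is fixed before
the loop is entered, so each is primitive recursive. No program built this
way can compute Ackermann's function, because the loop bounds you would
need grow like the function itself:]

```python
def add(n, k):
    # bounded loop: the iteration count k is known before entering
    total = n
    for _ in range(k):
        total += 1
    return total

def mul(n, k):
    # bounded loop built on top of add
    total = 0
    for _ in range(k):
        total = add(total, n)
    return total

def power(n, k):
    # bounded loop built on top of mul -- still primitive recursive,
    # even though the result grows exponentially
    total = 1
    for _ in range(k):
        total = mul(total, n)
    return total

print(power(2, 10))  # 1024
```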
============================================================================
From: cvl!mimsy!rutgers!cmcl2!phri!dasys1!jsb (Jim Baumbach)
Organization: The Big Electric Cat

Ackermann's function has no practical use, merely mathematical use. It's
an example of something in recursive function theory, which if I remember
correctly (it's been a long time) is a function which is recursive but
can't be rewritten using only bounded loops.

Jim Baumbach                     {uunet}!mstan\
Big Electric Cat Public Unix     {bellcore,cmcl2}!cucard!dasys1!jsb
New York, NY, USA                {sun}!hoptoad/  or uunet!actnyc!fred!jsb

============================================================================
From: ekwok@cadev4.intel.com (Edward C. Kwok)
Organization: John Q. Public & Sons

It is good for generating overflow/underflow exceptions in machines where
the representation of numbers is of finite length. Other than that, I
don't know.

============================================================================
From: twr@otter.HP.COM (Tony Rush)
Organization: hplabs-brc Bristol, UK

I guess the Ackermann function appears because it is an example of a
computable function which is not primitive recursive.

============================================================================
From: kers@otter.HP.COM (Christopher Dollin)
Organization: hplabs-brc Bristol, UK

Ackermann's function is also notable, I seem to remember, for GROWING
faster than any primitive recursive function. ackermann(i,i) gets VERY
big VERY fast as i grows to largish values like 4.

============================================================================
From: BRENT@MAINE.BITNET (Brent C J Britton)
Organization: University of Maine System

Ackermann's function was one of the first functions developed which is
not a primitive recursive function.
These are the initial functions:

  a) The 0-place zero function, given by:   Z() = 0

  b) The i'th k-place projection function, given by:
                                            P^k_i(n_1,...,n_k) = n_i

  c) The successor function, given by:      S(n) = n+1

A function is a primitive recursive function if it is one of the initial
functions above, or if it can be generated from the initial functions by
composition and primitive recursion. (If you want to know what primitive
recursion is, I'd be happy to show you.)

ANYHOW, Ackermann's function is not a primitive recursive function, and I
believe it was the first one ever shown not to be. This makes Ackermann's
function special, because most everything else -- essentially all useful,
computable functions -- is primitive recursive.

HOWEVER, you probably keep getting "introduced" to Ackermann's function
because of the interesting way it behaves when its first parameter is
large. Good way to show CS students how recursion can cause problems.

-------
Brent C.J. Britton <Brent@Maine.bitnet>
Computer and Data Processing Services
University of Maine System, Orono, ME 04469
207/581-3557

============================================================================
From: pixar!fishkin@ucbvax.Berkeley.EDU (Ken Fishkin)
Organization: Pixar -- Marin County, California

It is defined as follows:

    a(0,x)     = ( 0 if x == 0,  2 if x == 1,  x + 2 if x > 1 )
    a(y+1,0)   = 1
    a(y+1,x+1) = a(y, a(y+1,x))

It was defined by Ackermann in 1928. Its claim to fame is that it was the
first function shown to be not primitive recursive. In fact, it grows
faster than any primitive recursive function.

I used to remember all the wonderful things that "primitive recursive"
means. Basically, I think it means that you can tell in a known finite
number of steps whether y is an element of the range of the function or
not.
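[Fishkin's variant transcribes directly into code. Note the base case
a(y+1,0) = 1 -- the posted recollection had 0 there, but that would
collapse every level to zero; with 1, the levels come out as doubling,
powers of two, towers of two, and so on. A sketch:]

```python
def a(y, x):
    # Fishkin's two-argument variant: level y=0 is roughly "add 2",
    # and each higher level iterates the level below it.
    if y == 0:
        return 0 if x == 0 else (2 if x == 1 else x + 2)
    if x == 0:
        return 1   # base case corrected from the posted a(y+1,0) = 0
    return a(y - 1, a(y, x - 1))

print([a(1, x) for x in range(1, 6)])  # level 1 is 2x:  [2, 4, 6, 8, 10]
print([a(2, x) for x in range(1, 6)])  # level 2 is 2^x: [2, 4, 8, 16, 32]
```

[Level 3 is already a tower of exponents: a(3,3) = 2^(2^2) = 16, but
a(3,5) has nearly 20,000 digits.]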
============================================================================
From: Ady Wiernik <ady@Taurus.Tau.Ac.IL>

Apparently the Ackermann function appears in a lot of places. If you look
in the (now seminal) Tarjan book of algorithms, you'll find that it is
somehow related to the time complexity of union-find algorithms.

From this it turns out there is a connection between the function and the
estimates on the complexity of boundaries of regions created by many
curves crossing each other (what really appears in the estimates is
usually the inverse Ackermann function). Because of this, the function
appears in the answers to a few questions in geometric visibility, robot
motion planning and general computational geometry.

For references, look for the large number of tech papers published at NYU
(in the CIMS/Robotics lab) by Micha Sharir, Jack T. Schwartz, Klara Kedem,
Sifroni, myself and a few others.

============================================================================
From: reid@decwrl.dec.com (Brian Reid)
Organization: DEC Western Research

The easiest way to *understand* Ackermann's function (rather than defining
it or characterizing it) is as follows.

0: If you take the number "N" and add 1 to it "K" times, you do an
   operation called "addition" of N plus K.

1: If you take the number "N" and add it to itself "K" times, you do an
   operation called "multiplication" of N times K.

2: If you take the number "N" and multiply it by itself "K" times, you do
   an operation called "exponentiation" of N to the power K.

3: If you take the number "N" and raise it to its own power "K" times,
   you get some operation whose name I can't remember. Let's call it
   "hyperexponentiation" of N to the power K.

4: and so forth.
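[The ladder of operators Reid describes can be sketched directly -- the
function name hyper and the argument order are mine, and level 0 here is
addition, matching his numbering. Each level is defined by iterating the
level below it (assumes k >= 1 for levels above addition):]

```python
def hyper(j, n, k):
    # operator number j: 0 = addition, 1 = multiplication,
    # 2 = exponentiation, 3 = "hyperexponentiation", ...
    if j == 0:
        return n + k
    if k == 1:
        return n  # combining n with itself "once" at any level is just n
    # apply operator j-1 between n and (n combined k-1 times at level j)
    return hyper(j - 1, n, hyper(j, n, k - 1))

print(hyper(1, 3, 4))   # 3 * 4  = 12
print(hyper(2, 2, 10))  # 2 ^ 10 = 1024
print(hyper(3, 2, 3))   # 2^(2^2) = 16
```

[Already at level 3, hyper(3, 2, 5) is a 19,729-digit number -- the
"pretty fierce" growth Reid mentions below.]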
If you define "operator number" to be 0 for addition, 1 for
multiplication, 2 for exponentiation, 3 for ?hyperexponentiation, and so
forth, then you can imagine that the higher-order operators are pretty
fierce. Ackermann's function A(N, J) means to perform operation number J
on the number N, N times.

Other respondents have explained admirably why Ackermann's function is
interesting. It is not at all useful.

============================================================================
From: agw@eastend.columbia.edu (Art Werschulz)
Organization: Columbia University Computer Science Dept.

Pardon me, but this is a little sketchy. I have part of a photocopied
version of Aho, Hopcroft and Ullman's book "The Design and Analysis of
Computer Algorithms" easily accessible, and I don't really have time to
look this all up properly.

Consider the problem of doing n operations of the form

(*) UNION(S,T,V): replace the sets S and T by a new set V, which is the
    union of the old sets S and T
(*) FIND(x): determine the set to which some element x belongs.

(Motivation? Think about FORTRAN EQUIVALENCE statements.) The catch is
that the UNION and FIND operations could conceivably come in any order.

In the Bibliographic Notes of Chapter 4, Aho et al. state that a fast
algorithm using path compression on the trees used to represent the sets
was "apparently" first used by McIlroy and Morris. In a 1974 [?] article
in J. ACM, Robert Endre Tarjan showed that this algorithm has running time
Theta(n*G(n)), where G(n) is an "inverse Ackermann function". If memory
serves me correctly (and it probably doesn't),

    G(n) = inf { m : A(m, m DIV 4) >= n }

or some such. Now, since A(m, m DIV 4) grows like crazy with m (as pointed
out, faster than any primitive recursive function), it's clear that its
inverse function G grows *extremely* slowly. In fact, one could almost say
that G(n) is a constant for all reasonable values of n. Hence, Tarjan
showed that the running time of this algorithm is almost linear.
Furthermore, this is a "Theta" result, and not a "big-Oh" result, meaning
that he derived both upper and lower bounds of the form constant*n*G(n)
for this algorithm. I don't recall whether or not anybody's beaten
Tarjan's result.

============================================================================
From: aglew@ccvaxa.UUCP

I hate to say something like this, especially after somebody as eminent
as Reid has posted, but I vaguely remember that a function related to
Ackermann's was used as a complexity bound in a data structures course I
sat in on. Somehow munged to grow super slowly instead of super fast.

Thanks to Brian for the intuitive explanation of A(n,j).

============================================================================
--
--==---==---==--
.. Came whiffling through the tulgey wood, ..
ARPA: cgs@umd5.UMD.EDU    BITNET: cgs%umd5@umd2
UUCP: ..!uunet!umd5.umd.edu!cgs
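[The union-find structure Werschulz describes can be sketched as below --
a disjoint-set forest with path compression (path-halving form) plus union
by rank, the combination Tarjan's inverse-Ackermann bound applies to. The
class name and method names are mine:]

```python
class DisjointSet:
    """Union-find forest with path compression and union by rank.
    Tarjan's bound: n operations run in Theta(n * G(n)), where G is an
    inverse Ackermann function -- effectively constant in practice."""

    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own set
        self.rank = [0] * n

    def find(self, x):
        # path compression (halving): repoint nodes toward the root
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, s, t):
        rs, rt = self.find(s), self.find(t)
        if rs == rt:
            return
        if self.rank[rs] < self.rank[rt]:
            rs, rt = rt, rs
        self.parent[rt] = rs             # attach shorter tree under taller
        if self.rank[rs] == self.rank[rt]:
            self.rank[rs] += 1

ds = DisjointSet(6)
ds.union(0, 1); ds.union(2, 3); ds.union(1, 3)
print(ds.find(0) == ds.find(2))  # True  -- merged into one set
print(ds.find(0) == ds.find(4))  # False -- 4 was never unioned in
```

[Either path compression or union by rank alone already gives almost-linear
behavior; together they achieve the Theta(n * G(n)) bound quoted above.]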