[net.math] Beyond Exponentiation

eklhad@ihnet.UUCP (K. A. Dahlke) (01/28/85)

<  where no function has gone before  >

What is the next function in the following sequence?
Allow me to use '^' for exponentiation.
	y = 2
	y = 2*x
	y = 2^x
	???
Each operation originally evolved as an iteration of the previous one.
Consider a new operator '~', such that  a~b is equivalent to:
	a^(a^(a^( ... b times ... ))).
Note, the parentheses are important, since the previous operator
(exponentiation) is not commutative.
The alternative:   a^a^a^a^... b times,
could simply be written   a ^ (a^(b - 1)).
Now consider the function  y = 2~x,  assuming y(0) = 1.
The function increases amazingly, taking on the values:
	1	2	4	16	65536	2^65536	huge	wow
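For the integer points, a short Python sketch (exact integer
arithmetic) reproduces these values:

	# 2~n with y(0) = 1: apply y -> 2**y, n times.
	def tetrate2(n):
	    y = 1
	    for _ in range(n):
	        y = 2 ** y
	    return y

	print([tetrate2(n) for n in range(5)])   # [1, 2, 4, 16, 65536]
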
I am having trouble with this function though.
What is  y(2.5)?  I am assuming, of course, continuity and infinite
differentiability and other nice things.
Perhaps I could come up with a differential equation which produces
the appropriate values for the integers.
If this function is f(x), then  f(x) = 2^f(x-1).
Differentiating,  f'(x) = log(2) * f(x) * f'(x-1).
If we substitute for f'(x-1) using the same relation (with x-1 in
place of x), a pattern begins to develop:
	f'(x) = log(2)^n * f(x) * f(x-1) * ... * f(x-n+1) * f'(x-n).
I find all this quite interesting, but I haven't been able
to go much further with it.
Any help would be appreciated.
What is  2~2.5?
-- 

Karl Dahlke    ihnp4!ihnet!eklhad

abdali@tekchips.UUCP (Kamal Abdali) (01/30/85)

-----------------------------
> what is the next function in the following sequence?
> Allow me to use '^' for exponentiation.
> 	y = 2
> 	y = 2*x
> 	y = 2^x
> 	???
> Each operation originally evolved as an iteration of the previous one.
> 
> ihnp4!ihnet!eklhad

Addition, multiplication, exponentiation, ... form a sequence of
operations each of which is defined in terms of repeated applications
of the previous operation.  Additional operations of the sequence
don't seem to have any standard names because they are not of much use.
In fact, I don't know of any use of these except in computability and
complexity theories.  Somewhere I saw the names tetration, pentation,
hexation, etc. for the successive operations after exponentiation.  The
whole family of operations is represented by the (generalized, or
original) Ackermann's function described, for example, in ``Elementary
Computability, Formal Languages, and Automata'' by R. McNaughton,
Prentice-Hall, 1982.  I like to call these operations n-ation, with
1-ation, 2-ation, 3-ation being the same as addition, multiplication,
exponentiation, respectively.  0-ation is a special form of successor
operation.  Formally, these operations can be defined so that we have
 
      a OP_0 b      =  b+2 if a=b, b+1 otherwise

      a OP_(n+1) b  =  a OP_n (a OP_n ( ... (a OP_n a) ... ))
                       (with b occurrences of a on the right-hand side)

The notation G(n) is used in ``Design and Analysis of Computer
Algorithms'' by Aho, Hopcroft, and Ullman for what we may call ``4-log
base 2 of n'', i.e. G(n) = the least k such that 2 OP_4 k >= n.
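
As an aside (not from the posting above), the definition transcribes
directly into a short recursive Python sketch; it is usable only for
tiny arguments, since 3-ation and 4-ation of anything but the smallest
numbers explode immediately.  The names op and G are mine:

	# a OP_n b, per the definition above.
	def op(a, b, n):
	    if n == 0:                  # 0-ation: a variant of successor
	        return b + 2 if a == b else b + 1
	    if b == 1:                  # a single occurrence of a
	        return a
	    return op(a, op(a, b - 1, n), n - 1)

	# G(n): the least k (taken >= 1 here) with 2 OP_4 k >= n.
	def G(n):
	    k = 1
	    while op(2, k, 4) < n:
	        k += 1
	    return k

	print(op(2, 4, 3), op(2, 4, 4), G(17))   # 16 65536 4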

tet@uvaee.UUCP (Thomas E. Tkacik) (01/30/85)

> <  where no function has gone before  >
> 
> what is the next function in the following sequence?
> Allow me to use '^' for exponentiation.
> 	y = 2
> 	y = 2*x
> 	y = 2^x
> 	???

K. A. Dahlke poses the problem of the function 

        f(x) = 2~x = 2^(2^(2^( ... x times ... ))).

What is f(2.5)?  How is this function defined for nonintegers?

The function f(x) has the recurrence relation

        f(x) = 2^f(x-1)

thus we need only define g(x) = f(x) over the interval (0,1)
and give f(0) = 1.
This, along with the recurrence relation, will then define f(x)
for all real numbers.
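
In code, this extension mechanism looks as follows (a Python sketch of
my own; extend_f is a made-up name).  Given any g on (0,1), together
with f(0) = 1, the recurrence pins f down for all x >= 0:

	# Extend f from g on (0,1) and f(0) = 1 via f(x) = 2^f(x-1).
	def extend_f(g):
	    def f(x):
	        if x == 0:
	            return 1.0
	        if 0 < x < 1:
	            return g(x)
	        return 2.0 ** f(x - 1)
	    return f

	f = extend_f(lambda x: 1.0)   # the piecewise-constant choice below
	print(f(2.5))                 # 4.0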

Ignoring for the moment that we want continuity for the function and its
derivatives, we can assume any function we wish for g(x).

Let us pick  g(x) = 0.  Then f agrees with 2~x at the integers, but
takes meaningless values elsewhere: the recurrence gives
f(0.5) = 0,  f(1.5) = 2^0 = 1,  and  f(2.5) = 2^1 = 2.

The next step is to assume continuity at integer values from the right only.
Take  g(x) = f(0) = 1  over the interval  (0,1).
Now  f(x) = f(int(x))  for all x >= 0, and  f(2.5) = f(2) = 4.

Now impose the restriction that f(x) be continuous over all x.
Let
        g(x) = c0 + c1*x   
We can find c0 and c1 by forcing   g(0) = f(0)   and  g(1) = f(1).
Solving, we get  c0 = 1  and  c1 = 1.
        g(x) = 1 + x
With this g(x),  f(x) will be continuous everywhere, and we
find f(2.5) from the recurrence relation.
        f(0.5) = 1.5;  f(1.5) = 2^f(0.5) = 2.83;
and f(2.5) = 2^f(1.5) = 7.10.

Next impose the restriction that f'(x) be continuous everywhere.
Now
        g(x) = c0 + c1*x + c2*x^2
We now need to define the first derivative of f(x).
        f'(x) = log(2) * f(x) * f'(x-1).
However, this is a function of f(x) (which we do not yet know), so using
        f(x) = 2^f(x-1)
we get
        f'(x) = log(2) * 2^f(x-1) * f'(x-1)
g(x) is found by setting
        g(0) = f(0);  g(1) = f(1);  and  g'(1) = log(2) * 2^g(0) * g'(0).
Solving, we get
        c0 = 1;   c1 = 1/(log(2)+0.5) = 0.838;   c2 = (log(2)-0.5)/(log(2)+0.5) = 0.162.
Again we find f(2.5) from f(0.5),
        f(0.5) = 1.46;  f(1.5) = 2.75;   f(2.5) = 6.73.
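
A few lines of Python (again mine, not part of the derivation)
confirm these numbers:

	import math

	# Quadratic g matching g(0) = f(0), g(1) = f(1), and continuity
	# of f' at x = 1, with the coefficients derived above.
	ln2 = math.log(2.0)
	c1 = 1.0 / (ln2 + 0.5)
	c2 = (ln2 - 0.5) / (ln2 + 0.5)
	g = lambda x: 1.0 + c1 * x + c2 * x * x

	f05 = g(0.5)        # ~1.46
	f15 = 2.0 ** f05    # ~2.75
	f25 = 2.0 ** f15    # ~6.73
	print(f05, f15, f25)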

This is as far as I got with this approach.  To find the exact solution
we set
        g(x) = sum (i = 0 to infinity) of  c_i * x^i
    
and solve for the c_i by setting  g(0) = f(0),  g(1) = f(1),  and
forcing every derivative of g at 1 to equal the corresponding
derivative of f at 1.
This might be difficult, as the higher-order derivatives
of f(x) get progressively harder to calculate.

When this is done, the power series of g(x) will then define f(x) for
all x, not just over the interval (0,1).

----

Tom Tkacik    mcnc!ncsu!uvacs!uvaee!tet
Jeff Labuz                ...!uvaee!jl

ech@spuxll.UUCP (Ned Horvath) (01/31/85)

I suggest you look at Ackermann's classic function, at least over the 
naturals.  The first few "rows" of that function correspond roughly to
"take the successor", "add", "multiply", "exponentiate", etc.  The recursion
is (for n,m >= 0)

	A(0, n) = n+1
	A(n+1, 0) = A (n, 1)
	A(n+1, m+1) = A (n, A (n+1,m))

How to generalize this to the reals is not obvious...
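
Over the naturals, at least, the recursion transcribes directly into a
small Python sketch (mine; it blows up fast, so keep both arguments
tiny):

	# Ackermann's recursion as given above; first argument is the "row".
	def A(n, m):
	    if n == 0:
	        return m + 1
	    if m == 0:
	        return A(n - 1, 1)
	    return A(n - 1, A(n, m - 1))

	print(A(2, 3), A(3, 3))   # 9 61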

=Ned=

lambert@boring.UUCP (01/31/85)

> [...]
> Now consider the function  y = 2~x,  assuming y(0) = 1.
> The function increases amazingly, taking on the values:
> 	1	2	4	16	65536	2^65536	huge	wow
> [...]
> If this function is f(x), then  f(x) = 2^f(x-1).
> [...]
> what is  2~2.5?

It is funny that this should come up now, since there is an
obvious relationship with the recent discussion on the square root
of exp.  For if g is a functional square root of exponentiation
with base two, i.e. g(g(x)) = 2^x, then we can take
f(x + 1/2) = g(f(x)).
-- 

     Lambert Meertens
     ...!{seismo,philabs,decvax}!lambert@mcvax.UUCP
     CWI (Centre for Mathematics and Computer Science), Amsterdam

paulv@boring.UUCP (01/31/85)

In article <186@ihnet.UUCP> eklhad@ihnet.UUCP writes:
><  where no function has gone before  >
>
>what is the next function in the following sequence?
>Allow me to use '^' for exponentiation.
>	y = 2
>	y = 2*x
>	y = 2^x
>	???
>Each operation originally evolved as an iteration of the previous one.
>
>Karl Dahlke    ihnp4!ihnet!eklhad

It is perhaps interesting that such matters were the main subject
of Archimedes' "The Sand Reckoner".  In that treatise (third century BC),
Archimedes addresses the question of very big numbers, basically
by continuing the process above.  He convinces King Gelon that
the number of grains of sand needed to fill up the universe tightly
is not infinite but can easily be counted; a quick way to obtain
an upper bound on the number needed is by using the above process.
	Along the way he enunciates the so-called "Axiom of Archimedes"
(definition 4, Book 5 of Euclid's "Elements"), which serves
to derive the whole theory of proportion and is at the very
foundations of the calculus.  The row between Newton and Leibniz
about whether or not to accept this axiom, settled in favour of
Newton and Archimedes when Weierstrass's epsilon-delta method for
defining limits grounded the calculus while accepting the Axiom,
has its present-day revival in the emergence of Nonstandard
Analysis, which rejects the Axiom.
	Archimedes also mentions in passing the theory of
Aristarchos of Samos (third century BC), who claimed that the sun
stood still like the fixed stars and that the earth revolved around it.

tet@uvaee.UUCP (Thomas E. Tkacik) (02/01/85)

> I suggest you look at Ackermann's classic function,

Ackermann's function is an example of a general recursive function.
I have been told that any general recursive function can be
rewritten in an iterative form, and will then execute faster (on a computer).

Does anyone know how Ackermann's function can be defined without
using recursion?

---
Tom Tkacik     decvax!mcnc!ncsu!uvacs!uvaee!tet

grunwald@uiucdcsb.UUCP (02/05/85)

You can define Ackermann's function iteratively, with the array S
serving as an explicit stack:

S[1] := m;
S[2] := n;
i := 2;					/* i indexes the top of the stack */
while (i != 1) do {
	if (S[i-1] == 0) {		/* A(0,n) = n+1 */
		S[i-1] := S[i] + 1;
		i := i - 1;
	}
	else if (S[i] == 0) {		/* A(m,0) = A(m-1,1) */
		S[i-1] := S[i-1] - 1;
		S[i] := 1;
	}
	else {				/* A(m,n) = A(m-1, A(m,n-1)) */
		S[i+1] := S[i] - 1;
		S[i] := S[i-1];
		S[i-1] := S[i-1] - 1;
		i := i + 1;
	}
}
z := S[1];

As in "Mathematical Theory of Computation" by Manna, Page 235.
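
The same idea in directly runnable form (my transcription into
Python, with a list serving as the explicit stack):

	# Iterative Ackermann: pending arguments live on an explicit stack.
	def ack(m, n):
	    stack = [m, n]
	    while len(stack) > 1:
	        n = stack.pop()
	        m = stack.pop()
	        if m == 0:
	            stack.append(n + 1)              # A(0,n) = n+1
	        elif n == 0:
	            stack.extend([m - 1, 1])         # A(m,0) = A(m-1,1)
	        else:
	            stack.extend([m - 1, m, n - 1])  # A(m,n) = A(m-1,A(m,n-1))
	    return stack[0]

	print(ack(2, 3), ack(3, 3))   # 9 61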

jfh@browngr.UUCP (John "Spike" Hughes) (02/08/85)

    The super-exponential is an interesting object.  Ed Nelson (of Princeton)
has been giving talks about the fundamentals of mathematics (logic and
such), questioning whether, without the induction axioms, certain things
are 'computable' or even 'expressible'.  Super-exponentiation of
large numbers seems not to be 'expressible' in his terms.  If
you can catch him giving a lecture on this, you should go; when he
gave it at Haverford last year it was called "Do the integers exist?"
It's thought-provoking.

   By the way, as for defining super-exponentiation, you might want to
look at how the Gamma function (an extension of the factorial) was
built.  In "Calculus", by Michael Spivak, there is mention of
a condition which uniquely defines the Gamma function, indicating
what a 'nice' interpolation it is: gamma(x) = (x-1)! for x a
positive integer, gamma is smooth on its domain, and
log(gamma(x)) is convex, which in some sense captures the extreme
smoothness of gamma.  You might want to try a similar interpolation
mechanism...
  -jfh

steven@boring.UUCP (02/12/85)

In article <616@spuxll.UUCP> ech@spuxll.UUCP (Ned Horvath) writes:
> I suggest you look at Ackermann's classic function, at least over the
> naturals.  The first few "rows" of that function correspond roughly to "take
> the successor", "add", "multiply", "exponentiate", etc.  The recursion is
> (for n,m >= 0)
> 
> 	A(0, n) = n+1
> 	A(n+1, 0) = A (n, 1)
> 	A(n+1, m+1) = A (n, A (n+1,m))

I have often seen this referred to as Ackermann's function, even in books,
but I believe that it is actually Péter's function, named after the
late Rózsa Péter of Hungary, and that Ackermann's function is the
following:

	ack(x,y,n) {y>=0, n>=0}
	  n=0 -> y+1
	  y=0 and n=1 -> x
	  y=0 and n=2 -> 0
	  y=0 and n>2 -> 1
	  n<>0 and y<>0 -> ack(x, ack(x, y-1, n), n-1)

Can anybody confirm this for me?
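
For experimenting with it, a direct transcription (a Python sketch of
mine; the name ack3 is made up) does behave as the n-ation hierarchy
suggests: n = 1, 2, 3 give x+y, x*y, and x^y:

	# Three-argument Ackermann as defined above.
	def ack3(x, y, n):
	    if n == 0:
	        return y + 1
	    if y == 0:
	        return x if n == 1 else (0 if n == 2 else 1)
	    return ack3(x, ack3(x, y - 1, n), n - 1)

	print(ack3(2, 3, 1), ack3(2, 3, 2), ack3(2, 3, 3))   # 5 6 8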

Steven Pemberton, CWI, Amsterdam; steven@mcvax.

debray@sbcs.UUCP (Saumya Debray) (02/13/85)

> 
> Ackermann's function is an example of a general recursive function.
> I have been told that any general recursive function can be 
> rewritten in an iterative form, and will then execute faster (on a computer).
> 
> Does anyone know how Ackermann's function can be defined without
> using recursion?
> 
> ---
> Tom Tkacik     decvax!mcnc!ncsu!uvacs!uvaee!tet

Often, it's possible to convert linear recursion to iteration using
associativity of operators to transform the computation tree.  With
Ackermann's function, this doesn't work, since we don't have linear
recursion.  The examples I've seen of iterative forms of Ackermann's
function all amount to maintaining an explicit stack, i.e. effectively
writing an interpreter for the function.  It's obvious that any
computable function can be rewritten without recursion if we maintain an
explicit stack (it amounts to the fetch-decode-execute cycle of the nearest
universal Turing machine).
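
To illustrate the first point (my example): a linearly recursive
definition over an associative operator can be run as a plain loop,
with no stack at all:

	# Linear recursion over the associative operator * ...
	def fact_rec(n):
	    return 1 if n == 0 else n * fact_rec(n - 1)

	# ... becomes iteration: associativity lets us accumulate the
	# product left-to-right instead of unwinding a call chain.
	def fact_iter(n):
	    acc = 1
	    for k in range(1, n + 1):
	        acc *= k
	    return acc

	print(fact_rec(6), fact_iter(6))   # 720 720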
-- 
Saumya Debray
SUNY at Stony Brook

	uucp: {allegra, hocsd, philabs, ogcvax} !sbcs!debray
	CSNet: debray@sbcs