msb@lsuc.UUCP (Mark Brader) (03/24/85)
panos@utcsri.UUCP (Panos Economopoulos) writes:
> 2) what is the derivative of factorial x?
>
> Factorial x, or x! for short, is in general defined as
>
>    x! = G(x+1) = integral from 0 to inf (t to the power of x
>                  times e to the power of -t) dt
>
> where G(x) is the Gamma function.

Wrong.  x! is a function of whole numbers (nonnegative integers) only.
It is therefore discontinuous and has no derivative.  However, the
Gamma function is the generalization of factorial to the real numbers,
so the question of its derivative is of some interest.  Let's continue,
pretending that the above was correct...

> The derivative of x! is, therefore, the derivative of the integral
> with respect to x.  Because the function g(t,x) (t to the power ... -t)
> which is to be integrated is continuous w.r.t. t and x, and
> because its derivative dg(t,x)/dx is also continuous, we can
> interchange the integral and the derivative, i.e. integrate
> the derivative of g(t,x) w.r.t. x (Leibniz's rule??)
> Doing this we get:
>
>    dx!/dx = integral from 0 to inf ( x * t to the power of (x-1) *
>             e to the power -t) dt

No we don't.  "x * t to the power (x-1)" is the derivative of
"t to the power x" with respect to t, not x.  The derivative with
respect to x is "ln t * t to the power x".  Therefore

   dx!/dx = integral from 0 to inf (ln t * t to the power x *
            e to the power -t) dt

This is beyond me.

> That is, the derivative of x! is x!

This conclusion from the incorrect expression above could have been
rejected by a plausibility check.  Remember that 0! = 1! = 1.  Since
we're talking about a continuous function, its derivative must be zero
somewhere on the interval (0, 1) (Rolle's theorem); if the function were
its own derivative, the function itself would be zero at that point.
But between x = 0, where the function equals 1, and the first point a
where it equals 0, the function is decreasing, so its derivative must be
negative somewhere, yet the function itself is still positive there,
since a is its first zero.  Therefore the function can't be its own
derivative.
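[Editor's note, not part of the original posting: the corrected integrand can
be sanity-checked numerically.  The sketch below, with invented helper names,
compares a crude truncated quadrature of the integral against a
finite-difference derivative of Python's math.gamma; at x = 1 both should
approximate G'(2) = 1 - g, where g is Euler's constant.]

```python
import math

def dgamma_fd(x, h=1e-6):
    """Central-difference approximation to dx!/dx = d/dx G(x+1)."""
    return (math.gamma(x + 1 + h) - math.gamma(x + 1 - h)) / (2 * h)

def dgamma_integral(x, n=200_000, upper=60.0):
    """Crude Riemann-sum quadrature of
       integral from 0 to inf of ln t * t**x * e**-t dt,
    truncated at t = upper (the tail beyond is negligible)."""
    dt = upper / n
    total = 0.0
    for i in range(1, n + 1):
        t = i * dt
        total += math.log(t) * t**x * math.exp(-t) * dt
    return total

# Both approximate G'(2) = 1 - g, roughly 0.4228.
```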
> I wasn't expecting this result, since the one function you can easily
> get that equals its derivative is the exponential.  A question one
> could ask is can you really find ALL functions, defined in any way,
> that equal their derivatives and which are they?

I think the exponential is the only one, but I don't know.

Mark Brader
panos@utcsri.UUCP (Panos Economopoulos) (03/25/85)
Mark Brader correctly pointed out an elementary error in my calculation
of the derivative of the Gamma function.  It seems that the resulting
integral is not as easy to evaluate as it initially (erroneously)
appeared.  However:

1. Of course, x! is generally defined as x(x-1)(x-2)...2*1 for
   non-negative integers x (with 0! = 1).  In that case it is a function
   of a discrete variable, and the only thing resembling a derivative is
   the difference function

      D(x) = [ f(x+1) - f(x) ] / 1

   which in our case is (x+1)! - x! = x * x!

2. If, however, we consider the values x! to be just the values of the
   Gamma function at the integral values of the variable x, then it is
   of interest to derive the derivative (which doesn't seem trivial).
   In this case, the values of the derivative G' at the integral values
   of x will give us the slope of the Gamma function at these points.
   This will be smaller than the corresponding D(x) found above, because
   D(x) represents the slope of the linear chord joining the points
   (x, x!) and (x+1, (x+1)!) on the plot of the Gamma function, and the
   Gamma function is convex.

3. Looking up Schaum's Mathematical Handbook of Formulas and Tables, I
   found that the derivative of Gamma is given by the following formula:

      G'(x) / G(x) = -g + ( 1/1 - 1/x ) + ( 1/2 - 1/(x+1) ) + ...
                        + ( 1/n - 1/(x+n-1) ) + ...

   where g is Euler's constant.  For x = 1, G'(1) = -g.

I think I'll stick with x * x!  :-)
-- 
Panos Economopoulos
UUCP:  {decvax,linus,ihnp4,uw-beaver,allegra,utzoo}!utcsri!panos
CSNET: panos@toronto
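[Editor's note, not part of the original posting: points 2 and 3 can be
checked numerically.  In this sketch (function names are invented), a partial
sum of the Schaum series recovers G'(1) = -g, and the slope G'(x+1) does come
out below the difference D(x) = x * x!, as the convexity argument predicts.]

```python
import math

EULER_G = 0.5772156649015329  # Euler's constant g

def digamma(x, terms=100_000):
    """Partial sum of the series  G'(x)/G(x) = -g + sum (1/n - 1/(x+n-1))."""
    s = -EULER_G
    for n in range(1, terms + 1):
        s += 1.0 / n - 1.0 / (x + n - 1)
    return s

def dgamma(x):
    """G'(x) recovered as G(x) * (G'(x)/G(x))."""
    return math.gamma(x) * digamma(x)

# digamma(1) is -g (every series term vanishes at x = 1), and
# dgamma(2) ~ 0.4228 is indeed below D(1) = 1 * 1! = 1.
```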
gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (03/28/85)
> > ... A question one
> > could ask is can you really find ALL functions, defined in any way,
> > that equal their derivatives and which are they?
>
> I think the exponential is the only one, but I don't know.

Suppose f(.) and g(.) are any two such functions.

   Df = f
   Dg = g
   D(f - g) = Df - Dg    (by linearity of D)
            = f - g

So (f - g) is another such function.  So what?  Well, it's cute.
I think a uniqueness proof could be based on it.  However, let's
assume the availability of a text on ODEs:

Let f(.) be such a function of its single (real) argument.

   Df = f
   Df - f = 0
   (D - 1)f = 0          (linear (not just affine) operators)

Two C-infinity solutions:

   (A) f == 0 (the constant-zero function)
   (B) f == c exp(.) where c is any constant
       ((B) includes (A) as a special case, actually)

The theory of linear ODEs tells us that (B) is the most general
solution under very general continuity assumptions.
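[Editor's note, not part of the original posting: the standard
integrating-factor argument from any ODE text pins down solution (B)
directly, without appealing to the general theory.]

```latex
f'(x) = f(x)
\;\Longrightarrow\;
\frac{d}{dx}\!\left(e^{-x} f(x)\right)
  = e^{-x}\bigl(f'(x) - f(x)\bigr) = 0
\;\Longrightarrow\;
e^{-x} f(x) = c
\;\Longrightarrow\;
f(x) = c\,e^{x}.
```

Since e^{-x} f(x) has zero derivative everywhere, it is constant, which
gives exactly the family (B); c = 0 recovers the special case (A).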