[net.math] value of an integral

ljc@drux2.UUCP (ClerLJ) (02/20/86)

Can someone out "there" show me an elementary (but perhaps
involving "tricks") method for evaluating the following
integral:

int from 0 to inf s sup 3 over {e sup s - 1} ds

or "written" out:

      ( oo
      |        3
      |       s
      | -------------- ds
      |     s
      |    e  -  1
      ) 0

By elementary, I mean you can't use the fact that the integral
is related to both the Gamma Function and the Riemann Zeta
Function, or use techniques from the theory of complex variables.
Essentially, using only techniques from the standard undergraduate
calculus sequence, find the value of the integral.

However, techniques as used to find the value of:

int from 0 to inf e sup x sup 2 dx

are acceptable.

The value of the integral, if my memory has not failed me, is pi^4/15.
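
A quick numerical sanity check, assuming Python with scipy is available
(this is only a check, not the elementary derivation asked for):

from math import pi, expm1
from scipy.integrate import quad      # assumes scipy is available

def integrand(s):
    # expm1(s) = e^s - 1, accurate near s = 0; the integrand behaves like s^2 there
    return s**3 / expm1(s) if s > 0 else 0.0

value, _ = quad(integrand, 0, 100)    # the contribution beyond s = 100 is negligible
print(value, pi**4 / 15)              # both print approximately 6.4939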

This integral results when considering the radiated power of a black
body, integrated over all wavelengths.  Hence, the cross posting to
net.physics.

Note to net.math readers (if it applies to you read it, otherwise
no offense intended!):

I know I'm in for some scathing remarks that this is not mathematics,
(high school algebra: yes, calculus: maybe) in that there is no group
theory, no topology, and no set theory involved.  Well, those of us
in applied mathematics are already tired of comments of that sort,
so please don't bother :-).

Maybe what is needed is net.math.appld, and our pure mathematics
brethren wouldn't have to read such (from their perspective)
tom-foolery as this article.

Thanx in advance!
				larry cler
				ihnp4!drux2!ljc

percus@acf4.UUCP (Allon G. Percus) (02/22/86)

> However, techniques as used to find the value of:
> 
> int from 0 to inf e sup x sup 2 dx
> 
> are acceptable.

I'm afraid to say it, but as it happens,

				   inf
				  /	 2
				  [     x
 				  I    e   dx
				  ]
				  /
				   0

diverges.

What you want is:

				   inf
				  /	   2
				  [     - x
 				  I    e     dx
				  ]
				  /
				   0

To solve, observe that this is just:

			 |-----------------------------------
			 |     inf	       inf
			 |    /	       2      /	       2
			 |    [	    - x	      [	    - y
			 |    I	   e     dx   I    e     dy
			 |    ]		      ]
			 |    /		      /
			\|     0	       0
Which is:
			 |-----------------------------------
			 |     inf    inf
			 |    /      /	      2	   2
			 |    [	     [	   - x  - y
			 |    I	     I    e	     dx dy
			 |    ]	     ]
			 |    /	     /
			\|     0      0

Now convert this to polar coordinates:

			 |-----------------------------------
			 |     pi/2   inf
			 |    /      /	      2
			 |    [	     [	   - r
			 |    I	     I    e     r dr dtheta
			 |    ]	     ]
			 |    /	     /
			\|     0      0
Which is:
			 |-----------------------------------
			 |     pi/2  [		] inf
			 |    /      |	     2	|
			 |    [	     |	  - r	|
			 |    I	     |   e	|     dtheta
			 |    ]	     |   -----	|
			 |    /	     |	  - 2 	|
			\|     0     [		] 0

Or:
				 |-----------------------
				 |     pi/2
				 |    /
				 |    [	     1
				 |    I	     -  dtheta
				 |    ]	     2
				 |    /
				\|     0



Which finally becomes:

					sqrt(pi)
					--------
					    2

Using a similar technique, you should be able to solve your problem.
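
For the skeptical, here is a quick numerical confirmation of that value,
assuming Python with scipy is available:

from math import exp, pi, sqrt
from scipy.integrate import quad      # assumes scipy is available

value, _ = quad(lambda x: exp(-x * x), 0, 40)   # e^(-x^2) is negligible past x = 40
print(value, sqrt(pi) / 2)                      # both print approximately 0.886227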

           .
        -------
        |-----|             A. G. Percus
        |II II|      (ARPA) percus@acf4
        |II II|       (NYU) percus.acf4
        |II II|      (UUCP) ...{allegra!ihnp4!seismo}!cmcl2!acf4!percus
        |II II|
        -------

makdisi@yale.ARPA (Makdisi) (02/23/86)

The integral of s^3/[exp(s)-1] from 0 to infinity can be calculated by
elementary means (and a little trickery), except that one needs the result that
the sum of 1/n^4, n = 1 to infinity, is pi^4/90.  Here we go:

Write the integrand as s^3*exp(-s)/[1-exp(-s)], and then expand 1/[1-exp(-s)]
into 1 + exp(-s) + exp(-2s) + ... + exp(-ns) + ... .  This is justified since
s > 0, so exp(-s) < 1.  The integrand will then be the sum of s^3*exp(-n*s),
n = 1 to infinity (remember the exp(-s) term in the numerator).  What remains to
be done is to show that the integral of each term is 6/n^4, so that the sum of
the integrals of the terms is 6*pi^4/90, or pi^4/15.  In fact, the
indefinite integral of s^3*exp(-n*s) is
	   3     2
       /  s    3s    6s     6   \   -ns
    - (  --- + --- + --- + ---   ) e    + C          [note the minus sign!]
       \         2     3     4  /
	  n     n     n     n
from integrating by parts three times.  The definite integral of each term from
0 to infinity is therefore 6/n^4, since as s tends to plus infinity, exp(-ns)
"outdoes" the polynomial, and the product tends to 0.

The only proof I know that the sum of 1/n^4, n = 1 to infinity, is pi^2/90
involves Fourier series -- anyone know a more elementary way of proving this?

P.S.  1/[1-exp(-s)] = 1 + exp(-s) + [exp(-s)]^2 + [exp(-s)]^3 + ...
      =  1 + exp(-s) + exp(-2s) + exp(-3s) + ... , just in case that step wasn't
      clear.
--
						Kamal Khuri-Makdisi
						makdisi@yale-cheops.UUCP

      LAMAKDISIDKAMAL         LAMAKDISIDKAMAL         LAMAKDISIDKAMAL

bs@faron.UUCP (Robert D. Silverman) (02/24/86)

> The integral of s^3/[exp(s)-1] from 0 to infinity can be calculated by
> elementary means (and a little trickery) [...]
> Write the integrand as s^3*exp(-s)/[1-exp(-s)], and then expand 1/[1-exp(-s)]
> into 1 + exp(-s) + exp(-2s) + ... + exp(-ns) + ... .  [...]  The integrand
> will then be the sum of s^3*exp(-n*s), n = 1 to infinity [...].  The definite
> integral of each term from 0 to infinity is therefore 6/n^4 [...]

One minor detail: one needs to establish that the series converges uniformly
in order to justify the term-by-term integration.
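
Numerically the point is easy to illustrate: the integral of the difference
between the full integrand and the first N terms of the series shrinks to zero
as N grows.  A small sketch (an illustration, not a proof), assuming Python
with scipy is available:

from math import exp, expm1
from scipy.integrate import quad      # assumes scipy is available

def remainder(s, N):
    # full integrand minus the first N terms of the series
    if s == 0.0:
        return 0.0
    return s**3 / expm1(s) - sum(s**3 * exp(-n * s) for n in range(1, N + 1))

for N in (1, 5, 20, 80):
    tail, _ = quad(lambda s: remainder(s, N), 0, 100)
    print(N, tail)                    # tends to zero as N grows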

Bob Silverman

steiner@bgsuvax.UUCP (Ray Steiner) (02/26/86)

> Can someone out "there" show me an elementary (but perhaps
> involving "tricks") method for evaluating the following
> integral:
> 
> int from 0 to inf s sup 3 over {e sup s - 1} ds
> 
> [...]
> 
> The value of the integral, if my memory has not failed me, is pi^4/15.

Apropos of this same article, does anyone out there know if
the indefinite integral of e**x*sec(x) is expressible in terms
of elementary functions?  I have been wrestling with this
little teaser for 25 years without finding a solution!!
                                Ray Steiner

moroney@dec-jon.UUCP (02/27/86)

Can anyone solve the following differential equation for y as a function
of x (i.e. find f(x) such that y = f(x)):

	y'' y^2 = K,    y(0) = K1,   y'(0) = K2,   K1 > 0

(Here '' denotes the second derivative, ' the first derivative; K, K1, and
K2 are fixed constants.)

Thanks in advance.

-Mike Moroney

..decwrl!dec-rhea!dec-jon!moroney

tim@ism780c.UUCP (Tim Smith) (02/27/86)

In article <896@yale.ARPA> makdisi@yale-cheops.UUCP (Kamal Khuri-Makdisi) writes:
>
>The only proof I know that the sum of 1/n^4, n = 1 to infinity, is pi^2/90
>involves Fourier series -- anyone know a more elementary way of proving this?
>
It depends on what you consider elementary.  In "An Introduction
to Analytic Number Theory", Tom Apostol evaluates Zeta(2n) for
positive integers n; this is done in chapter 12, and your series
is Zeta(4).  He has to use contour integration, but I think it is
still more elementary than Fourier series.

-- 
Tim Smith       sdcrdcf!ism780c!tim || ima!ism780!tim || ihnp4!cithep!tim

jablow@brahms.BERKELEY.EDU (Eric Robert Jablow) (02/28/86)

In article <1396@decwrl.DEC.COM> moroney@jon.DEC (Mike Moroney) writes:
>
>Can anyone solve the following differential equation for y as a function
>of x (i.e. find f(x) such that y = f(x)):
>
>	y'' y^2 = K,    y(0) = K1,   y'(0) = K2,   K1 > 0
>
>(Here '' denotes the second derivative, ' the first derivative; K, K1, and
>K2 are fixed constants.)

Standard trick: in any differential equation of the form

		f(y, y', y")=0		(no independent variable),

let p=y'.  Then y"=p'=p*(dp/dy) by the chain rule.  Thus you get

		f(y, p, p(dp/dy))=0.

This is a first-order ODE for p as a function of y.  Solve it to get

		g(y)=p=y'.

Then solve y'=g(y) for y by separating variables.
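
As a concrete check on the equation that prompted this (y'' y^2 = K), the
substitution gives p dp/dy = K/y^2, so p^2/2 + K/y is constant along any
solution.  A short numerical sketch, assuming Python with scipy is available
and using arbitrary values for K, K1, K2:

import numpy as np
from scipy.integrate import solve_ivp    # assumes scipy is available

K, K1, K2 = 2.0, 1.0, 0.5                # arbitrary choices; K1 = y(0) > 0

def rhs(x, state):
    y, p = state                         # state = (y, y')
    return [p, K / y**2]                 # y'' = K / y^2, i.e. y'' y^2 = K

sol = solve_ivp(rhs, (0.0, 2.0), [K1, K2], rtol=1e-10, atol=1e-12,
                t_eval=np.linspace(0.0, 2.0, 9))
energy = 0.5 * sol.y[1]**2 + K / sol.y[0]
print(energy)                            # constant, equal to 0.5*K2**2 + K/K1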

The **best** book on ODEs is Ordinary Differential Equations, by Ince.
Dover publishes it, so it is cheap.  It is old-fashioned, though.


			Respectfully,
			Eric Robert Jablow
			MSRI
			ucbvax!brahms!jablow

	I may be a screwy little wabbit, but at least I'm not
	going to Alcatraz!

				--E. Fudd--

weemba@brahms.BERKELEY.EDU (Matthew P. Wiener) (02/28/86)

In article <12087@ucbvax.BERKELEY.EDU> jablow@brahms.UUCP (Eric Robert Jablow) writes:
>The **best** book on ODEs is Ordinary Differential Equations, by Ince.
>Dover publishes it, so it is cheap.  It is old-fashioned, though.

The **best** book for someone who just wants to solve a particular ODE,
like the person you responded to, is the Schaum's Outline Series on the
topic.  It too is cheap.

I don't like E L Ince's book.  I prefer P Hartman's or I G Petrovski's,
with the same title.

ucbvax!brahms!weemba	Matthew P Wiener/UCB Math Dept/Berkeley CA 94720

tim@ism780c.UUCP (Tim Smith) (03/01/86)

makdisi@yale-cheops.UUCP (Kamal Khuri-Makdisi) writes:
>
> The only proof I know that the sum of 1/n^4, n = 1 to infinity,
> is pi^2/90 involves Fourier series -- anyone know a more
> elementary way of proving this?
>

That's pi^4/90.  An elementary proof is in "Challenging Mathematical
Problems with Elementary Solutions, Volume II", by A.M. Yaglom and
I.M. Yaglom.  This is problem 145.  Here is what they do:

( in the following, things like n,m,j, etc, are integers )

First, we will establish that zeta(2) = pi^2/6.  This is just to
show how they approach this kind of thing, and besides, we might
as well be complete.

First, establish the formula

	sin nx = C(n,1) sin^1 (x) cos^(n-1) (x) -
		 C(n,3) sin^3 (x) cos^(n-3) (x) +
		 C(n,5) sin^5 (x) cos^(n-5) (x) - ...

which you can do by induction.

This can be rewritten as

	sin nx = sin^n (x) [ C(n,1) cot^(n-1)(x) -
			     C(n,3) cot^(n-3)(x) + ... ]

We let n = 2m+1, and let x = j pi / n,  1 <= j <= m.

Then we have sin(x) != 0, and sin(nx) = 0.  This gives us

	C(2m+1,1) cot^(2m)(x) - C(2m+1,3) cot^(2m-2) (x) + ... = 0

or, in other words, the polynomial

	C(2m+1,1) z^m - C(2m+1,3) z^(m-1) + ... = P(z)

has the roots

	cot^2 ( pi / n ), cot^2 ( 2 pi /n ), ... , cot^2 ( m pi /n ).

Now, the sum of the roots of a polynomial A0 y^n + A1 y^(n-1) + ...
is -A1/A0.

Thus,

	cot^2 (pi/n) + cot^2 (2 pi/n) + ... + cot^2 (m pi/n) =

	C(2m+1,3) / C(2m+1,1 ) = m(2m-1)/3

Noting that csc^2 u = cot^2 u + 1, we get that


	csc^2 (pi/n) + csc^2 (2 pi/n) + ... + csc^2 (m pi/n) = m(2m+2)/3
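
Both closed forms are easy to spot-check numerically, assuming Python is
available:

from math import pi, tan, sin

for m in (3, 7, 25):
    n = 2 * m + 1
    cot2 = sum(1.0 / tan(j * pi / n) ** 2 for j in range(1, m + 1))
    csc2 = sum(1.0 / sin(j * pi / n) ** 2 for j in range(1, m + 1))
    print(cot2, m * (2 * m - 1) / 3)   # these agree
    print(csc2, m * (2 * m + 2) / 3)   # and so do these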

Now we are ready to go places!

From our elementary trig classes we know that

	0 < cot w < 1/w < csc w   when 0 < w < pi/2

or

	cot^2 w < 1/w^2 < csc^2 w

Using this, and the formulas above for sums of cot^2 and csc^2, we get

	m(2m-1)/3 < n^2/pi^2 * ( 1/1^2 + 1/2^2 + ... + 1/m^2 ) < m(2m+2)/3

or ( putting in 2m+1 for n),

	m(2m-1) pi^2                                 m(2m+2) pi^2
	------------ < 1/1^2 + 1/2^2 + ... + 1/m^2 < ------------
	3 ( 2m+1 )^2                                 3 ( 2m+1 )^2

As m -> oo, the things on the end both -> pi^2/6.

Now for zeta(4)!  We need to evaluate the sum

	cot^4 (pi/n) + cot^4 (2 pi/n) + ... + cot^4 (m pi/n)

In other words, the sum of the squares of the roots of P(z).

If we consider a polynomial A0 y^n + A1 y^(n-1) + A2 y^(n-2) + ..., with
roots R1,R2,...,Rn, then we have

	sum Ri = -A1/A0, and
	sum RiRj ( i<j ) = A2/A0

Note that

	(sum Ri)^2 = sum( Ri^2 ) + 2 sum RiRj, i<j, or

	sum( Ri^2 ) =A1^2/A0^2 - 2 A2/A0.

Leaving the details to the reader, :-), we get

	cot^4 (pi/n) + cot^4 (2 pi/n) + ... + cot^4 (m pi/n) =

	m(2m-1)(4m^2+10m-9)/45

Using csc^4 = cot^4 + 2 cot^2 + 1, you can grind out the corresponding
sum for csc^4:

	8m(m+1)(m^2+m+3)/45.

We then get

	m(2m-1)(4m^2+10m-9) pi^4    m          8m(m+1)(m^2+m+3) pi^4
	------------------------ < sum 1/k^4 < ---------------------
	      45 (2m+1)^4           1              45 (2m+1)^4

We now submit to an orgy of mindless manipulation and get

	pi^4       1         2         3       13         m
	---- (1 - ----)(1 - ----)(1 + ---- - --------) < sum 1/k^4
	 90       2m+1      2m+1      2m+1   (2m+1)^2     1


	  pi^4        1             11
	< ----(1 - --------)(1 + --------)
	   90      (2m+1)^2      (2m+1)^2

Letting m -> oo, we get

	  zeta(4) = pi^4/90
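
Again, a numerical spot check of the cot^4 and csc^4 sums and of the squeeze,
assuming Python is available:

from math import pi, tan, sin

m = 50
n = 2 * m + 1
cot4 = sum(1.0 / tan(j * pi / n) ** 4 for j in range(1, m + 1))
csc4 = sum(1.0 / sin(j * pi / n) ** 4 for j in range(1, m + 1))
print(cot4, m * (2 * m - 1) * (4 * m * m + 10 * m - 9) / 45)   # these agree
print(csc4, 8 * m * (m + 1) * (m * m + m + 3) / 45)            # and these

lower = cot4 * pi**4 / n**4
upper = csc4 * pi**4 / n**4
partial = sum(1.0 / k**4 for k in range(1, m + 1))
print(lower, partial, upper, pi**4 / 90)   # lower < partial < upper, all near 1.0823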

They point out that we may evaluate sum cot^6, sum cot^8, etc, in the same
sort of way, and get

	zeta(6) = pi^6/945
	zeta(8) = pi^8/9450
	zeta(10) = pi^10/93555
	zeta(12) = 691 pi^12 / 638512875

Some other formulas for pi that they get by playing with trig functions
are Vieta's formula:

	Let R1 = (1/2)^(1/2), and
	    R(n+1) = (1/2 + 1/2 Rn)^(1/2)

	Then

		2/pi = R1 R2 R3 ...

Leibniz's formula:

	pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...

and Wallis's formula:

	pi   2 2 4 4 6 6 8 8
	-- = - - - - - - - -  ...
	2    1 3 3 5 5 7 7 9
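
All three formulas are easy to check numerically, assuming Python is available
(Leibniz's series converges very slowly, so don't expect many digits from it):

from math import pi, sqrt

# Vieta
r = sqrt(0.5)
prod = r
for _ in range(40):
    r = sqrt(0.5 + 0.5 * r)
    prod *= r
print(prod, 2 / pi)                  # agree to full double precision

# Leibniz (very slow convergence)
print(sum((-1) ** k / (2 * k + 1) for k in range(200000)), pi / 4)

# Wallis
w = 1.0
for k in range(1, 200000):
    w *= (2.0 * k) * (2 * k) / ((2 * k - 1) * (2 * k + 1))
print(w, pi / 2)                     # agree to about five decimal places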

This is a great book.

Ubizmo, these computers sure suck for typing equations!
-- 
Tim Smith       sdcrdcf!ism780c!tim || ima!ism780!tim || ihnp4!cithep!tim

weemba@brahms.BERKELEY.EDU (Matthew P. Wiener) (03/02/86)

In article <3@bgsuvax.UUCP> steiner@bgsuvax.UUCP (Ray Steiner) writes:
>> Note to net.math readers (if it applies to you read it, otherwise
>> no offense intended!):
>> 
>> I know I'm in for some scathing remarks that this is not mathematics,
>> (high school algebra: yes, calculus: maybe) in that there is no group
>> theory, no topology, and no set theory involved.  Well, those of us
>> in applied mathematics are already tired of comments of that sort,
>> so please don't bother :-).
>> 
>> Maybe what is needed is net.math.appld, and our pure mathematics
>> brethren wouldn't have to read such (from their perspective)
>> tom-foolery as this article.

I don't mind calculus questions or high school math questions per se.
I do mind when they get >10 solutions posted, mostly the same!  The
same problem plagues net.puzzle, for example.

I like both pure and applied mathematics.

>Apropos of this same article, does anyone out there know if
>the indefinite integral of e**x*sec(x) is expressible in terms
>of elementary functions?  I have been wrestling with this
>little teaser for 25 years without finding a solution!!

It isn't.  Sorry.

ucbvax!brahms!weemba	Matthew P Wiener/UCB Math Dept/Berkeley CA 94720

ravikuma@umn-cs.UUCP (03/02/86)

For those interested in combinatorial algorithm design,
here is a deceptively simple problem:
  "A and B, two players, each independently think of a
positive integer between 1 and 1000.  You are allowed
to ask either of them (in any order) questions of the
form "Is your number less than or equal to K?" for any K of
your choice, and must find out the SMALLER of the two guessed
numbers.  Trivially, you can do this using 20 questions,
since 10 questions suffice to determine each guess.  How much
better can you do?  (14 questions suffice, though it is
not easy to prove that this is the lower bound.)
Note: we are interested in the worst-case bound."

lambert@boring.uucp (Lambert Meertens) (03/03/86)

> [...] does anyone out there know if the indefinite integral of e**x*sec(x)
> is expressible in terms of elementary functions?

The indefinite integral can be written as a Fourier-like series:

        x  cos x + sin x   cos 3x + 3 sin 3x   cos 5x + 5 sin 5x
  C + 2e  (------------- - ----------------- + ----------------- - ... ) .
	     1  +  1^2        1   +   3^2         1   +   5^2

This can be found by using the formal series

                ix   3ix    5ix
    sec x = 2 (e   -e    + e    + ... ) .

The integration constant C jumps at the poles of the integrand.
I don't see how to rewrite the sum as a finite closed expression, although it
does not look beyond hope; in particular, it is reminiscent of the expansions

              2m             1      cos x     cos 2x    cos 3x
    cosh mx = -- sinh m.pi (---- - ------- + ------- - ------- + ... )
              pi            2m^2   m^2+1^2   m^2+2^2   m^2+3^2

               2             sin x    2 sin 2x   3 sin 3x
    sinh mx = -- sinh m.pi (------- - -------- + -------- - ... ) .
              pi            m^2+1^2    m^2+2^2    m^2+3^2
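
The term-by-term step behind the series can be checked symbolically: each
building block e^x (cos mx + m sin mx)/(1+m^2) differentiates back to
e^x cos mx, which is exactly what the formal series for sec x produces term by
term.  A sketch, assuming Python with sympy is available:

import sympy as sp

x = sp.symbols('x')
m = sp.symbols('m', positive=True)
term = sp.exp(x) * (sp.cos(m * x) + m * sp.sin(m * x)) / (1 + m**2)
print(sp.simplify(sp.diff(term, x) - sp.exp(x) * sp.cos(m * x)))   # prints 0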

-- 

     Lambert Meertens
     ...!{seismo,okstate,garfield,decvax,philabs}!lambert@mcvax.UUCP
     CWI (Centre for Mathematics and Computer Science), Amsterdam

steiner@bgsuvax.UUCP (Ray Steiner) (03/04/86)

Thanks to all of you who answered my earlier query. Does anyone
know how to prove that the integral of e**x*sec(x) is not
expressible in terms of elementary functions?

larsen@fisher.UUCP (Michael Larsen) (03/11/86)

> 
> 
> For those interested in combinatorial algorithm design,
> here is a deceptively simple problem:
>   "A and B, two players, each independently think of a
> positive integer between 1 and 1000.  You are allowed
> to ask either of them (in any order) questions of the
> form "Is your number less than or equal to K?" for any K of
> your choice, and must find out the SMALLER of the two guessed
> numbers.  Trivially, you can do this using 20 questions,
> since 10 questions suffice to determine each guess.  How much
> better can you do?  (14 questions suffice, though it is
> not easy to prove that this is the lower bound.)
> Note: we are interested in the worst-case bound."

	To determine the required number of guesses, define more generally
f(x, y) to be the number of questions required to determine the lesser of
two integers, one in the interval [1, x], one in the interval [1+x-y, x],
where x >= y.  It is easy to show that the optimal first question in this
situation relates to the larger of the two intervals, [1, x].  This reduces
the computation of f(1000, 1000) to a problem requiring ~10^6 memory locations
and ~10^9 operations.  The calculation is simplified by defining g(m, n), for
m >= f(n, n), to be the largest integer p such that f(p, n) = m.  Clearly,

(*)			g(m+1, n) = g(m, n) + 2^m.

Now define h(n) = g(f(n, n), n).  If you know h(n), equation (*) gives all
other values of g(m, n).  Having computed h(n) and f(n, n) for all n such that
f(n, n) < M, you compute h(n) for all n with f(n, n) = M as follows:

Take the largest k with f(k, k) = M - 1 such that g(M - 1, k) <= n.  If no
such k exists, then f(n, n) > M.  If such a k does exist, then consider

			p = k + g(M - 1, n - k).

If p >= n, then f(n, n) = M, and h(n) = p.  Otherwise f(n, n) > M.
This gives a linear time, linear memory algorithm for computing f(x, x) for
all x < N.  Having implemented it, I report the following results:

			Range of n	Value of f(n, n)
			1		0
			2		2
			3		3
			4 - 6		4
			7 - 10		5
			11 - 19		6
			20 - 34		7
			35 - 62		8
			63 - 113	9
			114 - 209	10	
			210 - 387	11
			388 - 720	12
			721 - 1350	13

In particular, f(1000, 1000) = 13.
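
The small entries in this table can be cross-checked independently by a
brute-force minimax search over knowledge states (an interval of possibilities
for each player).  This is not the algorithm described above, and it is only
feasible for small n, but it reproduces the first few rows.  A sketch in
Python, using only the standard library:

from functools import lru_cache

def questions_needed(n):
    # A is known to lie in [a1, a2], B in [b1, b2]; count worst-case questions.
    @lru_cache(maxsize=None)
    def cost(a1, a2, b1, b2):
        # min(A, B) is determined once one interval is a single value that
        # cannot exceed anything in the other interval.
        if (a1 == a2 and a2 <= b1) or (b1 == b2 and b2 <= a1):
            return 0
        best = float('inf')
        for k in range(a1, a2):          # ask A: "is your number <= k?"
            best = min(best, 1 + max(cost(a1, k, b1, b2),
                                     cost(k + 1, a2, b1, b2)))
        for k in range(b1, b2):          # ask B the same kind of question
            best = min(best, 1 + max(cost(a1, a2, b1, k),
                                     cost(a1, a2, k + 1, b2)))
        return best
    return cost(1, n, 1, n)

for n in (1, 2, 3, 4, 6, 7, 10, 11, 19):
    print(n, questions_needed(n))        # should match the first rows above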