[net.math] Some Math problems

barry@ihuxt.UUCP (R. G. Barry) (10/19/84)

I have a couple of questions on some problems I'm having trouble with
and if anyone could help, it would be much appreciated.

Here they are:

1.Given 4 points on a plane, (3,0,0), (0,0,0), (3,0,5) and (0,0,5),
  (i.e. a plane on the xz plane), how do you determine the equation
  of that plane???

2. Given a matrix A where


		.68	-.6928	.24	0
		.6928	 .5	-.52	0
	A = 	.24	 .52	 .82	0
		 0	  0	  0	1

find a transformation B such that BB=A.

If anyone has any ideas, send mail to ihuxt!barry

Thanks in advance,
ihuxt!barry

kaden@uiucuxc.UUCP (10/22/84)

	The answer to Q1 is y = 0. In fact, any THREE points with
	y-coordinate = 0 will determine the plane y = 0.

chuck@dartvax.UUCP (Chuck Simmons) (10/23/84)

<Hi!  I'm Judy.  Fly me to Hawaii!>

>1.Given 4 points on a plane, (3,0,0), (0,0,0), (3,0,5) and (0,0,5),
>  (i.e. a plane on the xz plane), how do you determine the equation
>  of that plane???

I'm not sure I understand this question.  Let me assume that you have
been given a number of points (purportedly all lying in the same plane)
and you want to discover an equation for this plane.  

First, since a plane is determined by 3 points, choose 3 points from
the set of points that you have been given which are not collinear.
(If all triples of points are collinear, then all your points lie
on the same line and you won't be able to find a unique equation for
the plane.)

[3 points are collinear if the vectors joining them are parallel.
Let our 3 selected points be P1=(x1,y1,z1), P2=(x2,y2,z2), and P3=(x3,y3,z3),
and set u = P2-P1 and v = P3-P1.  The points are collinear exactly when
the cross product of u and v is the zero vector, i.e. when
    u2*v3 - u3*v2 = 0
    u3*v1 - u1*v3 = 0
    u1*v2 - u2*v1 = 0
where u = (u1,u2,u3) and v = (v1,v2,v3).]
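A concrete version of this test, as a small Python sketch (the function
names are mine):

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def collinear(p1, p2, p3, eps=1e-12):
    """True when p1, p2, p3 lie on one line: (p2-p1) x (p3-p1) = 0."""
    u = tuple(p2[i] - p1[i] for i in range(3))
    v = tuple(p3[i] - p1[i] for i in range(3))
    return all(abs(c) <= eps for c in cross(u, v))
```

For example, (1,0,1), (2,0,2), (3,0,3) come out collinear, while
(3,0,0), (0,0,0), (3,0,5) do not.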

The equation of a plane is given by ax+by+cz+d=0. 
So we need to solve the system of equations:
    ax1+by1+cz1+d=0
    ax2+by2+cz2+d=0
    ax3+by3+cz3+d=0
The astute reader will notice there will always be more than one solution.
As long as the 3 chosen points are not collinear, all the solutions are
scalar multiples of each other.

Example:  Given points (3,0,0), (0,0,0), (3,0,5) and (0,0,5), determine
the equation of the plane on which they lie.

Choose (3,0,0), (0,0,0), and (3,0,5) as 3 points that determine the
plane.  Then, we solve:
    3a+0b+0c+d=0
    0a+0b+0c+d=0
    3a+0b+5c+d=0

This gives a=c=d=0.  Choose b=1 and the desired equation pops 
out as y=0.
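There is also a shortcut for the same computation: the normal (a,b,c) can
be taken as the cross product of P2-P1 and P3-P1, and d then follows from
plugging P1 into ax+by+cz+d=0.  A Python sketch (names are mine):

```python
def plane_through(p1, p2, p3):
    """Return (a, b, c, d) with ax+by+cz+d=0 for the plane through p1, p2, p3.

    The normal (a, b, c) is the cross product of p2-p1 and p3-p1;
    d then follows from requiring that p1 satisfy the equation.
    """
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    a = uy*vz - uz*vy
    b = uz*vx - ux*vz
    c = ux*vy - uy*vx
    d = -(a*p1[0] + b*p1[1] + c*p1[2])
    return a, b, c, d
```

On the example, plane_through((3,0,0), (0,0,0), (3,0,5)) returns
(0, 15, 0, 0), a scalar multiple of the equation y = 0.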


>2. Given a matrix A where
>
>		.68	-.6928	.24	0
>		.6928	 .5	-.52	0
>	A = 	.24	 .52	 .82	0
>		 0	  0	  0	1
>
>find a transformation B such that BB=A.

This one seems a lot harder.  Does anyone know if there is a general
method of taking the square-root of a square matrix?  One approach that
I'm investigating uses the Newton-Raphson method.  This method appears
to have promise.  I used it on a 2x2 matrix and came up with an answer.
Our Honeywell went to bed for the night before I could try my algorithm
out on the above matrix.

The Newton-Raphson method is used to take the square root of a real
number.  Suppose we want to know the square root of X.  We compute
successive approximations to the square root.  These approximations
converge rather quickly to the square root.  The following PL/I routine
implements the Newton-Raphson method.

sqrt: procedure (x, epsilon) returns (float);
declare x       float, /* number to find root of */
        epsilon float, /* how close we need to get */
        guess   float; /* our approximation */

guess = 1;                                /* initial approximation */
do while (abs (x-guess*guess) > epsilon); /* loop until we get close */
    guess = (guess + x/guess) / 2;        /* get a new guess */
end;

return  (guess);
end sqrt;

I am attempting to approximate the square root of a matrix using this
algorithm but using a square matrix instead of a floating point number.
I don't really know that my sequence of approximations will converge for
matrices.  Also, there is the possibility that you may run into an
approximation which is non-invertible.
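For the record, the matrix version of the iteration is
X <- (X + inverse(X)*A)/2, starting from the identity.  A numpy sketch
(the iteration is known to converge when A has no eigenvalues on the
non-positive real axis, which this rotation-like matrix satisfies):

```python
import numpy as np

def newton_sqrtm(a, iterations=25):
    """Newton-Raphson square root of a matrix: X <- (X + inv(X) A)/2.

    Starting from the identity keeps every iterate a polynomial in A,
    so the iterates commute with A.  The iteration can fail (or hit a
    singular iterate) when A has eigenvalues on the negative real axis.
    """
    x = np.eye(len(a))
    for _ in range(iterations):
        x = (x + np.linalg.solve(x, a)) / 2   # solve(x, a) = inv(x) @ a
    return x

A = np.array([[.68,   -.6928,  .24, 0],
              [.6928,  .5,    -.52, 0],
              [.24,    .52,    .82, 0],
              [0,      0,      0,   1]])
B = newton_sqrtm(A)
```

After a handful of iterations B multiplied by itself reproduces A to
machine precision.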

Anyway...  I hope this hasn't been too tedious.

-- Chuck
dartvax!chuck

chuck@dartvax.UUCP (Chuck Simmons) (10/23/84)

<You little sqrt.>

>2. Given a matrix A where
>
>		.68	-.6928	.24	0
>		.6928	 .5	-.52	0
>	A = 	.24	 .52	 .82	0
>		 0	  0	  0	1
>
>find a transformation B such that BB=A.

Using the Newton-Raphson method described earlier, I came up with


          .914249       -.399978        .0642701       0
          .399978        .866079       -.300184        0
    B =   .0642701       .300184        .95183         0
           0              0              0             1
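As a sanity check, one can multiply this B by itself and compare against
the original A; a numpy sketch:

```python
import numpy as np

A = np.array([[.68,     -.6928,   .24,      0],
              [.6928,    .5,     -.52,      0],
              [.24,      .52,     .82,      0],
              [0,        0,       0,        1]])
B = np.array([[.914249, -.399978,  .0642701, 0],
              [.399978,  .866079, -.300184,  0],
              [.0642701, .300184,  .95183,   0],
              [0,        0,        0,        1]])

# B*B reproduces A to roughly the precision of the digits quoted above.
err = np.abs(B @ B - A).max()
```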

-- Chuck
dartvax!chuck

gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (10/24/84)

> 2. Given a matrix A where
> 
> 
> 		.68	-.6928	.24	0
> 		.6928	 .5	-.52	0
> 	A = 	.24	 .52	 .82	0
> 		 0	  0	  0	1
> 
> find a transformation B such that BB=A.

I don't have the solution right at hand, but here is how I would
go about it (and its generalization) if it were my problem...

First, note that the block diagonal form of A means you can limit
your search to a block diagonal B:

		?	?	?	0
	B =	?	?	?	0
		?	?	?	0
		0	0	0	1

This amounts to a 3-dimensional problem.  Next, note that your
matrix A is orthonormal.  This means that it represents a 3-D
rotation by some amount (call it theta) about some fixed axis.
I would find the principal axes, and make up a solution B that
represents a rotation of half-theta about the same axis.  Books
on linear algebra or crystallography will show you how to
diagonalize the matrix.
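Sketching that recipe concretely for this A (numpy; the trace of the 3x3
block is 2.0, so 1 + 2*cos(theta) = 2 gives theta = 60 degrees, and the
axis can be read off the antisymmetric part of the matrix):

```python
import numpy as np

A3 = np.array([[.68,   -.6928,  .24],
               [.6928,  .5,    -.52],
               [.24,    .52,    .82]])

# Rotation angle: for a 3-D rotation matrix, trace = 1 + 2 cos(theta).
theta = np.arccos((np.trace(A3) - 1) / 2)        # 60 degrees for this A3

# Rotation axis: read off the antisymmetric part of A3.
axis = np.array([A3[2, 1] - A3[1, 2],
                 A3[0, 2] - A3[2, 0],
                 A3[1, 0] - A3[0, 1]]) / (2 * np.sin(theta))
axis /= np.linalg.norm(axis)   # guard against rounding in the quoted matrix

def rotation(n, angle):
    """Rodrigues' formula: rotation by `angle` about the unit vector n."""
    k = np.array([[0, -n[2], n[1]],
                  [n[2], 0, -n[0]],
                  [-n[1], n[0], 0]])
    return np.eye(3) + np.sin(angle) * k + (1 - np.cos(angle)) * (k @ k)

B3 = rotation(axis, theta / 2)   # rotate by half-theta about the same axis
```

B3 squares back to the 3x3 block of A; bordering it with the trailing 1
gives the 4x4 answer.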

chuck@dartvax.UUCP (Chuck Simmons) (10/26/84)

> 	The answer to Q1 is y = 0. In fact, any THREE points with
> 	y-coordinate = 0 will determine the plane y = 0.

*Any* three points?  What plane does (1,0,1),(2,0,2),(3,0,3) 
determine?

-- Nitpicky  

robison@uiucdcsb.UUCP (10/28/84)

I originally sent the answer to question 2 directly to Barry.
Since no one else posted an answer, I'll "reprint" mine below:

-------------------------------------------------------------------------

- 2. Given a matrix A where
- 
- 
-		.68	-.6928 	 .24	0
-		.6928	 .5	-.52	0
-	 A = 	.24	 .52	 .82	0
-		  0	  0	  0	1
- 
- find a transformation B such that BB=A.

This is the problem of finding the square-root of matrix A.

(Notation: "*" means multiplication, "**" means exponentiation.  I wish I
could have superscripts/subscripts or at least an APL character set!)

First find the matrix Z, such that L = (1/Z)*A*Z, where L is a diagonal matrix:

          l1  0  0  0
     L =   0 l2  0  0
           0  0 l3  0
           0  0  0 l4

Then A = Z*L*(1/Z) and B = Z*(L**.5)*(1/Z).  (Why the answer for B?  Try 
multiplying out BB.)  Since L is a diagonal matrix, its square-root is
trivially:

           l1**.5      0      0      0
            0     l2**.5      0      0
            0          0 l3**.5      0
            0          0      0 l4**.5

The values l1,l2,l3, and l4 are the eigenvalues of A.  The columns of Z are
the eigenvectors of A.  Methods for finding eigenvectors and eigenvalues
may be found in a linear algebra textbook.  The eigenstuff has many other uses. 
You can, for instance, take the "tangent" of a matrix by finding L and Z, then
taking the tangent of each element of L, and then transforming back in the
same manner as for finding the square-root.  (This works for any function
with a Maclaurin expansion).

One neat application involves taking e raised to a matrix power.  One can
take the formula for solving a single linear differential equation and use it
for solving a system of linear differential equations by substituting matrix
exponentiation for scalar exponentiation.

-----------------------------------------------------------------------------
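In numpy terms the recipe reads as follows (np.linalg.eig returns the
eigenvalues l and the eigenvector matrix Z; the eigenvalues of the matrix
above are complex, but the imaginary parts cancel in B):

```python
import numpy as np

A = np.array([[.68,   -.6928,  .24, 0],
              [.6928,  .5,    -.52, 0],
              [.24,    .52,    .82, 0],
              [0,      0,      0,   1]])

l, Z = np.linalg.eig(A)                        # A = Z * diag(l) * inv(Z)
# Principal square root of each eigenvalue, then transform back.
B = Z @ np.diag(np.sqrt(l.astype(complex))) @ np.linalg.inv(Z)
B = B.real     # the imaginary parts are rounding noise for this A
```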

Added note: the Newton-Raphson method probably only works for matrices
with no eigenvalues on the negative real axis, but I can't prove this.

- Arch @ uiucdcs
 

jfh@browngr.UUCP (John "Spike" Hughes) (10/29/84)

There's a general method for finding the square root of
a symmetric matrix: Every real symmetric matrix has
real non-negative eigenvalues (see the definition of eigenvalue in any
linear algebra book). Let the eigenvalues be a1, a2, ..., an,
and corresponding unit eigenvectors be v1, v2, ..., vn (this means
that
       M vi = ai vi
for each i, where M is the matrix in question).  Now line up all the eigenvectors
in a matrix, each eigenvector being a column, and call the matrix Q. Then
Qt M Q is a diagonal matrix D (with the eigenvalues of M down the
diagonal) (here Qt denotes the transpose of Q). Let E be the square
root of D (i.e. take the square root of each entry in D). Then a
square root of M is given by

     Q E Qt

This method also works for non-symmetric matrices, if they have
all positive eigenvalues, and all are real, but you must use
Q inverse rather than Q transpose (alas).
     There is also a polar coordinate decomposition of a matrix
into the product of a real symmetric matrix R and a unitary matrix
U (in the case of a 2*2 complex matrix, these correspond to
r and theta for polar coordinates in the plane), and you can do something
from there, but you probably don't want to...

gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (10/30/84)

>      Q E Qt
> 
> This method also works for non-symmetric matrices, if they have
> all positive eigenvalues, and all are real, but you must use
> Q inverse rather than Q transpose (alas).

In the original query, the matrix was orthogonal, so its inverse
WAS its transpose.

worley@Navajo.ARPA (11/01/84)

> There's a general method for finding the square root of
> a symmetric matrix: Every real symmetric matrix has
> real non-negative eigenvalues ...

(-2) is a (trivial) 1X1 real symmetric matrix which does not
have non-negative eigenvalues. You need the matrix to be
real symmetric positive definite (i.e. explicitly require that
the eigenvalues be positive) in order to guarantee that
    B = Q(sqrt(D))Q(t)
    A = BB
will have a non-complex B.

> This method also works for non-symmetric matrices, if they have
> all positive eigenvalues, and all are real, but you must use
> Q inverse rather than Q transpose (alas).

You also need a full set of eigenvectors, which don't always exist
for a nonsymmetric matrix.
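A concrete instance: the 2x2 Jordan block N = [[0,1],[0,0]] has the double
eigenvalue 0 but only one independent eigenvector, so the eigenvector
matrix is singular and the diagonalization recipe breaks down.  (In fact
this N has no square root at all: any B with BB = N would satisfy B**4 = 0,
so B is nilpotent, and a nilpotent 2x2 matrix already has BB = 0.)  A quick
numpy illustration:

```python
import numpy as np

N = np.array([[0., 1.],
              [0., 0.]])      # defective: double eigenvalue 0, one eigenvector

w, V = np.linalg.eig(N)
# Both eigenvalues are 0, and the two eigenvector columns are (numerically)
# parallel, so V has no inverse and B = V sqrt(D) inv(V) cannot be formed.
det_v = abs(np.linalg.det(V))
```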