[comp.sys.handhelds] Linear algebra, complex eigenvalues, homogeneous systems

jmorriso@fs0.ee.ubc.ca (John Paul Morrison) (03/08/90)

		Linear Algebra On the HP28S

Note:
	This article presents a few simple linear algebra techniques
	for the HP28S. This article will be short on code, because the
	methods are so simple. I am presenting this because I haven't
	seen anyone else say anything on the topic. So if you say "So
	what, that's obvious!" then go f.o.a.d. because I don't think
	many people have seen this yet. If you haven't found it out for
	yourself yet then you will be annoyed that you didn't see it
	staring right at you! It is a little like that eigenvalue
	method using the SOLVER: I found that method a year ago, and
	many people found it independently too, but I didn't see any
	word on it until now. So this article is for the people who
	haven't found it yet.

Topics:
	Homogeneous systems of equations
		Finding the solution space
	Systems with singular matrices
		Finding the solution space
	Eigenvectors
		Finding eigenvectors given unique eigenvalues
		Finding eigenvectors with repeated eigenvalues
	Complex eigenvalues

Intro

	A common problem in linear algebra is to find the solution
	space to homogeneous systems of equations. They take this
	form:
		Ax = 0 
	Unless otherwise noted, capital letters stand for matrices, and
	lowercase are vectors. There is always the trivial solution x =
	0.  If det A != 0 then there is only the trivial solution.  If
	det A = 0 then there are infinitely many solutions. Finding the
	solution space if there are infinitely many solutions is the
	interesting part. The HP28S does not provide a standard way to
	find the solution space, but it can be done easily, and I am
	pretty sure the technique works for all singular matrices.
	Because I am not in linear math, and I don't know exactly how
	the HP solves linear systems, I can't say if these techniques
	will ALWAYS work. You will have to evaluate the results of your
	solutions. However, I am quite confident that these will work
	in most cases.

Gettin down ta bizness:
	
		Ax = b, where b != 0

	You already know that the HP can solve over-determined and
	under-determined non-homogeneous systems of equations. These
	systems involve singular matrices, and somewhere in the process
	of solving the system, a vector from the null space must get
	thrown in there. However, these systems do not always have a
	solution. So what kind of answer do you get if there is no
	solution? You don't get a garbage answer; you get a solution
	to the corresponding homogeneous equation:
		Ax = 0
		
	Example:

		A =
		[[ 1 3 -15  7] 
		 [ 1 4 -19 10] 
		 [ 2 5 -26 11] 
		 [ 0 0   0  0]]
		 
	You can verify that det A = 0. If you try to solve Ax =
	(1,2,1,0), you get a valid answer x = (-2,1,0,0). If you try to
	solve Ax = (1,1,1,1) you get this:
		x = (300000000000, -166666666666, 33333333333.4,
		100000000000)
	It looks like crap, and it doesn't solve Ax = b, but if you
	normalize the vector, or at least divide by 1E11 to bring it
	down to size, it does solve Ax = 0! The original x here does
	not solve Ax = 0 because of round off: when you evaluate Ax you
	get subtraction of large numbers which are supposed to cancel
	each other.
	
	If you take x and scale it down to (3, -1.66666666666,
	.333333333334, 1) you do get proper answers, and you can see
	that this solution is equivalent to (9,-5,1,3): much nicer.
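
	On the 28S this whole experiment is just a couple of
	operations, assuming the matrix is stored in 'A' (the notes in
	parentheses here and below are not part of what you type):

		[ 1 1 1 1 ] A /          ( solve Ax = (1,1,1,1) )
		DUP 1 GET XPON ALOG /    ( scale it back down )

	The second line is the NORM program given further down,
	written out flat.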

	Well that solution is only slightly better than the trivial
	solution x = 0. What you really want is a way to get the entire
	solution space. It is interesting to note that the x vector
	returned depends on the value of b. Here's a general way of
	getting the solution space:

	Take the standard basis vectors for R^N, e1 = (1,0,0,...),
	e2 = (0,1,0,...), ..., eN = (0,0,...,1), as the values of b,
	and use them to find N different x vectors. Some of the x MAY
	be actual solutions to Ax = e_k, and NOT solutions to Ax = 0.
	The rest will be solutions to Ax = 0. You should then
	"normalize" the x in a consistent way. I'm using "normalize"
	in a vague sense: I don't necessarily mean x / |x|. Any
	reasonable method should work in practice, since a multiple of
	a solution is also a solution, BUT large vectors may cause
	round off errors.
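
	Here is one way to automate the loop, an untested sketch
	(NULSP is just my name for it). It takes the matrix from the
	stack, solves against each basis vector, scales each answer by
	the power of ten of its first element (so it will choke if
	that element is zero), and leaves the N candidate vectors on
	the stack:

	NULSP
	<< DUP SIZE 1 GET -> a n
		<< 1 n FOR k
			{ n } 0 CON k 1 PUT      ( build e_k )
			a /                      ( solve a*x = e_k )
			DUP 1 GET XPON ALOG /    ( scale by 10^XPON(x[1]) )
		NEXT
		>>
	>>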

	Example:
		for A you get (these were normalized by dividing the
		vector by ALOG(XPON(x[1])), where x[1] is the first
		element of the vector):
	
		x1 = (3,4,1,0)
		x2 = (-1.0000000002, -1.33333333333, -.333333333334, 0)
		(a multiple of (3,4,1,0))
		x3 = (-9.99999999987, -13.3333333334, -3.33333333334, 0)
		(also a multiple of (3,4,1,0))
		x4 = (2,-3,0,1)
		
	You can see that the first three are just constant multiples of
	one another. You are left with (3,4,1,0) and (2,-3,0,1). If you
	perform row reduction on A, you will find that these vectors
	are indeed a basis for the solution space. Intuitively this
	method seems to make sense. Once you have calculated all x, you
	just need to find the relationship between the vectors to throw
	out the linearly dependent ones.
	
	I believe that if you start backwards (with eN instead of e1)
	you will get the linearly independent vectors first, but this
	is just a hunch.

	Getting back to the non-homogeneous problem:  With Ax =
	(1,2,1,0) there was the solution (-2,1,0,0). Now you have ALL
	the solutions: x = (-2,1,0,0) + s(3,4,1,0) + t(2,-3,0,1) where
	s and t are arbitrary.
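
	You can check this on the calculator with any s and t, say
	s = 7 and t = 3; the product should come back (1,2,1,0) up to
	round off:

		[ -2 1 0 0 ]
		[ 3 4 1 0 ] 7 * +        ( add s*(3,4,1,0) )
		[ 2 -3 0 1 ] 3 * +       ( add t*(2,-3,0,1) )
		A SWAP *                 ( multiply by A )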
	
Eigenvalues

	Here is a recap of a method to find real eigenvalues:
	put your matrix in 'A'
	put the identity matrix in 'I'
	put 'EIG(L)' in 'EQ'
	and EIG is
	<< -> e
		<< A e I * - DET
		>>
	>>
	Now use the SOLVER to find the root, which is an eigenvalue.
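
	In other words, as a sketch (assuming A is already stored; IDN
	builds an identity matrix from a dimension):

		A SIZE 1 GET IDN 'I' STO    ( identity the same size as A )
		'EIG(L)' STEQ               ( store the equation )

	Then select L in the SOLVER menu and solve from a nearby
	guess.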
	
	Here is a sample NORM function, used to bring the solution
	down to something reasonable.
	NORM
	<< DUP 1 GET XPON 
	ALOG /
	>>
	The first element of the vector must not be zero.
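
	For example, the ugly vector from before comes right back down
	to size:

		[ 300000000000 -166666666666 33333333333.4 100000000000 ]
		NORM

	returns [ 3 -1.66666666666 .333333333334 1 ].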
	
Complex Eigenvalues:
	
	Unfortunately, the SOLVER can't find complex roots, so use any
	technique that DOES find them. You could try Muller's method,
	but it is more complicated than you need. Just use good ol'
	Newton's method: it DOES work for complex roots. Here is a
	very simple implementation:
	
	This method works for any function, and you do not need to know
	the derivative of the function, since this calculates it
	numerically. This is important for the eigenvalue problem,
	since finding the characteristic polynomial by hand is pretty
	tedious.

	F is a program that accepts an argument from the stack, and
	returns one to the stack.  
	
	NEWT takes a complex object from the stack (your initial
	guess) and leaves an improved guess. You run NEWT repeatedly
	until you see that it has converged. If you suspect that there
	is a complex root, your initial guess should be complex (i.e.
	it shouldn't be (1,0)): if F is a polynomial with real
	coefficients, a real guess will NEVER go into the complex
	plane, it will just oscillate on the real axis.
	
	<< DUP F -> x f             ( x = guess, f = F(x) )
		<< x f OVER C->R    ( split x into re and im )
		2 DX SWAP 2 DX SWAP ( nudge im, then re )
		R->C DUP F f -      ( x' near x; F(x') - f )
		SWAP x - /          ( slope = (F(x')-f)/(x'-x) )
		/ -                 ( Newton step: x - f/slope )
		>>
	>>
	DX
	This takes two real numbers:
	2:      x
	1:    inc
	x is the data value; inc controls the precision of the
	returned value. DX returns x +
	some_number_small_relative_to_x_and_dependent_on_inc.
	Basically, if you add too small an increment to x, you will
	lose precision in the arithmetic and the derivative will be
	weird; if you add too big an increment, the arithmetic will be
	OK, but your point will be too far away to get an accurate
	value of the derivative. Calculating the derivative
	numerically means subtracting things that are very close
	together, and then dividing two very small numbers. Yecch!
	
	DX is:
	<< -> n
		<< DUP XPON NEG 11
		+ NEG n / FLOOR ALOG    ( step = 10^FLOOR((XPON(x)-11)/n) )
		+                       ( return x + step )
		>>
	>>
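
	For example, 1 2 DX: XPON(1) = 0, so the step is
	ALOG(FLOOR((0-11)/2)) = ALOG(-6) = .000001, and DX returns
	1.000001. That uses about half of the 28S's 12 digits, which
	is a reasonable compromise for a forward difference.
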
	With a reasonable guess, you can converge to a solution after
	around 5 to 8 iterations. This program can run into 0/0 or
	division errors if the slope of F comes out zero at the
	current guess.
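
	To hunt for complex eigenvalues, F can simply hand its
	argument to EIG from the recap above (the 28S does complex
	matrix arithmetic, so the DET inside EIG is happy with a
	complex L):

		<< EIG >> 'F' STO
		(.5,.5) NEWT NEWT NEWT NEWT NEWT

	Keep pressing NEWT until the display stops changing; what it
	converges to is a (complex) eigenvalue.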

Eigenvectors
	
	Well, now that you can find all the eigenvalues, you just
	evaluate A - lI, where l is an eigenvalue. This gives you a
	singular matrix B, and you can solve Bx = 0 using the previous
	methods. The eigenvalues are a clue as to what kind of
	eigenvectors there are: an eigenvalue repeated n times gives
	an eigenspace of dimension at most n (it can be less; such
	matrices are called defective). So if you have no repeated
	eigenvalues, you don't have to bother finding all the
	solutions to Bx = e_k, as any nonzero solution will be the
	eigenvector.
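
	Putting it together, a sketch (EVEC is my name for it; NULSP
	is the basis-vector loop from earlier, and A and I are the
	stored matrix and identity):

	EVEC
	<< A SWAP I * -      ( B = A - l*I, l from the stack )
		NULSP        ( candidate null-space vectors of B )
	>>

	Feed it an eigenvalue and fish the eigenvectors out of what
	NULSP returns.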

Conclusion

	Well, as I said, these methods are pretty simple, but you may
	not have been aware of them. I would like feedback on this,
	particularly from people who know about linear algebra: all I
	have is basic university linear algebra, so I don't know where
	these methods will fall apart. I haven't embellished this
	stuff, but I'll probably work on some routines to automate
	finding the entire solution space, testing for linear
	dependence, Gram-Schmidt orthogonalization, etc. I would
	appreciate any comments from people who found this useful. If
	anyone has any problems/questions I can post some more examples.