[comp.ai.neural-nets] Gaussian elimination on sparse matrices

myke@gatech.edu (Mike Rynolds) (04/04/89)

If A represents a series of input state vectors and B is the corresponding list
of output state vectors, then in the equation AX = B, X is a neural net which
can be trained simply by setting it equal to A^-1 * B. Since A and B consist of
1's and 0's, and mostly 0's, large matrices can be kept manageable if they are
treated as sparse.
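	A minimal sketch of the idea, assuming NumPy is available (the patterns
below are invented; the commented-out line covers the non-square case):

# Illustrative sketch only: train a linear "net" X so that A X = B.
# When A is square and nonsingular, X = A^-1 * B; otherwise the
# pseudoinverse gives the least-squares fit.
import numpy as np

A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]], dtype=float)   # rows = input state vectors
B = np.array([[1, 0],
              [0, 1],
              [1, 1]], dtype=float)      # rows = matching output state vectors

X = np.linalg.solve(A, B)                # X = A^-1 * B for square, nonsingular A
# X = np.linalg.pinv(A) @ B              # least-squares version for general A

print(np.allclose(A @ X, B))             # True: the net reproduces every output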
	I have only been able to find Gaussian elimination algorithms for sparse
systems of linear equations of the form Ax = b, where x and b are vectors.
Can anyone direct me to a Gaussian elimination algorithm for sparse
systems of the form AX = B?
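	For illustration, any sparse Ax = b solver can also be applied here, since
AX = B is just Ax_j = b_j for each column b_j of B: factor A once, then solve
per column. A minimal sketch, assuming SciPy's sparse routines are available
(the test matrix is random and purely illustrative):

# Illustrative sketch: reuse one sparse LU factorization of A for every
# column of B, so no special AX = B algorithm is needed.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
A = (10.0 * sp.identity(n, format="csc")
     + sp.random(n, n, density=0.01, format="csc"))   # sparse, diagonally dominant
B = np.random.rand(n, 5)                               # five right-hand-side columns

lu = spla.splu(A)                                      # sparse LU factorization, done once
X = np.column_stack([lu.solve(B[:, j]) for j in range(B.shape[1])])

print(np.allclose(A @ X, B))                           # each column solved correctly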
-- 
Mike Rynolds
School of Information & Computer Science, Georgia Tech, Atlanta GA 30332
uucp:	...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!myke
Internet:	myke@gatech.edu

hwang@taipei.Princeton.EDU (Jenq-Neng Hwang) (04/06/89)

Instead of Gaussian elimination type algorithms for solving sparse systems, row-action
methods have been proposed: iterative procedures suitable for solving linear systems
without any structural assumptions about sparseness.  One famous example is the
Kaczmarz projection method, which can be used to interpret the dynamic behavior of the
later stages of back-propagation learning and has been widely used in image
reconstruction applications.
A good tutorial paper is:

Yair Censor, "Row-Action Methods for Huge and Sparse Systems and Their Applications,"
SIAM Review, Vol. 23, No. 4, pp. 444-466, October 1981.
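
For concreteness, a minimal sketch of the cyclic Kaczmarz iteration, assuming NumPy
(the small system is invented; for AX = B it would be applied to each column of B):

# Illustrative sketch: each step projects the current estimate onto the
# hyperplane defined by a single row of A, so only one row is touched at
# a time.  Converges for consistent systems.
import numpy as np

def kaczmarz(A, b, sweeps=100):
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):              # cycle through the rows of A
            a_i = A[i]
            x += (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 4.0])
print(kaczmarz(A, b))                            # converges to [1.0, 1.0]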

J. N. Hwang