[comp.ai.neural-nets] want info on Neural Nets on Connection Machine

jdb@arp.anu.edu.au (John Barlow) (01/24/91)

Could people who have done some work on the Connection Machine
please post me some of your ideas/feelings/intuitions/discoveries
using the CM for neural net study ?  I am looking after a new CM
and have several users interested in starting neural net work.

Any pointers, references to articles or papers gratefully received.

Thanks.
-- 
jdb = John Barlow, Parallel Computing Research Facility,
Australian National University, I-Block, PO Box 4, Canberra, 2601, Australia.
email = jdb@arp.anu.edu.au
[International = +61 6, Australia = 06] [Phone = 2492930, Fax = 2490747]

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (01/25/91)

In article <1991Jan24.090142@arp.anu.edu.au> jdb@arp.anu.edu.au (John Barlow) writes:
>Could people who have done some work on the Connection Machine
>please post me some of your ideas/feelings/intuitions/discoveries
>using the CM for neural net study ?  I am looking after a new CM
>and have several users interested in starting neural net work.

I have written a couple of backpropagation programs on the CM at the
U.S. Naval Research Lab.  My fastest program uses the CMSSL Matrix
Multiplication routine to perform fast systolic-array matrix
multiplication.  It is easy to describe batch backprop in terms of
linear algebra.
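[To make the "batch backprop as linear algebra" point concrete, here is a
minimal serial NumPy sketch in that spirit.  The network sizes, learning
rate, and variable names are all illustrative; this is not the actual
CMSSL program, just the same forward/backward passes written so that
every step is a matrix multiply over the whole batch. -ed.]

```python
import numpy as np

rng = np.random.default_rng(0)

# B exemplars, I inputs, H hidden units, O outputs (sizes are made up).
B, I, H, O = 20, 4, 8, 2
X = rng.standard_normal((B, I))          # inputs, one exemplar per row
T = rng.standard_normal((B, O))          # targets

W1 = rng.standard_normal((I, H)) * 0.1   # input -> hidden weights
W2 = rng.standard_normal((H, O)) * 0.1   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

mse0 = np.mean((sigmoid(X @ W1) @ W2 - T) ** 2)   # loss before training

lr = 0.1
for _ in range(200):
    # Forward pass: each layer is one matrix multiply over the batch.
    Hid = sigmoid(X @ W1)                 # (B, H)
    Y = Hid @ W2                          # (B, O), linear output layer

    # Backward pass: deltas and gradients are also matrix products,
    # which is exactly what makes a fast matmul routine pay off.
    dY = Y - T                            # (B, O)
    dHid = (dY @ W2.T) * Hid * (1 - Hid)  # (B, H)
    W2 -= lr * Hid.T @ dY / B
    W1 -= lr * X.T @ dHid / B

mse = np.mean((sigmoid(X @ W1) @ W2 - T) ** 2)    # loss after training
```

On a machine like the CM the matrix products above are the only
operations that need to be parallel, which is why the whole algorithm
reduces to calls into the matrix-multiply library.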

Matt Singer at TMC, though, lays claim to the fastest backprop program
in terms of connection updates per second.  He uses each processor
to handle a separate exemplar and trains them all in parallel.
Each processor then has a copy of the network weights, and
updates them by scan-adding up all the weight changes.
This is wonderful if you have thousands of exemplars.  I had around
20-30, and I found the matrix algebra method more appropriate.
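[A rough serial simulation of the exemplar-parallel scheme described
above, for readers without a CM: each "processor" holds its own copy of
the weights and one exemplar, computes a weight change locally, and the
changes are then summed across processors (the scan-add step) and
applied identically everywhere, keeping the copies in sync.  The
single-layer network and all names/sizes here are illustrative, not
Singer's actual code. -ed.]

```python
import numpy as np

rng = np.random.default_rng(1)

# P "processors", one exemplar each; a single-layer net for brevity.
P, I, O = 8, 3, 1
X = rng.standard_normal((P, I))          # one exemplar per processor
T = rng.standard_normal((P, O))
W = rng.standard_normal((I, O)) * 0.1    # every processor holds a copy

mse0 = np.mean((X @ W - T) ** 2)         # loss before training

lr = 0.1
for _ in range(100):
    local_dW = np.zeros((P, I, O))
    for p in range(P):                   # conceptually simultaneous
        x, t = X[p:p+1], T[p:p+1]
        y = x @ W                        # local forward pass
        local_dW[p] = -lr * x.T @ (y - t)  # local weight change
    # Scan-add step: sum the per-processor weight changes, then every
    # processor applies the same total update, so copies stay identical.
    W = W + local_dW.sum(axis=0) / P

mse = np.mean((X @ W - T) ** 2)          # loss after training
```

The inner loop over p is what the CM runs in parallel; the only
interprocessor communication per sweep is the single reduction, which is
why the scheme scales with the number of exemplars rather than the
network size.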

The CM is a wonderful machine, but to do really good work on it you
need expert parallel programmers.  And if you want networks large
enough to actually run faster on the CM than on a mini-supercomputer,
they are probably too large to ever converge in a reasonable time
during learning.

The CM works great with nearest-neighbor communication (such as in
fluid-dynamics problems), but for neural nets I think it is OK,
not the best.

-Thomas Edwards