finton@ai.cs.wisc.edu (David J. Finton) (03/19/90)
(Patrick van der Smagt) writes:
>Does anyone have any references or results about the implementation of
>neural networks on the Connection Machine?  What about transputers?

What are transputers?

-David Finton
robert@aerospace.aero.org (Bob Statsinger) (03/20/90)
In article <9968@spool.cs.wisc.edu> finton@ai.cs.wisc.edu (David J. Finton) writes:
>(Patrick van der Smagt) writes:
>>Does anyone have any references or results about the implementation of
>>neural networks on the Connection Machine?  What about transputers?
>
>What are transputers?

Transputers are a family of microprocessors designed with parallel
distributed processing in mind. They consist of a RISC-style processor
with microcoded support for communicating processes. Transputers
communicate across high-speed bidirectional links; data is transmitted
without direct CPU participation.

At USC a transputer network has been used for an implementation of
Malsburg's dynamic link architecture for face recognition. I don't know
if the resulting paper has been widely distributed, but you could
probably write Dr. Malsburg (malsburg@pollux.usc.edu) and ask him.

--
Bob Statsinger			robert@aerospace.aero.org

The employers expressed herein are strictly mine and are not
necessarily those of my opinion's....uh..er...whatever...
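[The transputer's programming model (occam) is based on communicating
sequential processes: processes rendezvous on point-to-point channels
mapped onto the hardware links. A rough serial analogue, sketched in
Python with threads and a blocking queue standing in for a link; on
real hardware the transfer is handled by the link engines without CPU
involvement, so this only models the synchronization, not the speed.]

```python
# Illustrative sketch only: two "processes" exchanging words over a
# point-to-point "link", modeled as a one-slot blocking queue.
import queue
import threading

link = queue.Queue(maxsize=1)   # one-word channel between the processes

def producer():
    for value in range(3):
        link.put(value)         # blocks until the other side takes it

results = []

def consumer():
    for _ in range(3):
        results.append(link.get())

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)                  # -> [0, 1, 2]
```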
stiv@stat5.rice.edu (david n stivers) (03/20/90)
>>Does anyone have any references or results about the implementation of
>>neural networks on the Connection Machine?  What about transputers?
>
>What are transputers?

These are CPU chips manufactured by Inmos, Ltd. (a UK firm) that have
4 I/O ports and were designed specifically for use in building parallel
computers. A very popular implementation is on cards which reside inside
a PC; the PC acts as host (terminal/file server/compiler). I believe
that the majority use a distributed memory architecture.

david n stivers
stiv@rice.edu
tedwards@nrl-cmf.UUCP (Thomas Edwards) (03/21/90)
>(Patrick van der Smagt) writes:
>>Does anyone have any references or results about the implementation of
>>neural networks on the Connection Machine?  What about transputers?

A Technical Report has been produced by Thinking Machines concerning
various implementations of backpropagation on the Connection Machine.
Contact David Singer at Thinking Machines.

I myself have implemented backprop on the CM. If you think a bit about
how to get the most out of the parallel structure, you can create very
speedy learning implementations of neural networks on the CM. My
implementation used TMC-written matrix algebra routines which utilized
a very fast systolic array routine. I happened to need very large nets
with few training exemplars.

If you have a lot of training data, you could put one network and
training exemplar on each processor, run all 64K training exemplars at
once, and then add up all of the weight deltas using the systolic array
addition commands. The throughput can be amazing!

-Tom
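[The one-network-and-one-exemplar-per-processor scheme above can be
sketched serially in NumPy. Each row below plays the role of one
processor's exemplar; the sum over the exemplar axis stands in for the
systolic-array addition of the local weight deltas. Network sizes,
names, and the learning rate are illustrative, not from the post, and
the real CM version would of course use the TMC routines, not NumPy.]

```python
# Data-parallel backprop sketch: same weights "on every processor",
# one exemplar per row, weight deltas summed across exemplars.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 8, 2
n_exemplars = 64        # on a full CM-2 this could be up to 64K
lr = 0.1

# One shared copy of the network weights.
W1 = rng.normal(scale=0.5, size=(n_in, n_hid))
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))

# One training exemplar (input X, target T) per "processor" (per row).
X = rng.normal(size=(n_exemplars, n_in))
T = rng.normal(size=(n_exemplars, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass: every processor evaluates its own exemplar at once.
H = sigmoid(X @ W1)      # hidden activations, one row per exemplar
Y = sigmoid(H @ W2)      # outputs

# Backward pass: per-exemplar error signals (sigmoid derivative).
dY = (Y - T) * Y * (1.0 - Y)
dH = (dY @ W2.T) * H * (1.0 - H)

# Each processor's local delta is an outer product; summing them over
# the exemplar axis is the global "add up all of the weight deltas".
dW2 = np.einsum('bh,bo->ho', H, dY)
dW1 = np.einsum('bi,bh->ih', X, dH)

W2 -= lr * dW2
W1 -= lr * dW1
```

Summing the per-exemplar deltas before updating makes this exactly one
step of batch gradient descent, which is why the parallel and serial
versions compute the same weights.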