[comp.parallel] iWARP and neural nets

zenith@ensmp.fr (Steven Ericsson Zenith) (02/06/91)

yarri@rainier.eng.ohio-state.edu (Douglas Yarrington) says about iWARP:

   I'm interested in seeing how this type of processor encroaches on the
   realm of fine-grained neural networks, so I'd like to find some
   references, preferably recent, on the hardware structure. Could
   someone point me in the right direction please.

I'd say this type of processor does not at all encroach on the realm of
fine-grained neural networks, for pretty much the same reason the
transputer doesn't. When you talk about Neural Net silicon you need
something closer to a memory device, not a CPU. Look at:

  "Analog VLSI and Neural Systems" by Carver Mead, 
  published by Addison-Wesley. 1989

And check out Igor Aleksander's excellent contribution "Myths and
Realities about Neural Computing Architectures" (the first chapter) to:

  "Parallel Processing and Artificial Intelligence"
  Eds. Mike Reeve and Steven Ericsson Zenith
  published by John Wiley. 1989.

Steven
--
Steven Ericsson Zenith * Email: zenith@ensmp.fr  *    Fax:(1)64.69.47.09
                       | Francais:(1)64.69.47.08 | Office:(1)64.69.48.52
Center for Research in Computer Science - Centre de Recherche en Informatique
	     CRI - Ecole Nationale Superieure des Mines de Paris
	       35 rue Saint-Honore 77305 Fontainebleau France

vlo@xydeco.siemens.com (John Vlontzos) (02/06/91)

In article <12929@hubcap.clemson.edu> zenith@ensmp.fr (Steven Ericsson Zenith) writes:
>yarri@rainier.eng.ohio-state.edu (Douglas Yarrington) says about iWARP:
>
>   I'm interested in seeing how this type of processor encroaches on the
>   realm of fine-grained neural networks, so I'd like to find some
>   references, preferably recent, on the hardware structure. Could
>   someone point me in the right direction please.

>I'd say this type of processor does not at all encroach on the realm of
>fine-grained neural networks, for pretty much the same reason the
>transputer doesn't. When you talk about Neural Net silicon you need
>something closer to a memory device, not a CPU. Look at:
>
>  "Analog VLSI and Neural Systems" by Carver Mead, 
>  published by Addison-Wesley. 1989
>

>Steven



Why do you say that you need something closer to a memory device?

ANNs can be implemented as a cascade of matrix-vector multiplications,
and systolic arrays like the iWarp and others are very efficient at
that. By formulating ANN algorithms in matrix terms you avoid the need
for global communication, since matrix multipliers have only local
connections.
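
To make that concrete, here is a small present-day Python/NumPy sketch
(mine, not taken from any of the papers cited below; the layer sizes and
the sigmoid nonlinearity are arbitrary assumptions) of a forward pass
written purely as matrix-vector products, which is the form that maps
onto a systolic array:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(weights, biases, x):
    """Cascade of matrix-vector multiplications: a = f(W a + b) per layer."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)   # each layer is one matrix-vector product
    return a

rng = np.random.default_rng(0)
sizes = [8, 16, 4]                       # assumed layer widths
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes, sizes[1:])]
biases  = [np.zeros(m) for m in sizes[1:]]
print(forward(weights, biases, rng.standard_normal(sizes[0])))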

To see how it is done, take a look at Proc. ICNN '88, Proc. IJCNN '89,
Proc. ICASSP '89 and '90, and IEEE Trans. ASSP, Dec. '89.

By the way, the only disadvantage the transputer has is its serial
links, which make computation and communication unbalanced for systolic
applications (remember that in systolic arrays, for every computation
you perform one or more data transfers to a neighbor).
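
To illustrate that balance point, here is a toy simulation (again my own
illustration, using a simple ring of cells rather than Warp's actual
layout) of a systolic matrix-vector product: every time step pairs
exactly one multiply-accumulate per cell with one transfer to a
neighbor, so the links must keep up with the ALUs:

import numpy as np

def systolic_matvec(W, x):
    n = len(x)
    held = list(x)                     # cell i initially holds x[i]
    acc = [0.0] * n
    for step in range(n):
        for i in range(n):             # one MAC per cell per step
            j = (i + step) % n
            acc[i] += W[i][j] * held[i]
        held = held[1:] + held[:1]     # one transfer to the neighbor per cell
    return np.array(acc)

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4))
x = rng.standard_normal(4)
assert np.allclose(systolic_matvec(W, x), W @ x)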

John Vlontzos
Siemens Corp. Research
Princeton N.J

Pasi.Koikkalainen@lut.fi (Pasi Koikkalainen) (02/06/91)

zenith@ensmp.fr (Steven Ericsson Zenith) writes about iWARP:

>I'd say this type of processor does not at all encroach on the realm of
>fine-grained neural networks, for pretty much the same reason the
>transputer doesn't. When you talk about Neural Net silicon you need
>something closer to a memory device, not a CPU. Look at:

>  "Analog VLSI and Neural Systems" by Carver Mead, 
>  published by Addison-Wesley. 1989

Yes, all this is true, but it is only one way of looking at the problem.
I agree that neural networks should be seen and specified as low-level,
massively parallel architectures, but simulation experiments are not
practical without a model architecture which allows the description of
the neural network to be given in a structured environment.

Such a framework also makes it possible to use MIMD-type multiprocessors
(or multicomputers) for artificial neural computing. In fact, the idea of
coarse-grained computing adapts quite nicely to neural networks. The
basic computing unit is then a layer of neurons, a slab (Hecht-Nielsen),
or any other similar grouping of neurons. One must also remember that
every application of neural networks requires a system-level design,
where several networks are competing or co-operating.
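
As a rough sketch of that mapping (my own illustration; the queue-based
"channels" merely stand in for message passing between processors, and
the layer widths are arbitrary), the unit of parallelism is a whole
layer, each layer owns its weights, and neighboring layers exchange only
an activation vector:

import queue, threading
import numpy as np

class LayerNode(threading.Thread):
    """One layer of neurons, owning its weights, mapped to one processor."""
    def __init__(self, W, inbox, outbox):
        super().__init__()
        self.W, self.inbox, self.outbox = W, inbox, outbox
    def run(self):
        x = self.inbox.get()            # receive activations from predecessor
        y = np.tanh(self.W @ x)         # local computation only
        self.outbox.put(y)              # forward activations to successor

rng = np.random.default_rng(2)
sizes = [8, 16, 4]                      # assumed layer widths
chans = [queue.Queue() for _ in sizes]
nodes = [LayerNode(rng.standard_normal((m, n)), chans[i], chans[i + 1])
         for i, (n, m) in enumerate(zip(sizes, sizes[1:]))]
for node in nodes:
    node.start()
chans[0].put(rng.standard_normal(sizes[0]))   # inject the input vector
print(chans[-1].get())                        # collect the network's output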

There is not much hope that we can really produce full neural systems
in VLSI at this point; the silicon area required is too large. So
parallel computing is the next best alternative.

See for example:
Ghosh, J. and Hwang, K., "Mapping Neural Networks onto Message-Passing
 Multicomputers," Journal of Parallel and Distributed Computing,
 6, 1989, pp. 291-330.

Also, I have one paper to appear (the manuscript is from 1989):
Koikkalainen, P. and Oja, E., "The Carelia Simulator: A Development and
Specification Environment for Neural Networks," in Advances in Control
Networks and Large Scale Parallel Distributed Processing Models, Ablex,
NJ, 1991.

A good motivation for a model architecture is also given in:
Hecht-Nielsen, R., Neurocomputing, Reading, MA: Addison-Wesley Publishing
Company, 1990.

-- 

+--  Pasi Koikkalainen;       Lappeenranta University of Technology       
+--  P.O.Box 20, 53851 LPR, Finland;         Phone: +358 53 5743434         
+--  e-mail: pako@neuronstar.it.lut.fi or koikkalainen@ltkka.lut.fi

vlo@xydeco.siemens.com (John Vlontzos) (02/13/91)

Backpropagation, at least, was implemented very efficiently on the
original Warp. I think they achieved 80% of peak performance with this
algorithm:

Pomerleau, D.A., et al., "Neural Network Simulation at Warp Speed: How We
Got 17 Million Connections Per Second," IEEE Conference on Neural
Networks, July 1988.
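
For a feel of why backpropagation fits such a machine, here is a compact
NumPy sketch (mine, not the Warp implementation; the network shape,
squared-error loss and learning rate are illustrative assumptions)
showing that both passes and the weight updates are dense linear algebra:

import numpy as np

rng = np.random.default_rng(3)
W1, W2 = rng.standard_normal((16, 8)), rng.standard_normal((4, 16))
x, target = rng.standard_normal(8), rng.standard_normal(4)
lr = 0.01

# forward pass: two matrix-vector products
h = np.tanh(W1 @ x)
y = W2 @ h

# backward pass: error propagated by a (transposed) matrix product
dy = y - target                    # gradient of 0.5*||y - target||^2
dh = (W2.T @ dy) * (1.0 - h**2)    # tanh derivative applied elementwise

# weight updates: outer products, again dense linear algebra
W2 -= lr * np.outer(dy, h)
W1 -= lr * np.outer(dh, x)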

--
 /\/\   /\/\  Marty Marra, Woods Hole Oceanographic Institution, Blake Rm 109
/    \ /    \ Woods Hole, MA 02543 "marra@jargon.whoi.edu" (508)457-2000x3234