[comp.ai.neural-nets] SUMMARY: Neural Networks on SIMD-Machines

prechelt@i41s14.ira.uka.de (Lutz Prechelt) (06/20/91)

Some time ago I posted the request given below. This is the summary of
the answers I got.

Request:
------------------------------------------------------------------------
Newsgroups: comp.parallel,comp.ai.neural-nets
Subject: Neural Networks on SIMD-Machines
Keywords: parallel, SIMD, neural network, methodology

Does anybody do any systematic research on implementations of
Neural Networks on SIMD machines?

I am not thinking of the simple kinds of problems that have, of
course, long been solved, such as a single net with backpropagation
(for instance the work of Zhang or Rosenberg/Blelloch).

What I am thinking of is a complete methodology for complex NN
applications:
- how to lay out irregular nets
- how to train or execute multiple nets of different types in parallel
- how to organize memory usage cleverly
- if I/O is necessary, how to organize it best.
- how to integrate the NNs with the rest of an application on a
  parallel machine.

I know that there is some work on these issues for MIMD machines
(especially Transputer Arrays), but for SIMD many problems are very
different.
------------------------------------------------------------------------


Answers:

------------------------------------------------------------------------

From:     David Zirl (GC) <dzirl@pica.army.mil>

Could you let me know what you find out about NNs on SIMD machines?

Thanks

David 

*************************************************************************
* Dr. David Zirl	Army High Performance Computing Research Center *
* ARDEC			Computer Sciences Corporation                   *
* USAISC-Dover                         		office: (201) 724-4590  * 
* ASQNC-APT-OT, BLDG 350-N		        fax:    (201) 724-4172  *
* Picatinny Arsenal, NJ 07806-5000         e-mail: dzirl@pica.army.mil  *
*************************************************************************

------------------------------------------------------------------------

Date: Tue, 11 Jun 91 18:03:13 -0700
From: Trent Lange <lange@cs.ucla.edu>
Organization: UCLA Artificial Intelligence Laboratory

I talk about just such problems (esp. on the Connection Machine) in:

Lange, T. (1990).  Simulation of Heterogeneous Neural Networks on Serial
and Parallel Machines.  Parallel Computing 14, 287-303.

I'd be interested in seeing whatever other responses you get.

Good luck,

- Trent Lange

------------------------------------------------------------------------

Date: Wed, 12 Jun 91 11:56:31 EDT
From: lesher@ncifcrf.gov

1) I've heard Simon Kasif, who works on parallel algorithm theory, say
that if you want decent performance from the CM and other SIMD (?MIMD
too?) machines, you can't work at the level of theoretical models; you
have to do implementations with that machine's idiosyncrasies in mind.
2) I haven't found anything beyond Zhang and Rosenberg/Blelloch, and
this has forced me to dive in myself.  Nor have other people who have
put more limited queries (than yours) to the net posted any
significant results.  I will be very interested to hear what you learn.

I'm developing a Hopfield-style NN on the CM to predict RNA folding.

{Sarah Lesher}
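
{Aside: a synchronous Hopfield update is one matrix-vector product
plus a threshold, which is exactly the kind of regular, fully parallel
step a SIMD machine executes well. A minimal NumPy sketch of my own;
the size and random weights are invented, and this is not Sarah's code:}

  import numpy as np

  rng = np.random.default_rng(1)
  n = 64
  W = rng.standard_normal((n, n))
  W = (W + W.T) / 2                  # symmetric weight matrix
  np.fill_diagonal(W, 0)             # no self-connections
  s = rng.choice([-1, 1], size=n)    # bipolar state vector

  for _ in range(10):                # synchronous update steps
      s = np.where(W @ s >= 0, 1, -1)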

------------------------------------------------------------------------

From: Ephraim Vishniac <ephraim@think.com>
Date: Wed, 12 Jun 91 15:43:41 EDT
Organization: Thinking Machines Corporation, Cambridge MA, USA

I don't, but I suggest you inquire of cmns-neural-nets@think.com, a
mailing list of people doing neural-net work on the CM-2.

   {I asked back:}
   The above address is probably the mailing list itself.
   What is the request address? Could you send it to me, or put me
   on the list and drop me a note about it?

I took a look, and cmns-neural-nets@think.com is actually just the
in-house portion of the mailing list. The full list is
cmns-neural-nets-ext@think.com. I added you to that list, so you
should be all set.

For more information about mailing lists relating to particular
interests on the Connection Machine, I think your best bet is to
contact cmns-manager@think.com. CMNS is the Connection Machine Network
Server, a machine we provide free of charge to the Internet community
to encourage the development of diverse CM applications. 

------------------------------------------------------------------------

Date: Wed, 12 Jun 91 09:03:05 +0200
From: Per Hammarlund <perham@nada.kth.se>

Yes, I do. I am looking into implementing recurrent NNs on the
Connection Machine. I could send you a few earlier papers on
biologically realistic neural networks on the CM and also an early
study (we have moved on and improved it) on artificial NNs on the CM.

I am looking into pretty much exactly these issues and a few more.

The problem is inherently much harder {on SIMD}, since it is "hard" to
keep all of the machine working at the same time without wasting memory.
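
{To make the tradeoff concrete: padding every unit's connection list
to the maximum fan-in lets all processors execute the same
instruction, but the padding is pure wasted memory. A minimal NumPy
sketch of my own; the fan-ins and weights are invented:}

  import numpy as np

  # Irregular net: every unit has a different fan-in.
  fanin = np.array([3, 7, 2, 7, 5])          # invented fan-ins
  n, fmax = len(fanin), fanin.max()

  # SIMD-style layout: one row per unit, padded to the maximum fan-in,
  # so each step is the same multiply-accumulate on every processor.
  w = np.random.randn(n, fmax)               # weights (padded)
  src = np.random.randint(0, n, (n, fmax))   # source-unit indices (padded)
  mask = np.arange(fmax) < fanin[:, None]    # True = real connection

  act = np.random.randn(n)                   # current activations

  # One fully parallel update step: gather, multiply, mask, reduce.
  net_in = (w * act[src] * mask).sum(axis=1)

  # The price of keeping every processor busy:
  print("fraction of weight memory wasted:", 1 - mask.mean())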

Could you please tell me a little bit about what you are doing?


per

Per Hammarlund
SANS -- Studies of Artificial Neural Systems
NADA -- Department of Numerical Analysis and Computing Science
Royal Institute of Technology
S-100 44 Stockholm
SWEDEN

{and in further conversation:}

{from me:}
   We are currently trying to implement the "Linked Predictive
   Neural Networks" speech recognition architecture on our
   4096-processor MasPar MP-1.

{from Per:}
Do you have a report on the algorithm?
   
Have you read... where is it? I can't find it now, but I have a
report on implementing NNs on a MasPar. I will dig it up and send you
the reference. I think it was from Boeing or some other airplane
manufacturer. I will dig a little.

------------------------------------------------------------------------

Date:  Thu, 13 Jun 91 12:59:42 +0200
From: neschen@thp.uni-koeln.de

from : Martin Neschen
       Institut fuer Theoretische Physik
       der Universitaet zu Koeln
       Zuelpicher Str. 77
     D-5000 Cologne 41, R.F.A
internet: <neschen@thp.uni-koeln.de>


Hello Lutz,

I have just written a paper on an efficient SIMD architecture
(consisting essentially of nothing but DRAMs and simple Boolean
processors) and submitted it to the HICSS-25 conference in Hawaii,
Jan. 1992. It contains many applications of discrete physical models,
in particular attractor NNs. I optimized the structure specifically
for NNs, because from January '92 I will very probably be developing
a VLSI architecture for the processors in an NN architecture group at
the Ecole Polytechnique, Palaiseau.

I will send you a copy of the paper right away. I would be glad if
more groups looked into the SIMD direction, because on problems that
do not require locally differing decisions I consider such
architectures considerably superior. At the moment, apart from the
CM, mainly MIMD concepts are being pursued (especially within the
Teraflop project). Special architectures that integrate the working
memory into ASICs are unfortunately usually limited by their small
memory and therefore cannot be fully utilized. The largest models can
really only be simulated with DRAMs, and these are fast enough if one
uses a sufficient number of data lines and accesses them in
static-column mode as often as possible.

At the moment I am still busy with a pipeline architecture for
molecular dynamics simulations (I am a theoretical physicist, after
all). I will finish my dissertation in that field this autumn.

What is your institute interested in? Only software structures, or
also efficient implementation in hardware?

Best regards,

Martin

------------------------------------------------------------------------

From: Andreas Zell <Zell@informatik.uni-stuttgart.de>
Date: Thu, 13 Jun 91 17:24:15 +0200
Message-Id: <9106131524.AA21428@asdec.informatik.uni-stuttgart.de>

At the Universitaet Stuttgart, IPVR (Institut fuer Parallele und
Verteilte Hoechstleistungsrechner), we are looking into the same
problem of how to implement a wide range of different neural network
paradigms most efficiently on a SIMD machine (on our MasPar MP-1216)
and would be glad to share our information with you. There seems to
be no complete methodology in your sense, and I very much doubt you
will find one that does everything you want without sacrificing
speed. It also depends very much on the efficiency of the machine's
communication hardware, and thus differs between the NEWS grid of the
CM, the X-grid of the MasPar, and the NEWS grid of the DAP (assuming
you try to avoid the much less efficient general routing mechanisms
of these machines as much as you can).
It is no accident that most published papers describe implementations
of backpropagation on fully connected feedforward networks (some even
with the same number of units in each layer).  Most papers that we
know of fall into the class of what you call 'simple kinds of
problems' (which, I am sure you will find, are not quite so simple
once you actually program them on a parallel machine); i.e., they
deal with regular, usually fully connected topologies.
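
{To illustrate why such regular topologies are the easy case: a whole
fully connected layer is one dense matrix-vector product, the same
multiply-accumulate on every processor with purely regular
communication, and backpropagation is just the transposed product.
A minimal NumPy sketch of my own; the layer sizes are invented and
this is not code from any of the papers below:}

  import numpy as np

  rng = np.random.default_rng(0)
  n_in, n_out = 256, 128                  # invented layer sizes
  W = rng.standard_normal((n_out, n_in))  # dense weight matrix
  x = rng.standard_normal(n_in)           # input activations

  h = np.tanh(W @ x)                      # forward pass: one regular step
  delta = rng.standard_normal(n_out)      # error signal from above
  grad_W = np.outer(delta, x)             # weight gradients, equally dense
  err_x = W.T @ delta                     # backward pass: transposed product

{An irregular net breaks exactly this regularity: the matrix becomes
sparse and the per-processor work unequal.}
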
Now for some references:

K. A. Grajski: Neurocomputing using the MasPar MP-1, Ford Aerospace,
Advanced Dev. Dept., Tech Rep. No. 90-010, Mail Stop X-22, San Jose,
CA 95161-9041, Oct. 90

K. A. Grajski, G. Chinn, C. Chen, C. Kuszmaul, S. Tomboulian: Neural
Network Simulation on the MasPar MP-1 Massively Parallel Processor,
Proc. INNC, Paris, France, 1990

X. Zhang, M. McKenna, J. P. Mesirov, D. L. Waltz: An Efficient
Implementation of the Back-propagation Algorithm on the Connection
Machine CM-2,

A. Singer: Implementations of Artificial Neural Networks on the
Connection Machine, Thinking Machines Corp., Tech. Rep. RL90-2, Jan.
1990, also in Parallel Computing, summer 1990

S. N. Gupta, M. Zubair, C.E. Grosch: Simulation of Neural Networks on
Massively Parallel Computer (DAP-510) using Sparse Matrix Techniques,
Dept. of Comp. Sc., Old Dominion Univ. Norfolk, VA 23529-0162, May
1990 

J. Yadegar, R. Thanakij: The DAP as a Neuron Simulator, Active Memory
Technology, Inc. 16802 Aston Street, #103, Irvine, CA 92714

C. L. Wilson, R. A. Wilkinson, M. D. Garris: Self-Organizing Neural
Network Character Recognition Using Adaptive Filtering and Feature
Extraction, in Neural Networks, Vol. 3, 1991 [work done on a DAP]

=====

It would be nice if you could share the answers to your query with us,
or better, post them.

   Andreas Zell

======================================================================
Dr. Andreas Zell  +49 (711) 7816-350  zell@informatik.uni-stuttgart.de
Univ. Stuttgart, IPVR, Breitwiesenstr. 20-22, D-7000 Stuttgart 80, FRG
======================================================================

------------------------------------------------------------------------

Date: Thu, 13 Jun 91 18:00:46 +0100
From: M.Azema@cs.ucl.ac.uk

I initially did some work on the implementation of neural networks
on Transputer-based machines (MIMD). I am now carrying out the same
type of work for massively parallel SIMD machines.
Briefly, it covers two (related) aspects:
	1- Design of a system that automatically implements neural
		network models on parallel machines
	2- Analytical study of the performance to be expected,
		depending on the neural network distribution chosen
		and on the parallel machine.
Both (1) and (2) involve implementations, so we are also implementing
different neural models on a DAP. Readable reports should be
available pretty soon. Meanwhile, if you need more information, email me.
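
{A toy version of such an analytical model, to make the idea
concrete; this is my illustration, not from her reports, and all
constants are invented:}

  # Toy model: time per step for n_w weights on p processors, with
  # t_op seconds per multiply-accumulate and t_comm seconds per value
  # sent through the general router.
  def step_time(n_w, p, n_routed, t_op=1e-7, t_comm=1e-5):
      compute = (n_w / p) * t_op    # perfectly balanced arithmetic
      comm = n_routed * t_comm      # off-grid communication
      return compute + comm

  # The same 10^6-weight net on 4096 PEs, under two distributions:
  print(step_time(1e6, 4096, n_routed=100))    # mostly grid-local
  print(step_time(1e6, 4096, n_routed=5000))   # router-heavy

{Which distribution wins depends on the machine's ratio of t_comm to
t_op, which is the point of studying it analytically.}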

I hope this is useful.
Magali E. Azema Barac
Computer Science Dept, University College London
Gower Street, London WC1 E6BT

------------------------------------------------------------------------


THAT IS IT!

  Lutz

Lutz Prechelt   (++49/721/608-4317,  FAX: ++49/721/697760)
Institut fuer Programmstrukturen und Datenorganisation
Universitaet Karlsruhe;  D-7500 Karlsruhe 1;  Germany
prechelt@ira.uka.de  or  prechelt!ira.uka.de@relay.csnet

-- 
=========================== MODERATOR ==============================
Steve Stevenson                            {steve,fpst}@hubcap.clemson.edu
Department of Computer Science,            comp.parallel
Clemson University, Clemson, SC 29634-1906 (803)656-5880.mabell