floyd@BRL.MIL (09/20/89)
I am interested in locating implemented neural networks for the Butterfly. Any information as to their existence or how I may go about obtaining any would be appreciated. I would also be interested in finding references to such work.

Thanks in advance,
Floyd Wofford
floyd@brl.mil
marcoz@MARCOZ.BOLTZ.CS.CMU.EDU (Marco Zagha) (09/20/89)
In article <6517@hubcap.clemson.edu>, floyd@BRL.MIL writes:
>
> I am interested in locating implemented neural networks
> for the Butterfly. Any information as to their existence
> or how I may go about obtaining any would be appreciated.
> I would also be interested in finding references to such
> work.

In Blelloch and Rosenberg, "Network Learning on the Connection Machine" (IJCAI-87) there is a reference to:

M. Fanty, "A Connectionist Simulator for the BBN Butterfly Multiprocessor," Butterfly Project Report 2, Computer Science Department, University of Rochester, January 1986.

== Marco (marcoz@cs.cmu.edu)
bukys@cs.rochester.edu (09/21/89)
The Rochester Connectionist Simulator was, at one time, ported to the Butterfly 1, running under the Chrysalis operating system. This was one of our first big Butterfly projects -- so it happened a long time ago, predating BBN's Uniform System, Streams I/O package, etc. This means that it was made to work by whipping up our own home-grown pile of servers, daemons, libraries, etc. It also means that the simulator itself uses its own synchronization package based on dual queues, and shared memory based on Chrysalis Map_Obj calls.

I would be willing to give you what we have, but, I warn you, it will be WORK to make it work again. Not plug and play. What you have in the end will have no graphics interface either, it will rely on a clumsy I/O and terminal interface built on some special daemons, and it will only work under Chrysalis.

In spite of the above, all in all it worked pretty well. It didn't really make it possible to execute small networks extremely quickly, but it did make it possible to execute very large networks at reasonable speed.

Meanwhile, a whole new set of issues arises when you start talking about very large networks on large-scale parallel machines (we had a 120-processor machine). The easiest part of parallelization is getting the kernel of the simulator running in parallel -- once that's done, it's done for all users. The harder part is building the network in the first place -- that involves application-specific user code, so the user has to become a parallel programmer whether or not he/she wants to be (either that, or wait a long time to build the network serially before running it in parallel). The Butterfly simulator opened up the first bottleneck by providing a parallel name table implementation. (Rough sketches of both the parallel update kernel and the parallel name table idea follow this post.)

This was all done by Mark Fanty at the University of Rochester. It was written up in TR164 (January 1986), which can be had for $1 from "Technical Reports", Computer Science Department, University of Rochester, Rochester, NY 14627. The current manual for the uniprocessor simulator is TR233 ($10), which is also available for ftp from cs.rochester.edu. There was also a CACM article about connectionism and, I think, a sidebar on the Butterfly simulator, from about the same time (1986).

My personal opinion is that the Butterfly is a great architecture for this sort of thing on a large scale -- better than hypercubes. On a small scale, bus-based multiprocessors might win, but they peter out at only 10 processors or so. I'm sure that it would be easier to do another port today, with the new tools available from BBN -- the Mach OS, the "uniform system" library, etc. Nobody here has the time to do it, though.

------------

Meanwhile, somebody else here has another simulator, this one tuned to running recurrent backpropagation nets much better than the more general-purpose but inefficient "Rochester" simulator. His name is Patrice Simard, and I will see if I can get him to post something about the state of his software. I know that it has been used by another student, Alan Cox, as a sort of benchmark to run under his Butterfly OS called Emerald, so I think it's already parallelized, and may even use Mach primitives to do the dirty work.

Liudvikas Bukys <bukys@cs.rochester.edu>
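To make the "easy part" of the parallelization concrete, here is a minimal sketch of a synchronous parallel update kernel of the kind described above: each worker owns a slice of the units and, every cycle, recomputes its units' outputs from the previous cycle's values, with a barrier between cycles. It uses POSIX threads purely as a stand-in for the Butterfly's shared memory and task primitives; all of the names (Unit, NUNITS, worker, and so on) are illustrative, not taken from the Rochester simulator.

/*
 * Sketch: synchronous parallel unit updates with a per-cycle barrier.
 * Each worker owns a contiguous slice of the (shared) unit array.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NUNITS   1024
#define NWORKERS 8
#define NCYCLES  100

typedef struct {
    int    nin;
    int    in[8];        /* indices of input units     */
    double w[8];         /* weights on those inputs    */
    double out, newout;  /* double-buffered activation */
} Unit;

static Unit net[NUNITS];
static pthread_barrier_t cycle_barrier;

static void *worker(void *arg)
{
    long id = (long)arg;
    int lo = (int)(id * NUNITS / NWORKERS);
    int hi = (int)((id + 1) * NUNITS / NWORKERS);

    for (int c = 0; c < NCYCLES; c++) {
        for (int u = lo; u < hi; u++) {        /* compute phase */
            double sum = 0.0;
            for (int i = 0; i < net[u].nin; i++)
                sum += net[u].w[i] * net[net[u].in[i]].out;
            net[u].newout = 1.0 / (1.0 + exp(-sum));
        }
        pthread_barrier_wait(&cycle_barrier);   /* all computed      */
        for (int u = lo; u < hi; u++)           /* commit phase      */
            net[u].out = net[u].newout;
        pthread_barrier_wait(&cycle_barrier);   /* before next cycle */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NWORKERS];

    /* toy network: each unit reads from its predecessor */
    for (int u = 0; u < NUNITS; u++) {
        net[u].nin = 1;
        net[u].in[0] = (u + NUNITS - 1) % NUNITS;
        net[u].w[0] = 0.5;
        net[u].out = (double)rand() / RAND_MAX;
    }

    pthread_barrier_init(&cycle_barrier, NULL, NWORKERS);
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(tid[i], NULL);

    printf("unit 0 output after %d cycles: %f\n", NCYCLES, net[0].out);
    return 0;
}

Once a kernel like this works, it works for every user's network, which is why it is the easy half of the problem.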
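The parallel name table idea can be sketched the same way: hash each unit name into a bucket with its own lock, so workers building different parts of the network register and look up names concurrently instead of funneling through one serial table during network construction. Again, this is a hypothetical illustration in POSIX threads, not Fanty's Chrysalis code; name_unit and lookup_unit are made-up names.

/*
 * Sketch: a lock-per-bucket name table for parallel network construction.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 256

typedef struct entry {
    char         *name;
    int           unit;   /* index of the named unit */
    struct entry *next;
} Entry;

static struct {
    pthread_mutex_t lock;
    Entry          *head;
} table[NBUCKETS];

static unsigned hash(const char *s)
{
    unsigned h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h % NBUCKETS;
}

/* Register a name -> unit binding; safe to call from many workers at once. */
static void name_unit(const char *name, int unit)
{
    unsigned b = hash(name);
    Entry *e = malloc(sizeof *e);
    e->name = strdup(name);
    e->unit = unit;
    pthread_mutex_lock(&table[b].lock);
    e->next = table[b].head;
    table[b].head = e;
    pthread_mutex_unlock(&table[b].lock);
}

/* Look a name up, e.g. when wiring connections between units. */
static int lookup_unit(const char *name)
{
    unsigned b = hash(name);
    int unit = -1;
    pthread_mutex_lock(&table[b].lock);
    for (Entry *e = table[b].head; e; e = e->next)
        if (strcmp(e->name, name) == 0) { unit = e->unit; break; }
    pthread_mutex_unlock(&table[b].lock);
    return unit;
}

int main(void)
{
    for (int b = 0; b < NBUCKETS; b++)
        pthread_mutex_init(&table[b].lock, NULL);

    /* single-threaded demo of the interface */
    name_unit("retina[0][0]", 42);
    printf("retina[0][0] -> unit %d\n", lookup_unit("retina[0][0]"));
    return 0;
}

With many buckets relative to the number of workers, contention on any one lock stays low, which is the property that lets network construction proceed in parallel rather than serially.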