[comp.ai.digest] 500K connections per second

Dave.Touretzky@C.CS.CMU.EDU (10/08/87)

The inner loop (and most expensive part) of neural net simulations computes for
all j the net input to unit j, which is the sum for all i of the output of unit
i times the weight Wji on the connection from i to j.  This is just a multiply
and accumulate loop.  In fact, if you choose the right data structures, it's a
matrix-vector multiplication.  So when someone advertises that their
"neurocmputer" does 500K connections per second, they mean it does five hundred
fetch-multiply-accumulate operations per second.  This is a useful performance
measure because it is independent of the number of units and connections in the
model being simulated.
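To make that concrete, here is a tiny C sketch of the inner loop as I described
it.  This is my own illustration, not anybody's product code; the array sizes,
initial values, and the timing arithmetic are invented for the example.

/* Sketch of the simulator inner loop: for every unit j, accumulate the
 * weighted outputs of all units i.  With the weights stored as a matrix,
 * this is exactly a matrix-vector multiply.  NUNITS and the initial
 * values are arbitrary, just to make the program runnable.
 */
#include <stdio.h>
#include <time.h>

#define NUNITS 100

float weight[NUNITS][NUNITS];  /* weight[j][i] = Wji, connection from i to j */
float output[NUNITS];          /* current outputs of all units               */
float netin[NUNITS];           /* net inputs being computed                  */

void compute_net_inputs(void)
{
    int i, j;

    for (j = 0; j < NUNITS; j++) {
        float sum = 0.0f;
        for (i = 0; i < NUNITS; i++)
            sum += weight[j][i] * output[i];  /* fetch, multiply, accumulate */
        netin[j] = sum;
    }
}

int main(void)
{
    int i, j, pass;
    int passes = 1000;
    clock_t start, stop;
    double seconds;

    /* arbitrary weights and outputs, just so there is something to compute */
    for (j = 0; j < NUNITS; j++) {
        output[j] = 0.01f * j;
        for (i = 0; i < NUNITS; i++)
            weight[j][i] = 0.001f * (i - j);
    }

    start = clock();
    for (pass = 0; pass < passes; pass++)
        compute_net_inputs();
    stop = clock();

    seconds = (double)(stop - start) / CLOCKS_PER_SEC;

    /* each pass touches NUNITS*NUNITS connections, so "connections per
       second" is just that count divided by the elapsed time */
    printf("net input to unit 0 = %f\n", netin[0]);
    if (seconds > 0.0)
        printf("%.0f connections per second\n",
               (double)passes * NUNITS * NUNITS / seconds);
    return 0;
}

At 500K connections per second, one pass over the 100-by-100 weight matrix
above (10,000 connections) would take about 20 milliseconds.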

There is, unfortunately, no such thing as a commercially available
neurocomputer.  Presumably, a neurocomputer would either be a computer made out
of neurons, or a computer whose physical structure in some way resembled that
of the nervous system.  No product available today meets either of those tests.

What people are selling today as "neurocomputers" are just regular old
computers with some neural net simulation software.  For example, Hecht-Nielsen
NeuroComputer Corporation, the outfit that's been running those full-page
four-color ads in AI Magazine, sells their ANZA "neurocomputer" for $15,000.
The ANZA system is an off-the-shelf IBM PC/AT with an add-on board containing a
Motorola 68020 with floating point co-processor and 4Meg of memory.  For
roughly the same price you could buy a Sun III (same 68020 processor) and run
Unix and X-windows instead of PC-DOS.  In fact, Hecht-Nielsen will be
announcing a version of their simulation software for the Sun in the near
future.  That doesn't make the Sun III a neurocomputer, but then again, neither
is the ANZA.

The TRW Mark III is also a coprocessor built out of conventional components,
but it attaches to a Vax rather than an IBM PC.  The Science Applications
Corporation Sigma-1 is a high speed number cruncher based on a Harvard
architecture (the single processor has separate data and instruction paths); it
is not a neurocomputer.  Science Applications recently acquired a Connection
Machine which they plan to use for really heavy duty simulations.  (Connection
machines aren't neurocomputers either; they're much more general purpose than
that.  See the article by Blelloch and Rosenberg in IJCAI-87 for a report on
using a CM2 to simulate learning in neural nets.)

The TI Odyssey DSP (Digital Signal Processor) is another board that does fast
matrix-vector multiplies.  Like the other products I mentioned, it is a
conventional architecture, basically a handful of TMS 98020(?) hardware
multiplier chips.  I have a special fondness for Texas Instruments because even
though they do some interesting neural net research, they never use the
misleading term "neurocomputer" in their ads for the Oddyssey.

Will there ever be real neurocomputers?  Perhaps some day:

Some people are building VLSI circuits whose structure is based on an abstract
description of neural circuitry.  For example, a group at BELLCORE led by
Joshua Alspector and Robert Allen has designed a 54-unit "Boltzmann Machine"
chip.  The 54 neurons are physically implemented as separate processors on the
chip, and their N*(N-1)/2 weighted connections are also implemented by separate
pieces of circuitry, giving a fully parallel implementation.  This is terrific
work, but it will be quite a while before it has any commercial impact, because
it's hard to put a lot of neurons on one chip, and expensive to communicate
across multiple chips.  It is possible to cram several hundred neurons on a
chip if you go for fixed weights (resistors) rather than variable ones, but
then the network can't learn.
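For the curious, here is a rough C sketch of the textbook Boltzmann Machine
update rule that a chip like this computes with dedicated circuitry per unit
and per connection.  It is the standard rule done serially in software, not a
description of the Bellcore circuit; the weights, temperature, and number of
sweeps are invented for the example.

/* Standard Boltzmann Machine update rule, done serially in software.
 * Unit k is turned on with probability 1/(1 + exp(-gap/T)), where "gap"
 * is the weighted sum of the states of the units it connects to.
 */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N 54        /* number of units, as on the Bellcore chip */

float w[N][N];      /* symmetric weights: w[i][j] == w[j][i], w[i][i] == 0 */
int   s[N];         /* binary unit states, 0 or 1                          */

void update_unit(int k, float temperature)
{
    int i;
    float gap = 0.0f;   /* energy gap for turning unit k on */
    float p;

    for (i = 0; i < N; i++)
        gap += w[k][i] * s[i];

    p = 1.0f / (1.0f + (float)exp(-gap / temperature));
    s[k] = ((float)rand() / (float)RAND_MAX) < p;
}

int main(void)
{
    int i, j, sweep;

    /* invented symmetric weights and random initial states */
    for (i = 0; i < N; i++) {
        s[i] = rand() & 1;
        for (j = 0; j < i; j++)
            w[i][j] = w[j][i] = 0.1f * ((rand() % 21) - 10);
    }

    /* a few sweeps at a fixed temperature */
    for (sweep = 0; sweep < 100; sweep++)
        for (i = 0; i < N; i++)
            update_unit(i, 1.0f);

    printf("units: %d, distinct connections: %d\n", N, N * (N - 1) / 2);
    printf("state of unit 0 after 100 sweeps: %d\n", s[0]);
    return 0;
}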

Carver Mead and Massimo Sivilotti at Caltech have built a "silicon retina" low
level vision chip using analog (!) VLSI circuitry.  The chip's architecture was
inspired by the way real retinas do computation.

There is also work on optical implementations of neural networks, using lasers,
two-dimensional or volume holograms, and various mirrors and photosensors.  Two
of the big names in this area are Demetri Psaltis (Caltech) and Nabil Farhat
(Penn).  It will probably take longer for this technology to reach the
marketplace than for VLSI-based technologies, as it is in a much earlier stage
of development.

A group at Bell Labs has been growing real neurons on a special substrate with
embedded electrodes, so they can have an electronic interface to a living
neural circuit.  This is a neat way to study how neural circuitry works, but
they only deal with a handful of neurons at a time.  I doubt whether it will
ever be practical to design special-purpose computers from living neurons.

A good place to learn more about neuromorphic computer architectures (a more
decorous term than "neurocomputer", in my opinion) is the proceedings of neural
net conferences.  There are the proceedings of the 1986 Snowbird Meeting on
Neural Networks for Computing, published by the American Institute of Physics
in New York.  There's also the IEEE First International Conference on Neural
Networks, which was held in San Diego this past June.  And there's the IEEE
Conference on Neural Information Processing Systems - Natural and Synthetic,
which will be held in Denver, at the Sheraton Denver Tech Center, on November
8-12.  This conference was originally to take place in Boulder, but
registration has been so heavy it had to move to larger quarters at the last
minute.  The conference chairman is Yaser Abu-Mostafa at Caltech.

Sorry, I don't have information on how to order proceedings from the IEEE
conferences.  Contact the IEEE.

-- Dave Touretzky
-------