[mod.ai] connectionism/complexity theory

KRULWICH@C.CS.CMU.EDU (Bruce Krulwich) (06/10/86)

The June 2nd issue of Business Week contained an article about
connectionist (parallel distributed processing) models.  It mentioned
a Bell Labs project that set up a neural network which solved the
traveling salesman problem approximately but quickly.  I'm interested
in articles or other information about this project, or about any
other project linking connectionism with complexity theory, i.e.,
connectionist approaches to graph problems or models that solve other
"classical" algorithm design problems.

				Bruce Krulwich

				ARPAnet: KRULWICH@C.CS.CMU.EDU
				Bitnet:  BK0A%TC.CC.CMU.EDU@CU20B

ambar@EDDIE.MIT.EDU (Jean Marie Diaz) (07/06/86)

(an article published in the April 1, 1985 edition of Fortune--posted
w/out permission)

WHAT BELL LABORATORIES IS LEARNING FROM SLUGS

[...]  Inspired by the discoveries of physicist John Hopfield, a team
of Bell Labs scientists has been using research on slugs' brains to
develop a radically new type of computer.  [...]  The Bell computer
does not always select the single [best traveling salesman] route,
but--much like a human--it comes up with one of the better routes, and
never picks anything obviously loony.
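
The traveling-salesman work described here is almost certainly
Hopfield and Tank's 1985 paper ("Neural" Computation of Decisions in
Optimization Problems, Biological Cybernetics 52).  For the curious,
the mechanism can be summarized; the notation below follows my memory
of the paper, so treat it as a sketch and check the original.  A tour
over n cities is encoded by unit values v_{Xi}, read as "city X
occupies tour position i," and the network minimizes the energy

E = \frac{A}{2}\sum_{X}\sum_{i}\sum_{j\neq i} v_{Xi} v_{Xj}
  + \frac{B}{2}\sum_{i}\sum_{X}\sum_{Y\neq X} v_{Xi} v_{Yi}
  + \frac{C}{2}\Bigl(\sum_{X}\sum_{i} v_{Xi} - n\Bigr)^{2}
  + \frac{D}{2}\sum_{X}\sum_{Y\neq X}\sum_{i} d_{XY}\, v_{Xi}\,(v_{Y,i+1} + v_{Y,i-1})

The A, B, and C terms penalize anything that is not a valid
permutation of cities, and the D term adds in the tour length (d_{XY}
is the distance between cities X and Y), so low-energy steady states
are short valid tours: found quickly, but with no guarantee of the
optimum, which matches the behavior reported above.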

New techniques for recording neurological activity in rats and in
three types of slugs--favored because of their large and accessible
nerve cells--are providing Bell's team with reams of information about
how neurons work.  But the conceptual focus of the Bell project is the
model of the new neural-network computer created by Hopfield, 51, who
splits his time between Bell Labs and the California Institute of
Technology.  Neural networks operate in the analog mode--when
information enters the brain, the neurons start firing and their
values, or "charges," rise and fall like electric voltage in analog
computers.  When information is digested, the network settles down
into a so-called steady state, with each of its many neurons resting
close to its highest or lowest value--effectively, then, either on
or off.  A computer designed to mimic a neural network would solve
problems speedily by manipulating data in analog fashion, but it
would report its findings once each neuron is in either the on or off
state, operating like a digital computer speaking a binary language.
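
The settling process just described can be stated compactly.  Below
is a minimal discrete-time sketch in Python; the function names and
the toy weight matrix are illustrative, not from the Bell project.
Each unit repeatedly aligns itself with the weighted sum of its
inputs, and because the weights are symmetric an energy function can
only decrease, so the network must come to rest with every unit
pinned at one extreme:

import numpy as np

def energy(W, s):
    # Hopfield's energy; every accepted update lowers it, which is
    # why the network cannot cycle and must settle.
    return -0.5 * s @ W @ s

def settle(W, s):
    # Asynchronous threshold updates: sweep in random order until no
    # unit wants to change -- that is the steady state.
    while True:
        changed = False
        for i in np.random.permutation(len(s)):
            new = 1 if W[i] @ s >= 0 else -1
            if new != s[i]:
                s[i], changed = new, True
        if not changed:
            return s

# toy 4-unit network: symmetric weights, zero diagonal
W = np.array([[ 0.,  1., -1.,  1.],
              [ 1.,  0., -1.,  1.],
              [-1., -1.,  0., -1.],
              [ 1.,  1., -1.,  0.]])
s = np.array([1, -1, 1, 1])
print(energy(W, s))         # 2.0 before settling
s = settle(W, s)
print(energy(W, s), s)      # -6.0 after; every unit at +1 or -1

(The real circuit is continuous-valued, as the article says; the
discrete version keeps the same energy argument.)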

The simulated computer designed by Hopfield and his AT&T colleagues
uses microprocessors to do the work of neurons.  Each microprocessor
is connected to all others--as many neurons are interconnected--which
would make the machine costly and complex to build.  Another major
difference between this computer and traditional ones is that memory
is not localized in any one processor or set of processors.  Instead,
memory is in the patterns formed by all the neurons, whether on or
off, when they are in steady states.  As a result, the computer can
deal with fragmentary or imprecise information.  When given a
misspelled name, for example, it can retrieve the full name and data
about the person by settling on the closest name in the network.
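
That name-completion behavior follows from storing each name as a
steady state of the network.  Here is a minimal sketch, assuming the
textbook Hebbian (outer-product) storage rule rather than whatever
Bell's hardware actually does; the function names and the eight-unit
"names" are mine:

import numpy as np

def store(patterns):
    # Hebbian outer-product rule: each stored pattern becomes a
    # steady state, and the memory lives in the whole weight matrix,
    # not in any one unit.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W

def recall(W, cue, sweeps=10):
    # Settle from a corrupted cue; the network falls into the
    # nearest stored pattern.
    s = cue.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# two stored 8-unit "names", chosen orthogonal to keep crosstalk low
names = np.array([[ 1,  1,  1,  1, -1, -1, -1, -1],
                  [ 1, -1,  1, -1,  1, -1,  1, -1]])
W = store(names)
cue = names[0].copy()
cue[1], cue[2] = -cue[1], -cue[2]          # "misspell" two units
print((recall(W, cue) == names[0]).all())  # True: full name recovered

With this rule a network of n units can hold only on the order of
0.15n patterns before the steady states start to blur together, one
concrete sense in which the approach trades precision for capacity
and speed.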

Though analog computation is astonishingly fast, it sacrifices
precision.  Neural-network computers work best on problems that have
more than one reasonable solution.  Examples include airline
scheduling, superfast processing for robots or weapons, and, more in
AT&T's line, routing long-distance telephone traffic.

-John Paul Newport
-- 

					AMBAR
		"I need something to change your mind...."