[misc.misc] THINKING COMPUTERS ARE A REALITY

c50p-az@dorothy.Berkeley.EDU (E. Stephen Mack) (10/15/86)

[]

The following article is quoted from the San Francisco Chronicle/Examiner
"Sunday Punch" of October 12, 1986, page 5.  The article is long and poorly
written, but its content makes up for that.

Anything appearing within square brackets ([]) in this article is placed
there by me.  (My general comments appear at the end.)

(Reprinted without permission)  
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
by Curt Suplee

     [...A] radically new form of computer architecture -- and a
revolutionary conception of synthetic thought -- are bringing that prospect
[of machines that can actually think] disconcertingly close to reality:

     o  In Baltimore, a bucket of chips is teaching itself to read.

     o  In Cambridge and San Diego, blind wires are learning to see in
three dimensions.

     o  In Pittsburgh, terminals are talking back to their users. [...]

     At the heart of the new machines is a system called a neural network:
a circuit designed to replicate the way neurons act and interact in the
brain.

     [...] Caltech biochemist John J. Hopfield's [...] prototypical neural
network uses an amplifier to mimic the neuron's core and a set of mathematical
routines called algorithms to determine how each pseudo-neuron will process
its data.

     Incoming lines from other "cells" are run through a set of capacitors
and resistors that control the neuron's resting threshold.  And to
simulate the difference between excitatory and inhibitory signals, the
amplifier has two output lines, one positive, one negative.
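The threshold arrangement just described can be sketched in a few lines of Python.  This is only an illustration of the idea, not Hopfield's actual circuit equations; the function name and the weight values are invented for the example.

```python
# A minimal sketch of the pseudo-neuron described above: incoming lines
# carry weights (positive for excitatory signals, negative for
# inhibitory ones), the weighted sum is compared against a resting
# threshold, and the unit either fires (+1) or stays quiet (-1).

def neuron_output(inputs, weights, threshold=0.0):
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else -1

# Two excitatory lines firing together push the unit past threshold:
print(neuron_output([1, 1], [0.6, 0.6]))   # -> 1
# A strong inhibitory line holds it below threshold:
print(neuron_output([1, 1], [0.6, -0.9]))  # -> -1
```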

     Such systems are capable of astounding speed, because, as Hopfield
and David Tank (of Bell Laboratories' Department of Molecular Biophysics)
write in Biological Cybernetics, "a collective solution is computed on the
basis of the simultaneous interactions of hundreds of devices" producing a
sort of blitzkrieg committee decision.

     Those strengths are exquisitely well suited to some of the worst bio-
tech bugaboos in modern engineering: getting industrial robots to see 
properly; building defense systems to analyze images or sonar signals as
fast as they are received; developing systems that can recognize and respond
to speech.  No wonder there are now scores of scientists probing the networks'
potential.

     The TRW company has one neural-network computer already for sale and
another set for imminent release.

     Neural networks are besting mainframes at some of the toughest problems
in the computational chipstakes [sic].  Astonishing new products are expected
by the early '90s, and research is expanding in a dozen directions.

     "Listen to that," says Johns Hopkins biophysicist Terrence Sejnowski,
ear cocked toward the tape player.  The sound is an eerie, tweetering gargle
like some aborigine falsetto -- ma mnamnamna neeneenee mnunu bleeeeeeeeee.

     "It's discovering the difference between vowels and consonants,"
Sejnowski says, face still rapt after countless demonstrations.  He's
listening to a neural network teaching itself to read aloud.

     Working with Charles R. Rosenberg of Princeton's Psychology
Department, Sejnowski designed a network whose task was to learn to
pronounce correctly a group of sentences containing 1000 common English
words.

     They had been read previously by a little boy, and a linguist had
transcribed the boy's speech into phonemes (the discrete parts of
words), which would serve as the benchmark for the network's accuracy.

     The system was designed to begin in complete ignorance and "learn"
just as a child does -- by being told he is wrong.  That is, the output
end of the system would record each squawk the network sent to a speech
synthesizer, compare it with the correct phonemes recorded by the
linguist and send an error signal to inform the network how far off it
had been from the desired sound.

     Then the network, using a system called "back-propagation," would
begin amending itself backwards: Each layer of processing cells would
pass along the error code to the layer beside or below it, with orders
to change its output next time it encountered those particular letters.
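The loop described above -- forward pass, error signal, backward corrections -- can be sketched on a toy task.  NETtalk's real input was windows of letters mapped to phonemes; the miniature below merely classifies six one-hot-coded letters as vowel or consonant.  The layer sizes, learning rate, and epoch count are arbitrary choices for the example, not the actual NETtalk parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: tell vowels from consonants among the letters a..f.
X = np.eye(6)                                        # one-hot letter codes
T = np.array([[1.], [0.], [0.], [0.], [1.], [0.]])   # a and e are vowels

W1 = rng.normal(0, 0.5, (6, 4)); b1 = np.zeros(4)    # hidden layer
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)    # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: the network's current "squawk."
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Error signal: how far off from the desired output.
    err = y - T
    # Back-propagation: each layer passes the error along to the
    # layer below it, with orders to change its output next time.
    g2 = err * y * (1 - y)
    g1 = (g2 @ W2.T) * h * (1 - h)
    W2 -= h.T @ g2;  b2 -= g2.sum(axis=0)
    W1 -= X.T @ g1;  b1 -= g1.sum(axis=0)

# After training, the network's outputs match the targets.
pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
```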

     The tape contains the results.  Within an hour, the network is
beginning to pause at intervals ("See -- it's finding out about word
boundaries") and soon is hitting 20 to 30 percent right.

     After running all night, it's virtually perfect: "I like tagota
my grandmother's house."  And soon it is pronouncing correctly words
it has never seen before.

     Each of the system's 200 [!] cells has modified its equations hundreds
of times.  The scientists know it has taught itself.  But they don't know
how.  Nor can they predict exactly where in the mess it will store its
knowledge.

     "The network has obviously learned to extract something about English
pronunciation," Sejnowski says.  "Otherwise it couldn't generalize.  This
system can discover the rules."

     Some of what the computer has done is downright spooky:  Although
each cell is identical when the program begins running, "what we are
discovering is that these cells do tend to specialize in certain patterns --
some in vowels, some in consonants, some in certain phonemes."

     Nobody told it how to do this.  Nobody knows exactly how it did it.
Neural networks program themselves.

     [...N]eural networks are beginning to develop some [...] capabilities
[...] of associative memory and rapid "close-enough" solutions to
unspeakably complicated problems.

     "Close enough" may be a poor criterion for brain surgery and winning
the lottery, but for many problems in biomechanical engineering, robotics
and pattern recognition, "a good solution obtained very quickly is
better than waiting for the perfect solution," Tank says.

     Future designs may take advantage of the neural net's ability to
sustain massive loss, thanks to its decentralized structure.

     "Cut just one wire on a conventional computer," says Sejnowski,
"and the machine will stop dead.  But you can cut large sections out
of this network, and it doesn't even feel it.  It'll make a few more
errors occasionally," like the brain after a concussion.  "But no
single connection is essential."
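The damage tolerance Sejnowski describes is easy to demonstrate on a small Hopfield-style associative memory: store one pattern, sever almost a third of the connections at random, and recall still succeeds.  The network size and the 30 percent cut are illustrative choices, not figures from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Store one pattern with the Hebbian outer-product rule.
n = 20
pattern = rng.choice([-1, 1], size=n)
weights = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(weights, 0)          # no self-connections

# "Cut large sections out of this network": sever ~30% of the wires.
cut = rng.random(weights.shape) < 0.3
weights[cut] = 0.0

# One update step: each unit sums its surviving inputs and fires on
# the sign.  The stored pattern is still recovered intact, because no
# single connection is essential.
recalled = np.where(weights @ pattern >= 0, 1, -1)
```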

     That's a net plus for [manager of TRW's Artificial Intelligence
Center at Rancho Carmel, California, Robert] Hecht-Nielsen, whose work is
financed in part by the Pentagon's Defense Advanced Research Projects
Agency [!]: "Our customers like the idea that it might be able to take a few
bullets and keep on running."  (So does the Jet Propulsion Laboratory, whose
deep-space vehicles have to function for years.)

     Aside from the defense uses, Hecht-Nielsen expects neural networks to
promote dramatic improvements in robotics.  "The big problem with today's
industrial robots is that they have very primitive visual systems."
Networks, however, can program themselves "to learn to discriminate between
good and bad products."

     Hecht-Nielsen is equally enthusiastic about innovations in "the human
interface arena."  He foresees retrieval systems that exploit the networks'
capacity for "close-enough" or "near-match" solutions so that they'll reach
out and find the right data even when the user specifies "only some corrupted
version" of the right item.
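The "near-match" retrieval idea can be illustrated without any network at all, using fuzzy string matching from the Python standard library.  The stored records and the corrupted query below are invented for the example.

```python
import difflib

# A user asks for "only some corrupted version" of the right item;
# close-enough matching still reaches out and finds the real record.
records = ["neural networks", "back propagation", "associative memory"]
query = "nueral netwroks"              # garbled version of the right item

match = difflib.get_close_matches(query, records, n=1)[0]
print(match)                           # -> neural networks
```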

     And the self-programming nets could save us from ourselves.  "Most
people who use a computer make mistakes, type the wrong keys.  Well, we
could have a keyboard that simply remembers your corrections and learns
the patterns."

     Then when you hit the wrong key, "it would end up doing what you
mean, not what you say."
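The self-correcting keyboard amounts to remembering (typed, intended) pairs and replaying them -- a hypothetical sketch, with the class and method names invented here:

```python
class LearningKeyboard:
    """Remembers your corrections and learns the patterns."""

    def __init__(self):
        self.corrections = {}

    def correct(self, typed, intended):
        # The user fixes a mistake once; the keyboard takes note.
        self.corrections[typed] = intended

    def type(self, word):
        # Do what you mean, not what you say.
        return self.corrections.get(word, word)

kb = LearningKeyboard()
kb.correct("adn", "and")     # fix the typo once...
print(kb.type("adn"))        # -> and   (...and it's remembered)
print(kb.type("cat"))        # -> cat   (unknown words pass through)
```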
                                                          Washington Post
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
[end of quoted material]

Comments:

1. Possibilities are enormous, far beyond just the intelligent keyboards
   or robotics mentioned.  Robotic servants are not at all inconceivable,
   along with robotic teachers, robotic firepersons, or a robot for any
   job hazardous to humans.

2. So enormous, in fact, that this could revolutionize society as we
   know it.  All science fiction novels and movies about robots that
   you and I had rejected as unlikely now seem to me within the realm
   of possibility.

Therefore, followups in these areas are encouraged:

1. More information:  Since this project is apparently sponsored by
   DARPA, and since the institutions mentioned in the article are for
   the most part on the net, a current update shouldn't be too difficult
   to find.  What academic and professional journals have neural networks
   been covered in?

2. Philosophical/moral implications:

   a. Consciousness of neural networks.
   b. Morality of "learning" servants:  Suppose one somehow "learns" to
      not want to obey?

3. Societal implications:

   a. New technological advancements in computers and robotics
   b. Possible advancements in brain-related fields (brain surgery, psychology)
   c. Loss of jobs for humans associated with increased robotics technology.


I look forward to learning more about neural networks from you.

                                  [e. stephen]
                          -=-~-=--=-~-=--=-~-=--=-~-=-
Post:  E. Stephen Mack, 2408 Atherton Street, Berkeley, CA  94704

ARPA: stephen@miro.Berkeley.EDU      -or-      c50p-az@dorothy.Berkeley.EDU
UUCP: {u-choose}!ucbvax!miro!stephen -or- {u-choose}!ucbvax!dorothy!c50p-az

FRIENDLY DISCLAIMER:  Please realize that I am only stating what I think.
My opinions do not represent opinions of U.C. Berkeley.

Off to the service when you're walking slowly to the car
And the silver in her hair shines in the cold November air;
    You hear the tolling bell,
    And touch the silk in your lapel.

faustus@ucbcad.BERKELEY.EDU (Wayne A. Christopher) (10/15/86)

Net.general isn't the right place to post such an article.  First, it's
probably copyrighted and you shouldn't reproduce it; second, we've all read
newspaper articles about high-tech subjects and know how badly they are
usually written; and third, 90% of what goes under the heading of AI is
BS -- if you are interested in it there are more technical sources available
than the San Francisco Chronicle.  Followups to net.ai, please...

	Wayne