[comp.ai.neural-nets] Neuron Digest V5 #36

neuron-request@HPLABS.HP.COM ("Neuron-Digest Moderator Peter Marvit") (08/29/89)

Neuron Digest	Monday, 28 Aug 1989
		Volume 5 : Issue 36

Today's Topics:
				 CG Methods
		      Connectionism, a paradigm shift?
		    Re: Connectionism, a paradigm shift?
		    Re: Connectionism, a paradigm shift?
		    Re: Connectionism, a paradigm shift?
		    Re: Connectionism, a paradigm shift?
		    Re: Connectionism, a paradigm shift?
		    Re: Connectionism, a paradigm shift?
		    Re: Connectionism, a paradigm shift?
		    Re: Connectionism, a paradigm shift?
		    Re: Connectionism, a paradigm shift?
		    Re: Connectionism, a paradigm shift?
		     Paradigm Shift Response (sort of)
		   Re: Paradigm Shift Response (sort of)


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: CG Methods
From:    tedwards@cmsun.nrl.navy.mil (Thomas Edwards)
Date:    Mon, 28 Aug 89 13:08:38 -0400 

(Many people have written to me for more information on this conjugate
 gradient reference.  Since I know mail from me has bounced for a lot of
 people who wrote, here it is:)

The reference is:
   Kramer, A. and Sangiovanni-Vincentelli, A.  "Efficient Parallel Learning
   Algorithms for Neural Networks."  _Advances in Neural Information 
   Processing Systems I_, ed. D. Touretzky.  Morgan Kaufmann Publishers, Inc.,
   San Mateo, CA, 1989.

   ISBN 1-55860-015-9

This article discusses backprop, steepest descent, and conjugate-gradient
methods (using the Polak-Ribiere rule; the discussion of the rule itself is
good, but there is no serious discussion of the line minimization beyond a
reference to Luenberger) on The Connection Machine.  Results are reported,
but the actual parallel data representation and computation are not
discussed in much depth.
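
[[ Editor's Note: for readers without the book at hand, the Polak-Ribiere
rule chooses each new search direction as

    d(k+1)  = -g(k+1) + beta(k) d(k)
    beta(k) = g(k+1).(g(k+1) - g(k)) / (g(k).g(k))

where g(k) is the error gradient at step k and "." denotes the dot
product.  This is standard conjugate-gradient material; see, e.g.,
Luenberger.  -PM ]]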

I have been examining steepest descent and conjugate gradient methods.
I am finding that the minimum along the search direction often lies so
close to the current weight point that the line search has serious trouble
avoiding a "minimum" which is actually much worse than the current
position.  The line search is no doubt the most difficult part of
conjugate gradient and steepest descent programs, and finding good
starting points for it can be a chore.  I have yet to perform a steepest
descent or conjugate gradient search which is computationally more
efficient than backpropagation, but I know from using other conjugate
gradient programs that it is possible, although a lot of work (and
mathematical theory) has to go into the program design.  Perhaps that is
why conjugate gradient learning has not been explored as much as
backpropagation.
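
[[ Editor's Note: below is a minimal sketch (Python with numpy) of the
kind of conjugate-gradient training loop under discussion.  The crude
backtracking line search is one simple guard against accepting a point
worse than the current one; it is illustrative only, not the method of
the paper cited above.

    import numpy as np

    def cg_minimize(E, grad, w, n_steps=100, tol=1e-8):
        """Polak-Ribiere conjugate-gradient minimization of an error
        function E with gradient grad, starting from weight vector w."""
        g = grad(w)
        d = -g
        for _ in range(n_steps):
            # Backtracking line search along d.  Accept a step only if
            # it actually reduces E; this guards against the spurious
            # "minima" worse than the current point described above.
            alpha, E0 = 1.0, E(w)
            while E(w + alpha * d) >= E0 and alpha > 1e-12:
                alpha *= 0.5
            if alpha <= 1e-12:      # no descent along d: restart downhill
                d = -g
                continue
            w = w + alpha * d
            g_new = grad(w)
            if np.dot(g_new, g_new) < tol:
                break
            # Polak-Ribiere coefficient; reset to steepest descent if < 0
            beta = np.dot(g_new, g_new - g) / np.dot(g, g)
            d = -g_new + max(beta, 0.0) * d
            g = g_new
        return w

On a simple quadratic error such as E(w) = ||Aw - b||^2 this converges in
a handful of steps; on network error surfaces the line search is, as
noted above, the delicate part.  -PM ]]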

 -Thomas Edwards
 tedwards@cmsun.nrl.navy.mil 
 ins_atge@jhunix.hcf.jhu.edu
 ins_atge@jhuvms.BITNET

------------------------------

Subject: Connectionism, a paradigm shift?
From:    dave@cogsci.indiana.edu (David Chalmers)
Organization: Indiana University, Bloomington
Date:    Thu, 03 Aug 89 23:40:44 +0000 

[[ Editor's Note:  The following [edited] discussion started on the AI
bulletin board and appeared on the USENET group.  While much of the
discussion in the Digest is technical, we often need to take stock of the
broader question of what we're doing and why.  The parallels between the AI
craze and the current Neural Network phenomenon are too easy to make.  Should
we believe our own PR? -PM ]]


Almost half the papers at this month's upcoming Cognitive Science conference
are about connectionism!  For a field which just 3-4 years ago was very
small and the "radical new kid on the block," this is amazing growth.  Of
course, there has been much talk of a "paradigm shift."  But paradigm shifts
were never meant to happen this fast.  The electronic age seems to
accelerate everything (remember cold fusion?).  There's no chance for slow,
graceful growth in the field; the bandwagon has arrived, it's moving fast,
and the cry is to jump on before it's too late!

This unnatural acceleration has got to lead to unstable, unpredictable
consequences.  There already seems to be almost as much valueless work in
connectionism as there was in "traditional AI" (tweak this, try that, apply
here, generalize there, and quickly, before somebody else does!).
Prediction: within a year or two an "anti-connectionist" backlash will have
grown very prominent.  (There are already a few signs.)  After all the
hype, people will begin to grumble "come on, they're just smart pattern
recognizers/associators.  Can they really do _cognition_?"  In this
accelerated age, these views will quickly become conventional wisdom, and
many will jump off the bandwagon as quickly as they jumped on.

Meantime, in the background people will keep plugging away, doing good
connectionist work at the slow and steady pace that good science seems to
require.  My prediction: in the wash, connectionism (along with other
"emergent" approaches) will emerge as the dominant and most successful
paradigm, but not for another decade yet, and not before another couple of
violent swings in various directions.  Comments?

Dave Chalmers     (dave@cogsci.indiana.edu)      
Concepts and Cognition, Indiana University.
"Whereof one cannot speak, thereof one must make it all up."

------------------------------

Subject: Re: Connectionism, a paradigm shift?
From:    andrew@berlioz (Lord Snooty @ The Giant Poisoned Electric Head )
Organization: National Semiconductor, Santa Clara
Date:    Fri, 04 Aug 89 02:00:38 +0000 

In article <24241@iuvax.cs.indiana.edu>, dave@cogsci.indiana.edu (David Chalmers) writes:
[A discussion on fads and the rapid growth of connectionism, and a prediction
of its demise through hype]

I think you should crosspost this to comp.ai.neural-nets, whose members seem
to exhibit the usual healthy cynicism of a comp.* group; not a pack of
zealots by any means!

I agree that there exists a danger from over-rapid over-exposure and the
concomitant media hype. This is a constant warning cry made at the
conferences, and by people who popularise the field. You have to bear in
mind that we're only human, and become naturally excited even as researchers
and informed observers when new results appear. It is not necessary to
*immediately* understand the nature of the underlying mechanism when a new
and successful application is created (in this sense, your analogy to cold
fusion is spot-on).

I think that what is required to save the field from the "hype seesaw" is a
healthy rate of generation of solid new theoretical results.  Two fairly
recent results, for example, which could be seen to qualify:

1) A preprocessing paradigm using a simple one-layer net and an easily-
   implementable learning algorithm, which extracts the principal
   eigenvectors of the input autocorrelation - useful for image
   compression, etc.  (A sketch of this kind of learner follows the list.)
   In particular, information-theoretic approaches are producing new
   results.  [Sanger, Linsker, Foldiak]
2) A formal proof of an algorithm for a restricted class of nets, which
   predicts detailed network dynamics given the training pattern set.
   [Lemmon, Kumar]
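
[[ Editor's Note: a minimal sketch (Python with numpy) of the kind of
one-layer eigen-extracting learner mentioned in item 1, written here as
Sanger's generalized Hebbian rule; the cited algorithms differ in their
details, so treat this as illustrative only.

    import numpy as np

    def sanger_step(W, x, lr=0.01):
        """One update of Sanger's generalized Hebbian algorithm.
        W: (m, n) weights, one row per output unit; x: (n,) input.
        The rows of W converge toward the leading m eigenvectors of
        the input autocorrelation E[x x^T]."""
        y = W @ x
        # Hebbian term minus a lower-triangular decorrelation term:
        # each unit learns the component left over by the units above it.
        return W + lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

Fed zero-mean image patches repeatedly, the rows of W form a basis for
compression by projecting patches onto the top components.  -PM ]]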

There is a tremendous amount of high-quality work going on, bolstered by the
application of formal mathematical techniques.  It seems to me that this
truly sets NN research apart from the much more "hand-waving" stuff that I
encountered when looking at conventional AI, when expert systems were on the
rise in the early- and mid-80s.  There one found tree-traversal stuff and
Bayesian statistical variations, definitions of "frames" and the like; the
ad hoc component was significant.  (Although fuzzy set theory has, I have
to agree, set some of this on a more formal footing.)

The analogy I have in mind equates NN research with the microstructure of
cognition, and as such it is akin to "physics".  When dealing with the atoms
of behaviour, it's possible to produce significant and fundamental results.
Symbolic AI smacks to me much more of "inorganic chemistry".

The consensus view seems to be that these two paradigms will eventually
cooperate in future artificial cognitive systems. Work is already
ongoing to combine expert systems with NN coprocessors. However, taking
the brain as an existence proof, it's clear that NN technology can
implement all levels of cognition, whereas it is unclear whether symbolic
methods are capable of this [see e.g. Steve Harnad: subsymbolic and 
symbolic processing].

...........................................................................
Andrew Palfreyman	There's a good time coming, be it ever so far away,
andrew@berlioz.nsc.com	That's what I says to myself, says I, 
time sucks					   jolly good luck, hooray!

------------------------------

Subject: Re: Connectionism, a paradigm shift?
From:    dmark@cs.Buffalo.EDU (David Mark)
Organization: SUNY/Buffalo Geography
Date:    Sat, 05 Aug 89 12:54:24 +0000 


I agree that it does look like a paradigm shift, since it is a radically new
way to look at some problems.

I have not yet become very interested in connectionist models because, as a
scientist rather than an engineer, I am interested primarily in seeking
_explanation_ rather than _performance_.  There is little doubt that NN
programs based on the connectionist paradigm perform some computing tasks
very well, including some (many) tasks of an AI/ES flavor.  But I am not
aware of a lot of success in understanding what the weights _MEAN_, except
in some specialized fields such as low-level vision work, in which we also
have neurophysiological evidence.  Now, I only read a small proportion of
the NN/connectionist work, so I wonder whether _explanations_ using NN/C
have become more evident in the last year or two.  If not, I'm not
interested, and assume that the "old" paradigm will remain quite healthy in
the sciences at least.

(Just my underinformed opinions, obviously not necessarily those of my
colleagues!)

David Mark
dmark@cs.buffalo.edu

------------------------------

Subject: Re: Connectionism, a paradigm shift?
From:    holt@cs.AthabascaU.CA (Peter Holt)
Organization: Athabasca U, Alberta, Canada
Date:    Mon, 07 Aug 89 14:46:04 +0000 

I would say that is a fairly zealous statement.

Personally, I have not yet decided which paradigm is better for which tasks,
but let's remember that there may be only a superficial resemblance between
the operations of the brain and current neural net technology!  A lot more
things are happening in the brain (especially chemically and at the
intraneuron level) than in neural nets.  It may even be a coincidence that
some of the functionality of neural nets approximates some of the very
basic perceptual-cognitive functions of the brain.  Some of the other
functionality of neural nets (extracting eigenvectors?) would not seem to
match the way humans do the same things at all.

------------------------------

Subject: Re: Connectionism, a paradigm shift?
From:    andrew@berlioz (Lord Snooty @ The Giant Poisoned Electric Head )
Organization: National Semiconductor, Santa Clara
Date:    Mon, 07 Aug 89 19:59:55 +0000 

In article <705@aurora.AthabascaU.CA>, holt@cs.AthabascaU.CA (Peter Holt) writes:
> ..but lets remember that there may only be a superficial resemblance 
> between the operations of the brain and current neural net technology!

Let's talk about "resemblance", then.  "Resemblance" is a strong suit for
nets in the connectionism vs. serial symbolic systems debate, and yet you
use it for critique!  When PROLOG executes a branch instruction in the ALU
of the SPARC chip, where is the resemblance to the brain?
 
...........................................................................
Andrew Palfreyman	There's a good time coming, be it ever so far away,
andrew@berlioz.nsc.com	That's what I says to myself, says I, 
time sucks					   jolly good luck, hooray!

------------------------------

Subject: Re: Connectionism, a paradigm shift?
From:    jps@cat.cmu.edu (James Salsman)
Organization: Carnegie Mellon
Date:    Tue, 08 Aug 89 03:15:17 +0000 


> When PROLOG executes a branch instruction in the ALU of the SPARC chip,
> where is the resemblance to the brain?

It depends on the rest of the SPARC system's state.

If you have a formal description of a data structure and an algorithm, then
you have a program.  Using a technique called "programming," one may map
these descriptions onto different kinds of computer systems.  The neural
net of the brain is one kind of system, and a SPARC system is something
else entirely.  The only reason that they can't be executing the same
program is that the I/O systems are very different.


:James P. Salsman (jps@CAT.CMU.EDU)

------------------------------

Subject: Re: Connectionism, a paradigm shift?
From:    coggins@coggins.cs.unc.edu (Dr. James Coggins)
Organization: University Of North Carolina, Chapel Hill
Date:    Sun, 13 Aug 89 14:34:01 +0000 

[[ Editor's Note: the beginning of this message has been cropped.  -PM ]]

>There is a tremendous amount of high-quality work going on, bolstered by
>the application of formal mathematical techniques.

I'm afraid that the theoretical foundation you appreciate is actually
inherited (or bastardized, depending on your point of view) from the
statistical pattern recognition studies of ten to twenty years ago.  Sure
there is a theory base, but it's ready-made, much of it not arising
inherently from NNs (but being REdiscovered there).

"...only be sure please always to call it RESEARCH!"
                   from Lobachevsky by Tom Lehrer

I have been impressed with the confirmation provided by this newsgroup that
the majority of researchers in this area really are disgusted at the
publicity-mongering, money-grubbing approach of too many well-placed (and
well-heeled) labs, researchers, writers, companies, seminar sellers, and the
like.  NNs might become a significant contribution making possible highly
parallel implementations of many kinds of processes if the science fiction
futurist brain-theory dabblers would shut up and let the real researchers
develop the field in a careful, disciplined way, without having to run
interference against massively inflated expectations of the work.

A few months ago I posted to comp.ai.neural-nets the document reproduced
below.  I guess it was too hot for the newsgroup, but I did receive 13
e-mail replies: 8 firmly supportive, 4 asking for more pointers to
statistical pattern recognition which I gladly supplied (But note: Is the
scholarship in the NN field really so weak that NN researchers are unaware
of twenty years of research in statistical pattern recognition? The evidence
says yes!), and one sharply critical but easy to refute (a True Believer who
went down in flames).

I posted the document below in the spirit of my other "Outrageous Discussion
Papers" that I have been circulating to carefully selected audiences to
provoke thought and comment and encourage skepticism.  I have one flaming
the use of rule-based expert systems in medical applications, one arguing
that edges are an inadequate foundation for vision, one arguing that
automatic identification of organs in CT scans is an unworthy task of little
practical value, one that is a manifesto for my approach to computer vision
research, and the neural net one below.  If you are interested, e-mail me,
but I'm leaving now for a three-week vacation, so don't expect my usual
rapid response.

 ---------------------------------------------
 My assessment of the neural net area is as follows: (consider these Six
Theses nailed to the church door)

1. NNs are a parallel implementation technique that shows promise for making
perceptual processes run in real time.

2. There is nothing in the NN work that is fundamentally new except as a
fast implementation.  Their ability to learn incrementally from a series of
samples is nice but not new.  The way they learn and make decisions is
decades old: it first arose in communication theory and was further
developed in statistical pattern recognition.  (A minimal illustration of
this overlap appears after these theses.)

3. The claims that NNs are fundamentally new are founded on ignorance of
statistical pattern recognition, or on simplistic views of its nature.  I
have heard supposedly competent people working in NNs claim that
statistical pattern recognition is based on assumptions of Gaussian
distributions which are not required in NNs, and that NNs are therefore
fundamentally different.  This is ridiculous.  Statistical pattern
recognition is not bound to Gaussians, and NNs do, most assuredly,
incorporate distributional assumptions in their decision criteria.

4. A more cynical view, which I do not fully embrace, says that the main
function of "Neural Networks" is as a label for money.  It is a flag you
wave to attract money dispensed by people who are interested in the
engineering of real-time perceptual processing and who are ignorant of
statistical pattern recognition, and therefore of the lack of substance of
the neural net field.

5. Neural nets raise lots of engineering questions but little science.  Much
of the excitement they have raised is based on uncritical acceptance of
"neat" demos and ignorance. As such, the area resembles a religion more than
a science.

6. The "popularity" of neural net research is a consequence of the miserable
mathematical backgrounds of computer science students (and some
professors!).  You don't need to know any math to be a hacker, but you have
to know math and statistics to work in statistical pattern recognition.
Thus, generations of computer science students are susceptible to
hoodwinking by neat demos based on simple mathematical and statistical
techniques that incorporate some engineering hacks that can be tweaked
forever.  They'll think they are accomplishing something by their endless
tweaking because they don't know enough math and statistics to tell what's
really going on.
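
[[ Editor's Note: one concrete instance of the overlap asserted in theses
2 and 3: a single sigmoid unit trained by gradient descent is exactly
logistic regression, a classical statistical discriminant method.  A
minimal sketch (Python with numpy), illustrative only:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_unit(X, t, lr=0.1, epochs=1000):
        """A single "neuron" with sigmoid output, trained by gradient
        descent on cross-entropy error over inputs X (one row per
        sample) and 0/1 targets t.  The learned decision rule,
        w.x + b > 0, is the classical logistic-regression discriminant."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            y = sigmoid(X @ w + b)       # unit activations
            err = y - t                  # delta-rule error term
            w -= lr * X.T @ err / len(t)
            b -= lr * err.mean()
        return w, b

-PM ]]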

Dr. James M. Coggins          coggins@cs.unc.edu
Computer Science Department   A neuromorphic minimum distance classifier!
UNC-Chapel Hill               Big freaking hairy deal.
Chapel Hill, NC 27599-3175                -Garfield the Cat
and NASA Center of Excellence in Space Data and Information Science
 

------------------------------

Subject: Re: Connectionism, a paradigm shift?
From:    lee@uhccux.uhcc.hawaii.edu (Greg Lee)
Organization: University of Hawaii
Date:    Sun, 13 Aug 89 20:18:55 +0000 


>5. Neural nets raise lots of engineering questions but little science.

Judging from popular accounts, and as an outsider to the field, this is the
impression I get -- that NNs are an attempt to do technology without
science.  I have seen what I take to be kindred approaches in my own field,
linguistics.  The idea seems to be that one can escape the necessity to
achieve an understanding of human perception and leave that to a machine (or
algorithm, rather).  Since scientific understanding (new and old) is so
difficult to come by, it's a very seductive idea.  But not a reasonable one.
				Greg, lee@uhccux.uhcc.hawaii.edu

------------------------------

Subject: Re: Connectionism, a paradigm shift?
From:    bph@buengc.BU.EDU (Blair P. Houghton)
Organization: Boston Univ. Col. of Eng.
Date:    Mon, 14 Aug 89 01:02:33 +0000 

>
>>5. Neural nets raise lots of engineering questions but little science.

Eh?  Science has always been the forming of models and the fitting of them
to observed phenomena.  In the case of artificial neural systems, the
models are physical entities (neuromimes, simulations of neuromimes,
simulations of behavioral models of neuromimes and of elements composed of
neuromimes, etc.) rather than tautologies (laws, theorems, etc.).  The fit
is a behavioral one, as with every theory, until a new, deeper observation
is made of the behavior, or until we are prepared to discard degenerative
assumptions that limit our study of currently observed behavior.

>The idea seems to be that one can escape the
>necessity to achieve an understanding of human perception and leave that
>to a machine (or algorithm, rather).  Since scientific understanding
>(new and old) is so difficult to come by, it's a very seductive idea.
>But not a reasonable one.

I seem to remember having this same conversation before...anyway:

Doing neural nets this way is akin to allowing probability to be a
mathematical field, and to statistical mechanics and quantum theory.

The understanding has lagged behind the techniques in those areas,
consciously so, ever since the techniques were first found to be superior
to the understanding in predictive power.

				--Blair
				  "It's quite reasonable.
				   It's quite reasonable to assume
				   that my thesis won't be half this
				   erudite."

------------------------------

Subject: Re: Connectionism, a paradigm shift?
From:    ari@kolmogorov.physics.uiuc.edu
Date:    Tue, 15 Aug 89 01:16:00 +0000 


Much of the hype surrounding Neural Networks sounds much like the hype in
the study of Chaos.  One author of a popular book on Chaos claims a
paradigm shift in physical science, going even so far as to claim that the
20th century will be remembered for the theory of General Relativity,
Quantum Mechanics, and the theory of Chaos!

One difficulty in the field of Chaos is the mixing of hype with solid
theoretical and conceptual advances.  Chaos is a broad title given to a
large class of ideas and observed (usually computationally) phenomena as
well as some theory.  It is much more a collection of bits and pieces and
tantalizing glimpses than a cohesive theory.

One posting claims that: "Doing neural nets this way is akin to allowing
probability to be a mathematical field, and to statistical mechanics and
quantum theory."

This seems to imply that the fields of probability, statistical mechanics
(my own field), and quantum theory are in some sense less precise versions
of some other field or fields, which simply simulate rather than theorize.

These views seem wrong to me, and certainly the bulk of NN research appears
to be much less about theory and much more about description and
simulation.

This is all very well and good, and is much more akin to Monte Carlo Ising
spin simulations in statistical physics.  However, such simulations are not
the bulk of statistical physics.

The current legacy of Chaos theory is a more descriptive rather than
theoretical understanding of chaotic phenomena.  Of course solid work has
been done, but a lot of pretty pictures have made more than the fair share
of impact.

I believe it is important to any field to understand the differences between
observing, describing, classifying and understanding phenomena.  One should
not claim the last simply from the first.


Aritomo Shinozaki c/o Physical Theory Group	ari@kolmogorov.physics.uiuc.edu
Beckman Institute
University of Illinois, Urbana-Champaign
Urbana IL, 61801
     

------------------------------

Subject: Re: Connectionism, a paradigm shift?
From:    jtn@potomac.ads.com (John T. Nelson)
Organization: Advanced Decision Systems, Arlington VA
Date:    Tue, 15 Aug 89 14:33:05 +0000 

> 6. The "popularity" of neural net research is a consequence of the
> miserable mathematical backgrounds of computer science students (and
> some professors!)....

A sweeping generalization.  Computer scientists aren't the only ones working
on neural networks and not all computer scientists are "student hackers."  I
wish people would stop confusing "programming" activities with thinking and
research activities.  They are distinctly different.  One is engineering and
the other is not.  There are computer scientists who approach problems as
theoreticians and there are computer scientists who approach problems with
ad hoc solutions in mind.

However...... (time to get up on my soapbox oh boy!)....

In my opinion we don't have a deep macroscopic understanding of what neural
nets are capable of doing, or are actually doing, even in the simplest
networks.  Researchers are spending a lot of time and effort on the
optimization of particular techniques (e.g. backpropagation) and too little
time on developing formalisms for describing and understanding NNs as a
whole.

A deep understanding of any complex paradigm will be reached only through
the efforts of many researchers tackling the problem from different
viewpoints (like multiple sculptors chipping away at a block of marble to
reveal the statue hidden inside).  It's fairly useless for all of these
metaphorical artists to chip away at the big toe all at once; yet they must
also possess the same overall goal and understanding of the problem,
otherwise the final piece will not be consistent and balanced.

Well you get the idea.

------------------------------

Subject: Paradigm Shift Response (sort of)
From:    worden@ut-emx.UUCP (worden)
Organization: The University of Texas at Austin, Austin, Texas
Date:    Fri, 11 Aug 89 09:25:29 +0000 


It seems to me that most NN folks are doing their honest best with what
little we know now.  (And a thousand curses on those few but vociferous
money-sniffer dilettantes!!)

As I understand it, our sensory and motor systems are highly structured,
from the peripheral nerves to at least several cortical layer depths.
Beyond that, through the association areas and into the deeper structures of
the limbic system, no one really knows what the h--l is going on.

So, it doesn't surprise me that most NN folks work with the structured
networks.  After all, there can thereby be hope that one's model will be
biologically verified.  And, such work is not without merit; there remains a
great deal to be understood, even in the sensory/motor systems about which
we know the most.

My personal preference, however, is for the random type of networks.  Not as
sensory/motor systems, but as possible models of the deeper systems.  I have
a nice micrograph, from the old 1979 Scientific American special issue on
the brain, that shows a tangled mass of stained brain tissue.  Apparent
randomness, at least, does seem to coexist with structure inside our skulls!

What I would really like to see, though, if it is not too premature, is
collaboration between you majority structure enthusiasts and us minority
randomness aficionados, along with some A.I. folks, to seriously attempt to
build a "complete" system.  My thought would be to use structured NN's for
sensor input/processing and low-level learning, feeding into random NN's
for multisensor fusion and mid-level learning, feeding into an A.I.
subsystem for high-level learning and decision-making, feeding into random
NN's for multi-effector fission, feeding into structured NN's for
pre-effector conditioning and effector output.  (The dataflow is sketched
below.)
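
[[ Editor's Note: a skeleton (Python with numpy) of the dataflow Sue
proposes.  Every stage below is a placeholder stub with a hypothetical
name, included purely to show how the pieces would chain together.

    import numpy as np

    # Placeholder stages (hypothetical); real subsystems would go here.
    structured_nn     = lambda x: np.tanh(x)             # sensor preprocessing
    random_nn_fuse    = lambda xs: np.mean(xs, axis=0)   # multisensor fusion
    ai_subsystem      = lambda x: (x > 0).astype(float)  # high-level decisions
    random_nn_fission = lambda d: [d, d]                 # split across effectors
    structured_nn_out = lambda c: np.tanh(c)             # pre-effector conditioning

    def hybrid_step(sensor_inputs):
        """One pass through the proposed hybrid system."""
        features = [structured_nn(s) for s in sensor_inputs]
        fused    = random_nn_fuse(features)
        decision = ai_subsystem(fused)
        commands = random_nn_fission(decision)
        return [structured_nn_out(c) for c in commands]

-PM ]]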

By the way, if any of you have any references to recent collaborative work
between structured NN and A.I. folks, I would be very interested in getting
them.  Please email the info to me (or to this newsgroup).

Finally, I believe that all of us are lacking critical, fundamental
knowledge of some kind about how our brains work, and that it is this
deficiency that now prevents us from building systems that behave the way
we would really like (i.e., in a "truly intelligent" fashion).  I just
cannot buy the arguments that greater size or greater speed or greater
complexity or even greater biological realism is the "answer".  I do
believe that part of the answer lies in building hybrid systems, but I
think that there is a deeper mystery.  Perhaps some cellular function that
has yet to be observed and/or understood.  Perhaps an interaction between
neurons and glia, as suggested by that recent Scientific American article.
Perhaps some phenomenon that we don't even suspect at this point...

 - Sue Worden
  Electrical and Computer Engineering
  University of Texas at Austin

------------------------------

Subject: Re: Paradigm Shift Response (sort of)
From:    Joe Keane <jk3k+@ANDREW.CMU.EDU>
Organization: Mathematics, Carnegie Mellon, Pittsburgh, PA
Date:    13 Aug 89 02:41:18 +0000 

In article <16946@ut-emx.UUCP> worden@ut-emx.UUCP (worden) writes:
>As I understand it, our sensory and motor systems are highly
>structured, from the peripheral nerves to at least several
>cortical layer depths.  Beyond that, through the association
>areas and into the deeper structures of the limbic system,
>no one really knows what the h--l is going on.

Not yet at least.

>Apparent randomness, at least, does seem to coexist with
>structure inside our skulls!

If you looked at a microprocessor chip you might say the same thing.  I
don't think biological neural nets are as structured as silicon chips, or
we might be looking for `grandmother cells'.  But I don't think they're
completely random either.  It's up to NN people and neurobiologists to
figure out which structures are useful.

------------------------------

End of Neuron Digest
*********************