[comp.ai.neural-nets] What good are neural nets?

abbott@aerospace.aero.org (Russell J. Abbott) (03/15/90)

Is there a good characterization of the kinds of problems for which
neural nets are better than more traditional computational systems?
More specifically:

1) Is there a recognized (or even suggested) set of criteria in terms of
which one typically compares NN solutions to problems with more
traditional computational solutions?  Two possible criteria I can think
of are ease of development, e.g., training vs. programming, and speed of
execution once a system is developed.

2) Is there a characterization of a problem domain in which neural nets
are superior under any such criteria?


A related but somewhat different question:  to what extent are neural
nets equivalent to statistical classification algorithms?   That is, are
there neural nets that cannot be understood as instantiations of some
statistical classification algorithm?   In asking that question I want
to restrict the discussion to just the neural net part of a system and
not include a larger system that includes a neural net as a component.


What both of these questions are really getting at is the following.  If
a system designer wants to think of neural nets as one element in his
bag of system design tricks, what sort of function(s) should he think of
them as potentially capable of performing?
-- 
-- Russ abbott@itro3.aero.org

ted@nmsu.edu (Ted Dunning) (03/22/90)

In article <68764@aerospace.AERO.ORG> abbott@aerospace.aero.org (Russell J. Abbott) writes:


   Is there a good characterization of the kinds of problems for which
   neural nets are better than more traditional computational systems?

yes.


none.

donw@zehntel.zehntel.com (Don White) (03/22/90)

In article <68764@aerospace.AERO.ORG> abbott@aero.UUCP (Russell J. Abbott) writes:
>
>Is there a good characterization of the kinds of problems for which
>neural nets are better than more traditional computational systems?
>More specifically:
>
     Yes, any system for which there may be more than one right answer.
     Or any poorly defined problem.  (Almost the same thing.)

>1) Is there a recognized (or even suggested) set of criteria in terms of
>which one typically compares NN solutions to problems with more
>traditional computational solutions?  Two possible criteria I can think
>of are ease of development, e.g., training vs. programming, and speed of
>execution once a system is developed.
>
>2) Is there a characterization of a problem domain in which neural nets
>are superior under any such criteria?
>
 
>-- Russ abbott@itro3.aero.org

     A neural net appears to me to be a constrained chaotic system. (As is
  the human mind.) The constraints cause a predisposition to come up with
  a RIGHT answer. The chaotic aspect CAN result in a wrong answer BUT it can
  also result in an unexpected/unplanned answer. This is the key to creativity.

     I wonder how one would go about quantifying the fractional dimension of
  a neural net. Hmmmm.

     Don White
     Box 271177 Concord, CA. 94527-1177
     zehntel!donw
    

shankar@rnd.GBA.NYU.EDU (Shankar Bhattachary) (03/22/90)

In article <TED.90Mar21113353@kythera.nmsu.edu> ted@nmsu.edu (Ted Dunning) writes:
>
>In article <68764@aerospace.AERO.ORG> abbott@aerospace.aero.org (Russell J. Abbott) writes:
>
>
>   Is there a good characterization of the kinds of problems for which
>   neural nets are better than more traditional computational systems?
>
>yes.
>
>
>none.

May I request that if Ted Dunning has good reasons for his opinion, he
elaborate on the "none"? If neural nets are indeed no more effective under
any circumstances than are more conventional methods, many of us could save
ourselves a lot of trouble.

This is a serious request. Many people feel as Ted does, and I think
the argument deserves more than just a "none".

I have just begun looking at this area, and am at present inclined to
believe that neural nets are different in some way, although I am not sure
just how. With time, I expect I will make my mind up on the subject.
Meanwhile, I am interested in opinions from those who have more of a base
to build their opinions on.

--------------------------------------------------------------------------
Shankar Bhattacharyya, Information Systems, New York University
shankar@rnd.gba.nyu.edu
--------------------------------------------------------------------------

ted@nmsu.edu (Ted Dunning) (03/22/90)

In article <2355@rnd.GBA.NYU.EDU> shankar@rnd.GBA.NYU.EDU (Shankar Bhattachary) writes:

   In article <TED.90Mar21113353@kythera.nmsu.edu> ted@nmsu.edu (Ted Dunning) writes:

   >none.

   May I request that if Ted Dunning has good reasons for his opinion,
   he elaborate on the "none"?

strictly speaking, "none" should be the default opinion, with the
burden of proof being on the people who claim that neural nets
actually are solving problems better than conventional approaches.

but in particular, if you take a few of the prototypical claims from
the neural net community, you find that they just don't add anything
new to the solution of particular problems, only that they add
something new to the collection of things that neural nets `kind of'
do.  the claim that the neural nets solve these problems with less a
priori structure than conventional approaches is completely specious
due to the amount of tweaking needed to get any sort of success.

a few classic examples include sejnowski's over-celebrated net-talk,
the learning of the xor function, and lapedes' work with dna.  net-talk
does not work nearly as well as a handcrafted text to speech system
such as dectalk, nor does it work as well, learn as fast, or run as
fast as a non-linear interpolation method such as that used by doyne
farmer.  learning the xor function (or any logical function) is better
done using something like the genetic algorithm on a population of
state machines, and alan lapedes' work on predicting whether short
base pair sequences code for particular proteins works better if you
forget the neural net mumbo-jumbo and just do the math of a non-linear
interpolation.
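
as a toy illustration of the genetic algorithm route (my own deliberately
dumbed-down sketch -- it evolves bare truth tables rather than the state
machines real work in this vein uses):

    import random
    random.seed(0)

    TARGET = [0, 1, 1, 0]      # truth table of XOR over inputs 00, 01, 10, 11

    def fitness(genome):
        # number of truth-table entries that agree with XOR
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.25):
        # flip each bit with probability `rate`
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(4)] for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == 4:
            break
        parents = population[:10]                 # truncation selection
        population = parents + [mutate(random.choice(parents)) for _ in range(10)]

    print(generation, population[0])              # finds [0, 1, 1, 0] almost at once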

   If neural nets are indeed no more effective under any circumstances
   than are more conventional methods, many of us could save ourselves
   a lot of trouble.

good point!!

   This is a serious request. Many people feel as Ted does, and I
   think the argument deserves more than just a "none".

actually i think that the argument really deserves some examples.  why
is it that we should _presume_ that neural nets do magic just because
it says so in the latest survey in BYTE or AI magazine?


so let us turn this challenge back to the normal course in scientific
discourse, and ask if there is anything that neural nets actually do
better than conventional approaches.

bill@boulder.Colorado.EDU (03/22/90)

In article <2355@rnd.GBA.NYU.EDU> shankar@rnd.GBA.NYU.EDU 
(Shankar Bhattachary) writes:
>
>   If neural nets are indeed no more effective under any circumstances
>   than are more conventional methods, many of us could save ourselves
>   a lot of trouble.

  Well, a housefly, controlled by a neural net with computational power
similar to an 80386, solves robotics problems no conventional method
can begin to attack.  We don't understand how it does it, but it's way
too early to despair.

  One of the fundamental principles of creative problem solving is not
to be too critical of a newborn idea.  Give it a chance to grow for a
while, then decide.

	-- Bill Skaggs

kpfleger@phoenix.Princeton.EDU (Karl Robert Pfleger) (03/22/90)

In article <TED.90Mar21175729@kythera.nmsu.edu> ted@nmsu.edu (Ted Dunning) writes:
>
>but in particular, if you take a few of the prototypical claims from
>the neural net community, you find that they just don't add anything
>new to the solution of particular problems, only that they add
>something new to the collection of things that neural nets `kind of'
>do.  the claim that the neural nets solve these problems with less a
>priori structure than conventional approaches is completely specious
>due to the amount of tweaking needed to get any sort of success.

I think part of the point of neural nets is not that they add anything
new to the solution of any _particular_ problem, but that because they
_do_ solve problems with less a priori structure they are more general.
The same neural net can learn to solve many different problems, and
possibly to solve more than one problem concurrently. So, even if
classical system A solves problem X better than any NN and system B
solves Y better than any NN, a sufficiently large NN may be trainable to
solve either X or Y, or possibly both.

Part of the problem as I see it with the NNs I've heard about is that
they are too small to be very general. The larger the NN, the more
general it will be.

One can argue that it is easy to put the two classical systems
together, making a third classical system which solves both X and Y
better than an NN, but the problem with this approach is that it isn't
easy to do this for every problem needed to get an intelligent machine.
The NN approach, by contrast, generalizes automatically, without the
need to create lots of different solutions.

-Karl		kpfleger@phoenix.princeton.edu
		kpfleger@pucc	(bitnet)

muttiah@cs.purdue.EDU (Ranjan Samuel Muttiah) (03/22/90)

In article <18697@boulder.Colorado.EDU> bill@synapse.Colorado.EDU (Bill Skaggs) writes:
>In article <2355@rnd.GBA.NYU.EDU> shankar@rnd.GBA.NYU.EDU 
>(Shankar Bhattachary) writes:
>>
>>   If neural nets are indeed no more effective under any circumstances
>>   than are more conventional methods, many of us could save ourselves
>>   a lot of trouble.
>
>but it's way too early to despair.

Like Bill says, it's way too early to throw our arms up in the air.
For my two cents' worth, I think part of the problem may be the small
number of applications being explored.  Everyone seems to want to
develop his own computational model of the brain (not too difficult,
once you have the math going) rather than evaluate the models already
in existence.  Let's beat BP to death!  :-)

On a related note, has anybody looked into the Linsker net model ?
(please email if you have).

rr2p+@andrew.cmu.edu (Richard Dale Romero) (03/22/90)

I think Ted is ignoring some very important aspects of the neural network.
It seems that we will be looking more and more towards parallel processing
in order to increase our computing power.  But, solving problems on a parallel
machine leads to really *big* complications in how to structure the program.
Simulating a neural network, though, is something a parallel machine is
beautifully suited for.  With more computing power, we can begin
to solve more types of problems that would have previously taken much too
long to do on today's von Neumann computers.

-Rick

ted@nmsu.edu (Ted Dunning) (03/22/90)

In article <14746@phoenix.Princeton.EDU> kpfleger@phoenix.Princeton.EDU (Karl Robert Pfleger) writes:

   I think part of the point of neural nets is not that they add anything
   new to the solution of any _particular_ problem, but that because they
   _do_ solve problems with less a priori structure they are more
   general.

this looks right at first, but in fact, there is a large amount of
tweaking which makes this claim of little a priori structure much less
compelling. 

   The same neural net can learn to solve many different problems, and
   possibly to solve more than one problem concurrently. So, even if
   classical system A solves problem X better than any NN and system B
   solves Y better than any NN, a sufficiently large NN may be trainable to
   solve either X or Y, or possibly both.

   Part of the problem as I see it with the NNs I've heard about is that
   they are too small to be very general. The larger the NN, the more
   general it will be.

true.

unfortunately, in most of the networks exhibited so far, the scaling
of the size of the neural net or the accuracy required is prohibitive.

   Whereas, the NN approach generalizes automatically, without the need to
   create lots of different solutions.

this is the claim, but where are the exemplars?

ted@nmsu.edu (Ted Dunning) (03/22/90)

In article <18697@boulder.Colorado.EDU> bill@boulder.Colorado.EDU writes:

     Well, a housefly, controlled by a neural net with computational power
   similar to an 80386, solves robotics problems no conventional method
   can begin to attack.  We don't understand how it does it, but it's way
   too early to despair.

ahhh.... when pressed, change the subject.

the biological plausibility of artificial neural nets is essentially
nil.  a linear sum followed by a soft limiter is nothing like what a
neuron does.  it may be that the collective behavior of all sorts of
different neurons will converge, but there is no indication yet that
neural nets in the popular style will do this.
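
for reference, the computation i mean by `a linear sum followed by a soft
limiter' is roughly the following (a generic sketch with made-up numbers,
not anyone's actual code):

    import math

    def neuron(inputs, weights, bias):
        s = sum(x * w for x, w in zip(inputs, weights)) + bias   # linear sum
        return 1.0 / (1.0 + math.exp(-s))                        # soft limiter (logistic)

    # made-up inputs, weights, and bias, for illustration only
    print(neuron([0.5, -1.0, 2.0], [0.8, 0.3, -0.6], 0.1))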

     One of the fundamental principles of creative problem solving is not
   to be too critical of a newborn idea.  Give it a chance to grow for a
   while, then decide.

yes, indeed.  but let us not over-hype the idea to the point that
people will be so incredibly disillusioned that they won't work with
it for decades.  remember the history of neural nets (aka
perceptrons). 

tedwards@nrl-cmf.UUCP (Thomas Edwards) (03/23/90)

In article <TED.90Mar22085433@kythera.nmsu.edu> ted@nmsu.edu (Ted Dunning) writes:
>unfortunately, in most of the networks exhibited so far, the scaling
>of the size of the neural net or the accuracy required is prohibitive.

I'll be the first one to admit that backpropagation learning can be truly tedious, and
using it on anything but the most toy problems will definitely leave one with a
bad taste in the mouth for neural networks.

However, researchers are realizing that there are major problems with backpropagation:

1) fixed step size--vanilla backprop does not include much in the way of higher order
                    derivatives to work out how far it should step along the error
                    surface each iteration.  Momentum, though useful, requires much
                    tweaking for each problem.  Methods which involve higher order
                    derivatives (such as Quickprop (Fahlman, 1988) or conjugate
                    gradient methods) provide up to an order of magnitude decrease in
                    learning time.
2) moving targets---if, to solve a problem, the network must evolve into two groups of
                    neurons solving inter-related problems, then when one group of
                    neurons changes significantly, the other group must change to
                    continue to "work" effectively with the first.  Also, if the
                    network as a whole first works on one subproblem, it might
                    forget how to solve that subproblem when it begins to solve
                    a second.

  Anyway, backprop is not the only model available to researchers.  I encourage programmers
to look at conjugate-gradient, quickprop, and cascade-correlation.  Cascade-correlation
(Fahlman, 1990) has solved the two intertwined spirals problem in 1700 training epochs
(which are faster than backprop epochs), compared to 20,000 backprop epochs with a
2-5-5-5-1 network (with "short-cuts") (Lang, 1988).
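
To make the step-size discussion concrete, here is a rough one-weight sketch
(my own toy code, not taken from any of the papers below) of a plain gradient
step, a momentum step, and the central Quickprop heuristic; the real Quickprop
algorithm has additional cases and terms that I omit here:

    # toy error surface E(w) = (w - 3)^2, so dE/dw = 2(w - 3); minimum at w = 3
    def grad(w):
        return 2.0 * (w - 3.0)

    def vanilla_step(w, lr=0.1):
        # fixed step size: always lr times the gradient, however flat or steep
        return w - lr * grad(w)

    def momentum_step(w, prev_dw, lr=0.1, alpha=0.9):
        # momentum: re-use a fraction of the previous step
        dw = -lr * grad(w) + alpha * prev_dw
        return w + dw, dw

    def quickprop_step(w, prev_dw, prev_g, mu=1.75):
        # central Quickprop heuristic (Fahlman, 1988): fit a parabola through
        # the current and previous gradients and jump toward its minimum,
        # with a growth limit mu
        g = grad(w)
        if prev_dw == 0.0 or prev_g == g:
            dw = -0.1 * g                      # fall back to a plain gradient step
        else:
            dw = g / (prev_g - g) * prev_dw    # parabola jump
            limit = mu * abs(prev_dw)
            dw = max(-limit, min(limit, dw))
        return w + dw, dw, g

    w, dw, g = 0.0, 0.0, 0.0
    for _ in range(5):
        w, dw, g = quickprop_step(w, dw, g)
    print(w)            # reaches 3.0 in a handful of steps

    w2 = 0.0
    for _ in range(5):
        w2 = vanilla_step(w2)
    print(w2)           # still well short of 3.0 after the same number of steps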

 Fahlman, S.E. (1988) "Faster-Learning Variations on Back-Propagation: An Empirical
                       Study" in _Proceedings_of_the_1988_Connectionist_Models_Summer_
                       School_, Morgan Kaufmann.
 Fahlman, S.E., and Lebiere, C. (1990) "The Cascade-Correlation Learning Architecture"
                       Carnegie Mellon.
 Lang, K., and Witbrock, M. (1988) "Learning to Tell Two Spirals Apart" in _Proceedings_of_
                       the_1988_Connectionist_Models_Summer_School_, Morgan Kaufmann.

-Thomas Edwards

slehar@bucasd.bu.edu (Lehar) (03/23/90)

	**** WHAT GOOD ARE NEURAL NETS? ****

There are generally two reasons for studying neural nets: 1) to better
understand how natural systems compute, and 2) to perhaps use this
knowledge to enhance our own computational techniques.

At the present time, our knowledge of natural systems is so rudimentary
that our progress in emulating them is somewhat limited.  We have
however discovered several fundamental principles of natural
computation which are either being currently used to advantage, or
show promise of future utility.  Some of these principles are
summarized below.

  1	That computation can be performed by numerous simple
	asynchronous analog processors which are richly interconnected,
	as an alternative to the more traditional approach of fewer,
	more complex time-locked digital processors which communicate
	using highly encoded protocols.

  2	That an advantage of this approach is greater fault tolerance
	to the loss of individual components or data corruption.  (See
	the small sketch after this list.)

  3	That such systems can be designed to program themselves, thus
	eliminating a highly skilled and time-consuming programming
	task.  I.e. the problem domain does not have to be explicitly
	understood and modelled in order to build a system to work
	with it.

  4	That such systems are potentially much easier to connect
	together because of the simple nature of the signals between
	units.  (You can insert an electrode into many points in the
	brain and elicit simple responses - hunger, fear, motor or
	sensory response, etc.  A similar intrusion into the complex
	components of a computer is likely to elicit nothing more than
	a crash)

  5	That certain problem domains are best solved by a parallel
	distributed approach instead of a sequential analytical
	approach.  Generally, if there is uncertainty in the data,
	then all interim decisions are suspect, and alternative
	choices should continue to be considered until the final
	decision is made.  When the data is more precise and reliable,
	interim results can be trusted, and alternatives can be safely
	discarded as in digital computation.
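
As a trivially small numerical illustration of point 2 (a made-up sketch, not
a model of any real network): a value stored as the average activity of many
redundant noisy units degrades gracefully when units are lost, whereas a
value stored in a single register is simply gone.

    import random
    random.seed(1)

    value = 0.7
    # distributed representation: 100 noisy analog units all coding the same value
    units = [value + random.gauss(0.0, 0.05) for _ in range(100)]

    survivors = random.sample(units, 60)       # knock out 40% of the units
    print(sum(units) / len(units))             # close to 0.7
    print(sum(survivors) / len(survivors))     # still close to 0.7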

Given the above observations, I  cannot imagine a good  reason for NOT
studying neural nets.  Ted Dunning (ted@nmsu.edu) says:

  ------------------------------------------------------------------------
  |  the biological plausibility of artificial neural nets is essentially
  |  nil.  a linear sum followed by a soft limiter is nothing like what a
  |  neuron does.  it may be that the collective behavior of all sorts of
  |  different neurons will converge, but there is no indication yet that
  |  neural nets in the popular style will do this.
  ------------------------------------------------------------------------

Current models may not model biological neurons in all their
complexity, but would you not agree that they are much more like neural
computation than, say, a logic chip or an expert system?  Are we not
moving in the right direction?  If current models are a simplification,
do they not at least capture the essentials of some of the fundamental
aspects of neural processing (like the 5 mentioned above)?  And are
they not therefore worth investigating?  Perhaps your problem is that
you think current models are the final product, the last word on brain
modeling.

Dunning continues...
  ------------------------------------------------------------------------
  |  ...but let us not over-hype the idea to the point that
  |  people will be so incredibly disillusioned that they won't work with
  |  it for decades.  remember the history of neural nets (aka
  |  perceptrons). 
  ------------------------------------------------------------------------

Nobody disagrees with this point.  Will you point out for us what is
being said that is over-hyped?  Give us some specifics.  Is there
anything in my contentions that is over-hyped, for instance?  If we
look back at the source of all this discussion, could it not be said
perhaps that some people are UNDER-hyping neural nets...

Dunning...
  ------------------------------------------------------------------------
  |  In article <68764@aerospace.AERO.ORG> abbott@aerospace.aero.org 
  |             (Russell J. Abbott) writes:
  |  
  |  
  |     Is there a good characterization of the kinds of problems for which
  |     neural nets are better than more traditional computational systems?
  |  
  |  yes.
  |  
  |  
  |  none.  <========||
  -------------------||---------------------------------------------------
                     ||
                     ====== Surely you jest!


  
--
(O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O)
(O)((O))(((              slehar@bucasb.bu.edu              )))((O))(O)
(O)((O))(((    Steve Lehar Boston University Boston MA     )))((O))(O)
(O)((O))(((    (617) 424-7035 (H)   (617) 353-6425 (W)     )))((O))(O)
(O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O)

ted@nmsu.edu (Ted Dunning) (03/23/90)

In article <Ma2BFFu00VQDE5lGZn@andrew.cmu.edu> rr2p+@andrew.cmu.edu (Richard Dale Romero) writes:


   I think Ted is ignoring some very important aspects of the neural
   network.  ... parallel processing ... increase our computing power.
   ... neural network on a parallel machine ... beautifully suited

i think that rick is ignoring some very important aspects of the
neural network approach, namely that conventional approaches still
work much better, and that parallelization of many conventional
numerical codes is not all that difficult (for example the work at
sandia on large hydrodynamic codes).

furthermore, the non-linear interpolation work at los alamos (doyne
farmer and co.) has shown that relatively conventional approaches can
solve the same sorts of interpolation/classification problems that
neural nets solve with many orders of magnitude less machine and
training time.

why should we need to go to a 10^4 processor parallel machine just to
run a code that is suited for parallelism, if there is a serial code
that is 10^4 more efficient?

all of this is a bit off the original subject, though.

can anyone come up with an example of where neural nets work at least
as well as conventional approaches?

ted@nmsu.edu (Ted Dunning) (03/23/90)

steve lehar begins to respond, but still fails to give any examples.

In article <SLEHAR.90Mar22122404@bucasd.bu.edu> slehar@bucasd.bu.edu (Lehar) writes:


   There are generally two reasons for studying neural nets, ...

come now.... i agree that it is good to study all kinds of different
approaches to computation.  the question was are there any problems
where neural nets are superior to conventional approaches?

   We have however discovered several fundamental principles of
   natural computation which are either being currently used to
   advantage, or show promise of future utility.  Some of these
   principles are summarized below.

     1	That computation can be performed by numerous simple
	   asynchronous analog processors

we knew this by direct observation of natural systems.

     2	That an advantage of this approach is greater fault tolerance
	   to the loss of individual components or data corruption.

we knew this, too.

     3	That such systems can be designed to program themselves, thus

we even knew this,

     4	That such systems are potentially much easier to connect

and this.  an exception might be made for neural net advocates.

   Given the above observations, I cannot imagine a good reason for
   NOT studying neural nets.

come now, i never recommended that they not be studied.  only that
they are not yet competitive in _any_ area.

   Current models may not model biological neurons in all their
   complexity,

but current neural simulation models come much closer and show many
phenomena not exhibited by neural nets.

   but would you not agree that they are much more like neural
   computation than say, a logic chip or an expert system?

of course, but why make a specious comparison?

   Are we not moving in the right direction?

i can't tell.

   If current models are a simplification, do they not at least
   capture the essentials of some of the fundamental aspects of neural
   processing?

what _are_ the essentials of some of the fundamental aspects of neural
processing?

and, no i don't think that neural nets capture much of the essential
aspects of neural computation.

   And are they not therefore worth investigating?

sure, but that isn't the question here.

   Perhaps your problem is that you think current models are the final
   product, the last word on brain modeling.

i hope NOT.

   Nobody disagrees with this point.  Will you point out for us what
   is being said that is over-hyped?  Give us some specifics.

over-hyping is best recognized by results.  the clear public
perception is that neural nets can be used to solve real problems
better than conventional approaches.

this is patently wrong.

and it is prima facie evidence of overhyping.

   Is there anything in my contentions that is over-hyped for
   instance?

your contentions are carefully worded to ignore the point of the
discussion.  are there examples of problems better solved by neural
nets?

   If we look back at the source of all this discussion, could it not
   be said perhaps that some people are UNDER-hyping neural nets...

? the source of the discussion was russell abbott who merely asked a
question.

i gave a supercilious answer, and implicitly (and later explicitly)
challenged proponents to come up with counter-examples.  none have.

   Dunning...
      In article <68764@aerospace.AERO.ORG> abbott@aerospace.aero.org 
      (Russell J. Abbott) writes:
      
      Is there a good characterization of the kinds of problems for
      which neural nets are better than more traditional computational
      systems?

   yes.

   none.

        Surely you jest!

actually, no.

i haven't yet found a single example where neural nets work better
(and i have looked).

can somebody come up with an example?

steve?

surely you have a concrete example in hand if you think i was in jest.

arras@icase.edu (Michael Arras) (03/23/90)

	Here is your example:

	We have shown through computer simulations that our ANN is better
than conventional systems in correcting word errors during the decoding of
block codes.  Our soft-decision ANN outperforms standard hard-decision
decoding by two orders of magnitude (100) at a SNR of 7dB using a (15,5)
Cyclic Redundancy Code.  Our ANN will be implemented in hardware, which will
enable it to be used in real-time with high speed transmission rates.  I am
working on software to be used with the Intel Hypercube here at NASA
Langley that will allow us to investigate performance of larger block codes.
It is my guess that larger codes such as the (31,11) BCH code used with the
ANN will outperform the (2,1)M=6 convolutional code.  The (2,1) convolutional
code is about the best there is (increasing M would give a better performance,
but also increases complexity).

	'High Order Neural Models for Error Correcting Code', by C. Jeffries and
P. Protzel, has been accepted at SPIE's 1990 Technical Symposium, Orlando, FL.,
which will be held April 16-20.

Mike Arras
Institute for Computer Applications in Science and Engineering
NASA Langley Research Center

cutrell@cogsci.ucsd.EDU (Doug Cutrell) (03/23/90)

Ted Dunning has repeatedly asked for specific examples of where neural
network approaches perform better than traditional approaches.  The
following spring to mind as immediate examples:

Le Cun, Boser, Denker, Henderson, Howard, Hubbard, and Jackel of
AT&T Bell Labs recently reported achieving a 9% rejection rate for a 1%
error criterion on a U.S. Postal Service hand-written zip-code data
base, with a throughput of a dozen digits per second on a 25 MFLOP
DSP, including image acquisition and normalization.
This data set is *EXTREMELY* noisy and consists of undoctored
digitized images of zipcodes exactly as they are scribbled on real
envelopes.  (See Neural Computation 1:4, pp. 541-551.)

Gerald Tesauro's backgammon-playing network recently defeated all
other computer implementations at the First Computer Olympiad held in
London.  (See Neural Computation 1:3, pp. 321-323.)

And finally, Sejnowski's original NetTalk, while admittedly inferior
to DECTalk, did not require teams of professionals working for more
than a decade to achieve its performance!

This list is not meant to be complete.  I do not contend that the
neural network framework is essential to the above successes.  Many
approaches may be applied with similar results -- neural nets are
capable of implementing general Turing computation.  Their value comes
from the type of approaches that the neural network paradigm inspires.

Doug Cutrell
Dept. of Cog.Sci., D-015
UCSD
La Jolla, CA 92093

cutrell@cogsci.ucsd.edu   (internet)

ted@nmsu.edu (Ted Dunning) (03/23/90)

YES.  now let's hear more examples, as well as examine the existing
ones.

but first, let's attend to cases.  in particular, i refer below to the
work done by doyne farmer in cnls and t-13 at los alamos and at the
santa fe institute on non-linear interpolation using radial and other
basis functions.

the reason that this work is so pertinent here is that it performs
essentially the same sort of interpolation that multi-level neural
nets do, except that it requires very much less training, and when
implemented efficiently, it runs orders of magnitude more quickly
than normal neural net architectures.
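
to give the flavor of what these codes do, here is a bare-bones toy sketch of
generic radial basis function interpolation (my own code, certainly not
farmer's): the `training' is a single linear solve rather than thousands of
gradient-descent passes.

    import numpy as np

    def rbf_fit(x, y, width=0.5):
        # one linear solve for the weights of gaussian bumps centered on the data
        phi = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * width ** 2))
        return np.linalg.solve(phi, y)

    def rbf_predict(xq, x, w, width=0.5):
        phi = np.exp(-((xq[:, None] - x[None, :]) ** 2) / (2.0 * width ** 2))
        return phi @ w

    x = np.linspace(0.0, 2.0 * np.pi, 20)    # samples of a nonlinear function
    y = np.sin(x)
    w = rbf_fit(x, y)
    print(rbf_predict(np.array([1.0, 2.5]), x, w))   # close to sin(1.0), sin(2.5)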

these codes both refute the assertion that neural nets do things
better than conventional approaches and strongly support the assertion
that research into novel areas is important (since they were derived by
examining what a neural net really does).

In article <103@cogsci.ucsd.EDU> cutrell@cogsci.ucsd.EDU (Doug Cutrell) writes:

   The following spring to mind as immediate examples:

   Le Cun, Boser, Denker, Henderson, Howard, Hubbard, and Jackel of
   AT&T Bell Labs recently reported achieving a 9% rejection rate for a 1%
   error criterion on a U.S. Postal Service hand-written zip-code data
   base, with a throughput of a dozen digits per second on a 25 MFLOP
   DSP, including image acquisition and normalization.
   This data set is *EXTREMELY* noisy and consists of undoctored
   digitized images of zipcodes exactly as they are scribbled on real
   envelopes.  (See Neural Computation 1:4, pp. 541-551.)

this is very good.  what is the performance of conventional
approaches?  even more to the point, what would the performance of
farmer's codes be?

   Gerald Tesauro's backgammon-playing network recently defeated all
   other computer implementations at the First Computer Olympiad held in
   London.  (See Neural Computation 1:3, pp. 321-323.)

even better since this is essentially a direct competition between
approaches.

   And finally, Sejnowski's original NetTalk, while admittedly
   inferior to DECTalk, did not require teams of professionals working
   for more than a decade to achieve its performance!

this is a _very_ poor example.  other approaches have been able to
learn the training set used by sejnowski in much less time and have
been much more accurate on novel material.

ted@nmsu.edu (Ted Dunning) (03/23/90)

In article <1990Mar22.201531.8352@icase.edu> arras@icase.edu (Michael Arras) writes:

	   Here is your example:

indeed.

   We have shown through computer simulations that our ANN is better
   than conventional systems in correcting word errors during the
   decoding of block codes.

note, some conventional systems.

   Our soft-decision ANN outperforms standard hard-decision decoding
   by two orders of magnitude (100) at a SNR of 7dB using a (15,5)
   Cyclic Redundancy Code.

outperforms?  in price performance?  or only in terms of accuracy?
does the original method require specialized hardware, too?  or is the
specialized hardware only needed for high bit rates?

have you tried other non-linear interpolation techniques?

   'High Order Neural Models for Error Correcting Code', by C. Jeffries
   and P. Protzel, has been accepted at SPIE's 1990 Technical Symposium,
   Orlando, FL., which will be held April 16-20.

are preprints available?

ajr@eng.cam.ac.uk (Tony Robinson) (03/23/90)

In article <TED.90Mar21175729@kythera.nmsu.edu> ted@nmsu.edu (Ted Dunning) writes:
>so let us turn this challenge back to the normal course in scientific
>discourse, and ask if there is anything that neural nets actually do
>better than conventional approaches.

Some examples:

0) Play backgammon:  I remember reading that Tesauro's program won some
   competition in London recently.
1) Detect bombs:  At IJCNN-89 Shea and Lin presented a system for
   discriminating between suitcases with and without explosives.
2) Low level speech recognition:  There are several examples, perhaps the
   best known is Waibel with Time Delay Neural Networks (IEEE ASSP 37:3 1989).

I don't like hype either, it makes it harder to find the good work.

Tony Robinson

robert@aerospace.aero.org (Bob Statsinger) (03/23/90)

In article <Ma2BFFu00VQDE5lGZn@andrew.cmu.edu> rr2p+@andrew.cmu.edu (Richard Dale Romero) writes:
>I think Ted is ignoring some very important aspects of the neural network.
>It seems that we will be looking more and more towards parallel processing
>in order to increase our computing power.  But, solving problems on a parallel
>machine leads to really *big* complications in how to structure the program.
>Simulating a neural network, though, is something a parallel machine is
>beautifully suited for.  With more computing power, we can begin
>to solve more types of problems that would have previously taken much too
>long to do on today's von Neumann computers.
>

	Parallel processors, in and of themselves, will not let us tackle 
new kinds of problems; they will only run the "old" problems faster. 
I think the point of the above posting is that, as we simulate NN's 
on faster and faster processors, their results may become 
more and more satisfactory. At the very least the same unsatisfactory
results will arrive faster  :-)
 
	So very soon the questions we ask will
be things like: does our implementation of (backprop, or nettalk, etc)
on our Massively Parallel Processor (MPP) do anything for us that
our implementation of (genetic algorithms, dectalk, linear regression, etc)
on the MPP does not?


-- 
Bob Statsinger 				Robert@aerospace.aero.org

	The employers expressed herein are strictly mine and are
	not necessarily those of my opinion's....uh..er...whatever...

robert@aerospace.aero.org (Bob Statsinger) (03/23/90)

In article <TED.90Mar22114305@kythera.nmsu.edu> ted@nmsu.edu (Ted Dunning) writes:
>
>i haven't yet found a single example where neural nets work better
>(and i have looked).
>
>can somebody come up with an example?
>
>steve?
>
>surely you have a concrete example in hand if you think i was in jest.

	What about machine vision and invariant object recognition?

	There are at least 2 NN paradigms for machine vision which
give decent results: Widrow's ADALINEs and Malsburg's dynamic link
architecture.  Malsburg's approach has shown good results for
infra-red images of vehicles and for human faces; and image
recognition is invariant under both translation and mild distortion.

	What are some of the non-neural techniques used, and with what
results?

-- 
Bob Statsinger 				Robert@aerospace.aero.org

	The employers expressed herein are strictly mine and are
	not necessarily those of my opinion's....uh..er...whatever...

kpfleger@phoenix.Princeton.EDU (Karl Robert Pfleger) (03/23/90)

In article <79@nrl-cmf.UUCP> tedwards@cmsun.UUCP (Thomas Edwards) writes:
>In article <TED.90Mar22085433@kythera.nmsu.edu> ted@nmsu.edu (Ted Dunning) writes:
>>unfortunately, in most of the networks exhibited so far, the scaling
>>of the size of the neural net or the accuracy required is prohibitive.
>
>I'll be the first one to admit that backpropagation learning can be truly tedious, and
>using it on anything but the most toy problems will definitely leave one with a
>bad taste in the mouth for neural networks.

There is one giant problem with back-prop. In the (admittedly long term)
goal of actual artificial intelligence, back-prop will have to be
abandoned as the general method of altering the network. The reason is
that back-prop requires at all times that the 'correct' output of the
system be known, so that it can be compared with the network's own
output. This is obviously not the way any natural intelligence learns
(all of the time, anyway).

We need a method of altering the network without this problem.
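
To see why, note that the very first error term back-prop computes at the
output layer already contains the teacher's answer (a generic sketch of the
usual sigmoid-unit formula, not anyone's particular code):

    def output_delta(output, target):
        # output-layer error term for a sigmoid unit: it cannot even be
        # written down without the teacher-supplied `target`
        return (target - output) * output * (1.0 - output)

    print(output_delta(0.8, 1.0))   # teacher says "this should have been 1.0"
    # a natural learner rarely gets `target`; at best it gets a delayed,
    # scalar "that worked out well / badly" signal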

-Karl		kpfleger@phoenix.princeton.edu
		kpfleger@pucc	(bitnet)

mv10801@msi-s6 (Jonathan Marshall [Learning Center]) (03/23/90)

In article <TED.90Mar22114305@kythera.nmsu.edu> ted@nmsu.edu (Ted Dunning) writes:
>i haven't yet found a single example where neural nets work better
>(and i have looked).
>can somebody come up with an example?

The advantage of NNs is their generality.  The point is not whether
NNs outperform other approaches to specific problems.  Rather, the
main reason for using NNs is that the same basic mechanisms can work
on a variety of problems.

So what if a chess-playing AI program with 30-move lookahead could
beat Bobby Fischer?  The program wouldn't be good for much else.  At
least a human player can perform many other intelligent tasks.

So what if certain statistical methods have better accuracy than NNs?
When we find the correct NNs, they will be able to be used for many
other tasks besides predicting loan defaults or learning XOR.

Thus, your question about performance of NNs is both unfair and
irrelevant.  NNs ultimately will be designed for generality, not pure
performance.

Today's NNs, which are often applied to toy problems such as optimal
graph traversal or loan approval, are primitive and are mainly useful
only for research purposes.

o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o
o								o
o  Jonathan A. Marshall		       mv10801@uc.msc.umn.edu	o
o  Center for Research in Learning, Perception, and Cognition	o
o  205 Elliott Hall, University of Minnesota			o
o  Minneapolis, MN 55455, U.S.A.				o
o								o
o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o