[comp.ai.neural-nets] Networks for pattern recognition problems?

fozzard@boulder.Colorado.EDU (Richard Fozzard) (07/18/90)

Do you know of any references to work done using connectionist (neural)
networks for pattern recognition problems? I particularly am interested
in problems where the network was shown to outperform traditional algorithms.

I am working on a presentation to NOAA (National Oceanic and Atmospheric
Admin.) management that partially involves pattern recognition
and am trying to argue against the statement:
"...results thus far [w/ networks] have not been notably more
impressive than with more traditional pattern recognition techniques".

I have always felt that pattern recognition is one of the strengths of
connectionist network approaches over other techniques and would like
some references to back this up.

thanks much, rich
========================================================================
Richard Fozzard					"Serendipity empowers"
Univ of Colorado/CIRES/NOAA	R/E/FS  325 Broadway, Boulder, CO 80303
fozzard@boulder.colorado.edu                   (303)497-6011 or 444-3168

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (07/19/90)

In article <23586@boulder.Colorado.EDU> fozzard@boulder.Colorado.EDU (Richard Fozzard) writes:
>I am working on a presentation to NOAA (National Oceanic and Atmospheric
>Admin.) management that partially involves pattern recognition
>and am trying to argue against the statement:
>"...results thus far [w/ networks] have not been notably more
>impressive than with more traditional pattern recognition techniques".

That's a difficult statement to argue against.  I do not recall any
neural network techniques for pattern recognition which _perform_
notably better than traditional pattern recognition techniques.

From my experience, these are the real advantages of neural nets:
1) Generality.  There are many general neural network systems
    which are capable of learning almost any kind of pattern
    recognition task without requiring much specialized knowledge
    of the problem from the programmer.
2) Speed of System Development.  Generalized neural models will
    enable a user to develop a categorization system very quickly.
    For example, I spent a week training a network on a threat
    detection problem to an accuracy that had taken signal analysis
    experts months to reach (I am sure, however, that they are on
    the way to developing more accurate systems in the near future)
3) High Speed VLSI implementation.  Trained networks can be implemented
    in a highly parallel manner in VLSI.  This, however, hasn't
    been done very much.  

In the future, it would be nice to expand the above list.  But for
right now, with commercially available software, that's about 
as far as I would go.  Neural Nets are currently an excellent way
to do a "quick job" of getting a lower bound on acceptable
pattern recognition ability.  In most cases you would probably
want to start with Neural Nets, and then go beyond them with
more advanced methods.  
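
As an illustration of points (1) and (2), here is a minimal sketch (in
modern Python/NumPy, purely for illustration) of a generic two-layer
network trained by backpropagation on a toy two-class problem, the
classic XOR-of-signs task.  Nothing in it encodes problem-specific
knowledge; the architecture, learning rate, and data are arbitrary
choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pattern recognition task: class is the XOR of the coordinate signs,
# a classic problem that no linear classifier can solve.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Generic two-layer net: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

lr = 2.0
for epoch in range(5000):
    h = sigmoid(X @ W1 + b1)            # hidden activations
    out = sigmoid(h @ W2 + b2)          # network output
    # Backpropagate squared-error gradients through both layers.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
accuracy = ((out > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The same loop, unchanged, learns any small labeled pattern set; only X
and y change.  That is the generality point.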

Neural Nets are "thought savers."  They give you some very general
ability at relatively high speeds (on the order of days) without
you having to think about the problem.  They can be useful when
properly applied, and useless when improperly applied.

(I am aware of retina-like neural models which provide very
 good contrast enhancement and CCD element calibration which
 do work better than most "traditional" techniques...so
 there are some examples of neural networks being very useful.
 I am sure that as research into Neural Networks continues,
 they will become an increasingly important tool of science.)

-Thomas Edwards

mek4_ltd@uhura.cc.rochester.edu (Mark Kern) (07/19/90)

In article <5856@jhunix.HCF.JHU.EDU> ins_atge@jhunix.UUCP (Thomas G Edwards) writes:
>In article <23586@boulder.Colorado.EDU> fozzard@boulder.Colorado.EDU (Richard Fozzard) writes:
>>I am working on a presentation to NOAA (National Oceanic and Atmospheric
>>Admin.) management that partially involves pattern recognition
>>and am trying to argue against the statement:
>>"...results thus far [w/ networks] have not been notably more
>>impressive than with more traditional pattern recognition techniques".
>
>That's a difficult statement to argue against.  I do not recall any
>neural network techniques for pattern recognition which _perform_
>notably better than traditional pattern recognition techniques.
>

	I hope I did not take the quote too far out of context. I'm not
sure what the underscores around the "perform" mean.  I have often
wondered about neural-net performance over traditional pattern
classification techniques.  I seem to recall, though, that neural nets are
demonstrably better at recognizing cursive handwriting.  Can anyone
verify or refute this? If performance is supposed to mean "speed", then
one can argue that we don't have many neural-nets running in true
parallel yet to make a comparison.  I personally find it hard to believe
that traditional methods would be faster for something such as vision
processing, but I am not very familiar with neural nets.

Mark Kern


-- 
=========================================================================
   Mark Edward Kern, mek4_ltd@uhura.cc.rochester.edu  A.Online: Markus
      Quagmire Studios U.S.A. "We not only hear you, we feel you !"
=========================================================================

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (07/20/90)

In article <8462@ur-cc.UUCP> mek4_ltd@uhura.cc.rochester.edu (Mark Kern) writes:
>In article <5856@jhunix.HCF.JHU.EDU> ins_atge@jhunix.UUCP (Thomas G Edwards) writes:
>>That's a difficult statement to argue against.  I do not recall any
>>neural network techniques for pattern recognition which _perform_
>>notably better than traditional pattern recognition techniques.
>>
>
>	I hope I did not take the quote too far out of context. I'm not
>sure what the underscores around the "perform" mean.  

I was definitely being underspecific.  I meant performance with respect
to the percentage of incorrect recognitions.
Neural nets can be much faster than "traditional" methods once
learning has been completed.  But learning can often be a very
tedious and long task.  Of course, neural networks may not need
the kind of exacting tuning and expert knowledge "traditional"
techniques do.  Some neural models, however, don't necessarily
live up to the above statements.  Unless you are talking about
a particular connectionist system in a particular application,
generalities are often difficult to make.

-Thomas Edwards

scott@isles.tmc.edu (Scott Otterson x5117 ) (07/20/90)

In article <5856@jhunix.HCF.JHU.EDU> you write:

>(I am aware of retina-like neural models which provide very
> good contrast enhancement and CCD element calibration which
> do work better than most "traditional" techniques.
>
>-Thomas Edwards

Are there any published references on this?  Sounds interesting.

Scott Otterson
GE Medical Systems

forbis@milton.u.washington.edu (Gary Forbis) (07/20/90)

In article <2809@mrsvr.UUCP> scott@isles.UUCP (Scott Otterson  x5117	) writes:
>In article <5856@jhunix.HCF.JHU.EDU> you write:
>>(I am aware of retina-like neural models which provide very
>> good contrast enhancement and CCD element calibration which
>> do work better than most "traditional" techniques.

>Are there any published references on this?  Sounds interesting.

I cannot cite any particular work but I can give you a place to start.

Carver Mead gave a lecture at the UW this spring.  He was touting a switch
to analog devices for computing.  He showed pictures of images generated by
a retina simulator.  Over time it corrected for flaws in manufacturing and
defects on the lens.  One interesting side effect was the device produced
after images.

I think a perusal of recent works by this interesting man would be a good place
to start.

yinlin@kuikka.tut.fi (Lin Yin) (07/21/90)

In article <2809@mrsvr.UUCP> scott@isles.UUCP (Scott Otterson  x5117	) writes:
>In article <5856@jhunix.HCF.JHU.EDU> you write:
>
>>(I am aware of retina-like neural models which provide very
>> good contrast enhancement and CCD element calibration which
>> do work better than most "traditional" techniques.
>>
>>-Thomas Edwards
>
>Are there any published references on this?  Sounds interesting.
>
>Scott Otterson
>GE Medical Systems

If there are some published references on this, please let me know.
 
Lin Yin 

email: yinlin@tut.fi

blanz@ibm.com (Dr. Wolf-Ekkehard Blanz) (07/21/90)

Sorry, no such luck.

You can't really expect connectionist classifiers to be "better" than all
conventional classifiers.  One could always argue that, for instance, a
polynomial of arbitrarily high degree can be made at least as good as a
given net, because you can model the net's decision surface with the
polynomial.  What you really want to show is that the implementation of a
connectionist classifier might be more cost-effective, or the training
easier.  Now, we all know that you cannot really show that connectionist
classifiers are particularly easy to train.  They might be more
cost-effective to build, though, especially when we're talking about
real-time pattern recognition.
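
That polynomial argument can be illustrated with a small sketch
(Python/NumPy; the tiny hand-set network and the polynomial degree are
arbitrary choices for illustration): fit a polynomial to a fixed net's
output by least squares and check that it reproduces the net's decisions.

```python
import numpy as np

rng = np.random.default_rng(1)

def net(x):
    """A fixed 2-4-1 sigmoid network with arbitrary, hand-set weights."""
    W1 = np.array([[ 2.0, -1.5,  0.5,  1.0],
                   [-1.0,  2.0,  1.5, -0.5]])
    b1 = np.array([0.1, -0.2, 0.3, 0.0])
    W2 = np.array([[1.5], [-2.0], [1.0], [0.5]])
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    return (h @ W2).ravel()

def poly_features(x, deg):
    """All monomials x1^i * x2^j with i + j <= deg."""
    return np.column_stack([x[:, 0]**i * x[:, 1]**j
                            for i in range(deg + 1)
                            for j in range(deg + 1 - i)])

# Fit a degree-6 polynomial to the net's output by least squares.
deg = 6
X = rng.uniform(-2.0, 2.0, size=(500, 2))
coef, *_ = np.linalg.lstsq(poly_features(X, deg), net(X), rcond=None)

# On fresh points, the polynomial's decisions track the net's decisions.
Xt = rng.uniform(-2.0, 2.0, size=(500, 2))
agree = (np.sign(poly_features(Xt, deg) @ coef) == np.sign(net(Xt))).mean()
print(f"decision agreement: {agree:.2f}")
```

The agreement is near perfect on this smooth little net, which is the
point: the polynomial can match the decision surface, so "better" has to
mean cost or training effort, not raw classification power.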

We have done some comparisons in terms of performance and
implementation cost.  The work is published in NIPS, ICPR, and IBM reports.
If you cannot easily get one or all of those, I'll be more than glad
to mail you whatever you're missing if you're interested.

% Image segmentation using NNs
@inproceedings{Blanz90b,
    AUTHOR    =  "W. E. Blanz and Sheri L. Gish",
    TITLE     =  "A Connectionist Classifier Applied to Image
                  Segmentation",
    BOOKTITLE =  "10th Int. Conf. Pattern Recognition",
    ADDRESS   =  "Atlantic City, NJ",
    MONTH     =  "June 3-7",
    YEAR      =  "1990"
}

% Comparison of synthetic and real world data --- including HW cost
@techreport{Gish89,
    AUTHOR    =  "Sheri L. Gish and W. E. Blanz",
    TITLE     =  "Comparing a Connectionist Trainable Classifier with
                 Classical Statistical Decision Analysis Methods",
    INSTITUTION = "IBM",
    TYPE      =  "Research Report",
    NUMBER    =  "RJ 6891 (65717)",
    MONTH     =  "June",
    YEAR      =  "1989"
}

% Comparison on segmentation problem only - no HW
@incollection{Gish90a,
    AUTHOR    =  "Sheri L. Gish and W. E. Blanz",
    TITLE     =  "Comparing the Performance of a Connectionist
                  and Statistical Classifiers on an Image
                  Segmentation Problem",
    BOOKTITLE =  "Neural Information Processing Systems 2",
    EDITOR    =  "David S. Touretzky",
    PUBLISHER =  "Morgan Kaufmann Publishers",
    ADDRESS   =  "San Mateo, California",
    PAGES     =  "614--621"
}

black@beno.CSS.GOV (Mike Black) (07/21/90)

I know of one example where a Boltzmann machine implementation out-performed
more traditional methods.  I don't have the report by me, but I seem to recall
that instead of classifying about 55-60% of the set correctly, the neural net
reached the 70-75% range.  The data was the Fourier spectrum of Doppler radar
returns from tanks and jeeps, and the objective was to classify each properly.
A company local to me (Computer Science Innovations in Palm Bay, Florida)
picked up this project after the original contractor had given up with more
traditional methods.  This was definitely an example where the neural net
performed better.
If anyone would like some more info I can pass requests on to the principal
investigator that did the implementation.
Mike...
--
-------------------------------------------------------------------------------
: usenet: black@beno.CSS.GOV   :  land line: 407-494-5853  : I want a computer:
: real home: Melbourne, FL     :  home line: 407-242-8619  : that does it all!:
-------------------------------------------------------------------------------

reynolds@thalamus.bu.edu (John Reynolds) (07/23/90)

You might take a look at Grossberg and Mingolla's Boundary Contour
System/Feature Contour System (BCS/FCS) model. 

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (07/23/90)

In article <5309@milton.u.washington.edu> forbis@milton.u.washington.edu (Gary Forbis) writes:
>In article <2809@mrsvr.UUCP> scott@isles.UUCP (Scott Otterson  x5117	) writes:
>Carver Mead gave a lecture at the UW this spring.  He was touting a switch
>to analog devices for computing.  He showed pictures of images generated by
>a retina simulator.  Over time it corrected for flaws in manufacturing and
>defects on the lens.  One interesting side effect was the device produced
>after images.

Carver Mead discusses a silicon retina model in his book,
which I believe is entitled "Analog VLSI and Neural Systems."
Something similar has also appeared in the journal _Neural Networks_
(Pergamon Press).

At the Naval Research Lab, there is work using a Connection Machine
to do a software retina model for infrared focal plane arrays.
They have truly nasty problems with photo-element matching, and
almost every element has a slightly different calibration.
The raw images from these things are messy to the point of being
almost useless.  With a few iterations of a neural model
which adjusts the calibration parameters of each element to average
local neighborhoods, the image clears up quite nicely.
Afterimages and things similar to "Mach Bands" do tend to show up
also, as in the human eye.
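
The calibration idea described above can be sketched as follows, under
stated assumptions: each photo-element is modeled with an unknown fixed
gain, and an iterative rule nudges a per-element correction factor so
that each element's output matches its 3x3 neighborhood average.  This
is an illustration of the principle only (in modern Python/NumPy), not
the NRL implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# A smooth "true" scene viewed through an array whose elements each have
# an unknown fixed multiplicative gain (the photo-element mismatch).
n = 32
yy, xx = np.mgrid[0:n, 0:n]
scene = 1.0 + 0.5 * np.sin(xx / 5.0) * np.cos(yy / 7.0)
gains = np.exp(0.3 * rng.normal(size=(n, n)))
raw = gains * scene                       # messy raw image

corr = np.ones((n, n))                    # per-element correction factors
for _ in range(100):
    img = corr * raw
    # 3x3 local neighborhood mean (edges handled by replicating the border).
    p = np.pad(img, 1, mode="edge")
    local = sum(p[i:i+n, j:j+n] for i in range(3) for j in range(3)) / 9.0
    # Nudge each correction factor toward matching its neighborhood.
    corr *= (local / img) ** 0.1

# Compare mean-normalized error against the true scene, before and after.
before = np.abs(raw / raw.mean() - scene / scene.mean()).mean()
fixed = corr * raw
after = np.abs(fixed / fixed.mean() - scene / scene.mean()).mean()
print(f"mean error before: {before:.3f}  after: {after:.3f}")
```

Because the rule drives every element toward its neighborhood, it also
flattens genuine low-frequency scene structure a little, which is in the
spirit of the Mach-band- and afterimage-like effects mentioned above.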

We have already learned a lot about how to use retinal neural processing
to aid our image processing.  I feel that as we move up the visual pathway,
we will find more interesting processing which will be of use.  
I am currently involved in research dealing with target tracking by
neural means, which involves using neural elements to develop maximum
likelihood paths to implement "inertia" constraints (similar to
another recent article in _Neural Networks_ which dealt with 
visual motion processing).

-Thomas Edwards

crounse@norton.uucp (Great Rumpuscat) (07/28/90)

	For a Carver Mead type system described formally as a
neural network, you might check out (the award winning) :

"Cellular Neural Networks: Theory and Applications"
Leon O. Chua and Lin Yang
IEEE Trans. on Circuits and Systems, vol. 35, no. 10, Oct. 1988

The paper (the applications part) discusses several image processing 
techniques which are often used in recognition algorithms (like
edge extraction).  A Cellular Neural Network has a grid topology
and only local connections, which both suggests image processing as
an application and makes for a nice implementation in silicon.
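
The scheme can be sketched in a few lines (Python/NumPy, for
illustration).  Each cell sees only its 3x3 neighborhood and follows the
CNN dynamics x' = -x + A*y + B*u + I with a piecewise-linear output; the
edge-extraction templates below are of the commonly used form, not
copied from the paper.

```python
import numpy as np

def cnn_edges(u, steps=50, dt=0.1):
    """Euler simulation of CNN cell dynamics x' = -x + A*y + B*u + I."""
    n, m = u.shape
    A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], float)          # feedback template
    B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)  # control template
    bias = -1.0

    def conv3(img, t):
        # 3x3 template applied over the grid; outside the array counts as 0.
        p = np.pad(img, 1, mode="constant")
        return sum(t[i, j] * p[i:i+n, j:j+m]
                   for i in range(3) for j in range(3))

    x = np.zeros_like(u)
    for _ in range(steps):
        y = np.clip(x, -1.0, 1.0)         # piecewise-linear output function
        x = x + dt * (-x + conv3(y, A) + conv3(u, B) + bias)
    return np.clip(x, -1.0, 1.0)

# Input in the usual CNN convention: +1 = black, -1 = white.
u = -np.ones((16, 16))
u[4:12, 4:12] = 1.0                       # a black 8x8 square
y = cnn_edges(u)
print((y > 0).sum(), "edge pixels")       # the square's 28-pixel border
```

Only the border cells of the square settle at +1; in the interior the
control template cancels and those cells relax to -1.  Every update is
local and identical across the grid, which is exactly what makes the
silicon implementation attractive.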

,,,,,,,,,,,,,,,,,crounse@norton.berkeley.edu,,,,,,,,,,,,,,,,,,,,,,,
Kenneth R. Crounse,      -  UC Berkeley
King of                  -  (Rally Behind the Ridiculous)
Randomness and           -  Dept. of EECS
Chaos                    -  Nonlinear Electronics Laboratory