[comp.ai.neural-nets] Summary

fozzard@boulder.Colorado.EDU (Richard Fozzard) (07/26/90)

Here are the responses I got for my question regarding comparisons of
connectionist methods with traditional pattern recognition techniques.

I believe Mike Mozer (among others) puts it best: 
"Neural net algorithms just let you
do a lot of the same things that traditional statistical algorithms allow
you to do, but they are more accessible to many people (and perhaps
easier to use)."

Read on for the detailed responses. (Note: this does not include anything
posted to comp.ai.neural-nets, to save on bandwidth)

rich

========================================================================
Richard Fozzard					"Serendipity empowers"
Univ of Colorado/CIRES/NOAA	R/E/FS  325 Broadway, Boulder, CO 80303
fozzard@boulder.colorado.edu                   (303)497-6011 or 444-3168


Received: by alumni.colorado.edu (cu.generic.890828)
Received: by boulder.Colorado.EDU (cu-hub.890824)
Date: Tue, 17 Jul 90 13:53:55 -0500
From: honavar@cs.wisc.edu (Vasant Honavar)
Message-Id: <9007171853.AA13318@goat.cs.wisc.edu>
Received: by goat.cs.wisc.edu; Tue, 17 Jul 90 13:53:55 -0500
To: fozzard@boulder.colorado.edu
Subject: pattern recognition with nn
Cc: honavar@cs.wisc.edu
Status: R



 Honavar, V. & Uhr, L. (1989).
	Generation, Local Receptive Fields, and Global Convergence 
	Improve Perceptual Learning in Connectionist Networks, 
	In: Proceedings of the 1989 International Joint Conference 
	on Artificial Intelligence, San Mateo, CA: Morgan Kaufmann.

 Honavar, V. & Uhr, L. (1989).
  	Brain-Structured Connectionist Networks that Perceive and Learn,
	Connection Science: Journal of Neural Computing, Artificial 
	Intelligence and Cognitive Research, 1, 139-159.

 Le Cun, Y. et al. (1990).
	Handwritten Digit Recognition With a Backpropagation Network,
	In: Advances in Neural Information Processing Systems 2, D. S. Touretzky (ed.),
	San Mateo, CA: Morgan Kaufmann, 1990.

 Rogers, D. (1990).
	Predicting Weather Using a Genetic Memory: A Combination of 
	Kanerva's Sparse Distributed Memory With Holland's Genetic Algorithms,
	In: Advances in Neural Information Processing Systems 2, D. S. Touretzky (ed.),
	San Mateo, CA: Morgan Kaufmann, 1990.

From perry@seismo.CSS.GOV Tue Jul 17 14:39:52 1990
Received: from beno.CSS.GOV by seismo.CSS.GOV (5.61/1.14)
	id AA05503; Tue, 17 Jul 90 16:39:14 -0400
Received: by beno.CSS.GOV (4.0/SMI-4.0)
	id AA11042; Tue, 17 Jul 90 16:39:12 EDT
Date: Tue, 17 Jul 90 16:39:12 EDT
From: perry@seismo.CSS.GOV (John Perry)
Message-Id: <9007172039.AA11042@beno.CSS.GOV>
To: fozzard@boulder.colorado.edu
Subject: Re
Cc: perry@dewey.css.gov
Status: R

Richard,
	It depends on which neural network you are using, and the underlying
	complexity in separating pattern classes.  We at ENSCO have developed
	a neural network architecture that shows far superior performance
	to traditional algorithms.  Mail me if you are interested.

John L. Perry
ENSCO, Inc.
5400 Port Royal Road
Springfield, Virginia 22151
703-321-9000
email: perry@dewey.css.gov, perry@beno.css.gov

From shaw_d@clipr.colorado.edu Tue Jul 17 14:43:27 1990
Message-Id: <9007172044.AA24992@boulder.Colorado.EDU>
Date: 17 Jul 90 14:36:00 MDT
From: "Dave Shaw" <shaw_d@clipr.colorado.edu>
Subject: RE: Networks for pattern recognition problems?
To: "fozzard" <fozzard@boulder.colorado.edu>
Status: R

Rich- our experience with the solar data is still inconclusive, but it would
seem to indicate that neural nets exhibit no distinct advantage over
more traditional techniques, in terms of 'best' performance figures. The
reason appears to be that although the task is understood to be non-linear
(which should presumably lead to better performance by non-linear systems
such as networks), there is not enough data at the critical points to define
the boundaries of the decision surface. This would seem to be a difficulty that
all recognition problems must deal with.

Dave


From kortge@galadriel.Stanford.EDU Tue Jul 17 15:02:05 1990
Message-Id: <9007172102.AA26014@boulder.Colorado.EDU>
Received: by galadriel.Stanford.EDU (3.2/4.7); Tue, 17 Jul 90 14:02:44 PDT
Date: Tue, 17 Jul 90 14:02:44 PDT
From: kortge@galadriel.Stanford.EDU (Chris Kortge)
To: fozzard@boulder.colorado.edu
Subject: Re: pattern recognition
Status: R

You may know of this already, but Gorman & Sejnowski have a paper on
sonar return classification in Neural Networks Vol. 1, #1, pg 75,
where a net did better than nearest neighbor, and comparable to a
person.
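For anyone who wants a concrete picture of the nearest-neighbor baseline the net was compared against, a minimal sketch (plain Python; the toy two-class data here is made up for illustration, nothing to do with the sonar set):

```python
from math import dist  # Euclidean distance between two points

def nearest_neighbor(train, test_point):
    """train: list of (features, label) pairs.
    Classify test_point by the label of its closest training point."""
    features, label = min(train, key=lambda pair: dist(pair[0], test_point))
    return label

# Toy data: class 0 clustered near the origin, class 1 near (5, 5)
train = [((0.0, 0.0), 0), ((1.0, 0.5), 0), ((5.0, 5.0), 1), ((4.5, 5.5), 1)]
print(nearest_neighbor(train, (0.2, 0.1)))  # -> 0
print(nearest_neighbor(train, (5.2, 4.8)))  # -> 1
```

Simple as it is, this is the "traditional" classifier that papers of the period often used as the yardstick.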

I would be very interested in obtaining your list of "better-than-
conventional-methods" papers, if possible (maybe the whole connectionists
list would, for that matter).

Thanks--
Chris Kortge
kortge@psych.stanford.edu

From @MCC.COM:galem@mcc.com Tue Jul 17 16:06:18 1990
Received: from sunkist.aca.mcc.com by MCC.COM with TCP/SMTP; Tue 17 Jul 90 17:06:18-CDT
Date: Tue, 17 Jul 90 17:06:14 CDT
From: galem@mcc.com (Gale Martin)
Posted-Date: Tue, 17 Jul 90 17:06:14 CDT
Message-Id: <9007172206.AA01492@sunkist.aca.mcc.com>
Received: by sunkist.aca.mcc.com (4.0/ACAv4.1i) 
	id AA01492; Tue, 17 Jul 90 17:06:14 CDT
To: fozzard@boulder.Colorado.EDU
Subject: Re:  Networks for pattern recognition problems?
Status: R

I do handwriting recognition with backprop nets and have anecdotal evidence
that the nets do better than the systems developed by some of the research
groups we work with.  The problem with such comparisons is that the success
of the recognition systems depends on the expertise of the developers.  There
will never be a definitive study.

However, I've come to believe that such accuracy comparisons miss the point.
Traditional recognition technologies usually involve a lot of hand-crafting
(e.g., selecting features) that you can avoid by using backprop nets.  For
example, I can feed a net with "close to" raw inputs and the net learns 
to segment them into characters, extract features, and classify the characters.
You may be able to do this with traditional techniques, but it will take a lot
longer.  Extending the work to different character sets becomes prohibitive,
whereas it is a simple task with a net.

Gale Martin
MCC
Austin, TX

From ted@aps1.spa.umn.edu Tue Jul 17 16:12:55 1990
Received: from aps1.spa.umn.edu by uc.msc.edu (5.59/MSC-2.00/900715)
	id AA08469; Tue, 17 Jul 90 17:13:22 CDT
From: "Ted Stockwell" <ted@aps1.spa.umn.edu>
Message-Id: <9007172212.AA05795@aps1.spa.umn.edu>
Received: by aps1.spa.umn.edu; Tue, 17 Jul 90 17:12:34 CDT
Subject: Re: Networks for pattern recognition problems?
To: fozzard@boulder.colorado.edu
Date: Tue, 17 Jul 90 17:12:33 CDT
In-Reply-To: <no.id>; from "Richard Fozzard" at Jul 17, 90 6:16 pm
X-Mailer: ELM [version 2.2 PL10]
Status: R

> 
> Do you know of any references to work done using connectionist (neural)
> networks for pattern recognition problems? I particularly am interested
> in problems where the network was shown to outperform traditional algorithms.
> 
> I am working on a presentation to NOAA (National Oceanic and Atmospheric
> Admin.) management that partially involves pattern recognition
> and am trying to argue against the statement:
> "...results thus far [w/ networks] have not been notably more
> impressive than with more traditional pattern recognition techniques".
> 

This may not be quite what you're looking for, but here are a few 
suggestions:

1) Pose the question to salespeople who sell neural network software.  They
   probably have faced the question before.

2) One advantage is that the network characterizes the classes for you.  
   Instead of spending days/weeks/months developing statistical models,
   you can get a reasonable classifier by just handing the training data
   to the network and letting it run overnight.  It does the work for you,
   so development costs should be much lower.

3) Networks seem to be more often compared to humans than to other
   software techniques.  I don't have the references with me, but I 
   recall that someone (Sejnowski?) developed a classifier for sonar
   signals that performed slightly better than human experts (which *is*
   the "traditional pattern recognition technique"). 
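Point (2) above is easy to make concrete. The sketch below (plain Python; the toy data and learning rate are made up for illustration) is essentially the whole "development effort" for a one-unit network: hand it labeled examples and let gradient descent characterize the classes, with no hand-crafted statistical model:

```python
import math

def train_unit(data, epochs=2000, lr=0.5):
    """Train a single logistic unit w.x + b by plain per-example gradient descent."""
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, target in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            out = 1.0 / (1.0 + math.exp(-z))       # logistic activation
            err = target - out                     # delta rule: push output toward target
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy separable data: raw inputs, no feature engineering
data = [((0.0, 0.2), 0), ((0.3, 0.1), 0), ((0.9, 1.0), 1), ((1.0, 0.8), 1)]
w, b = train_unit(data)
print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 1, 1]
```

On real data the "overnight" part is the same loop with more units and more epochs; the point is that the developer supplies examples, not a model.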


-- 
Ted Stockwell                                     U of MN, Dept. of Astronomy
ted@aps1.spa.umn.edu                          Automated Plate Scanner Project

From mariah!yak@tucson.sie.arizona.edu Tue Jul 17 16:25:40 1990
From: mariah!yak@tucson.sie.arizona.edu
Received: from tucson.UUCP by megaron.cs.arizona.edu (5.61/15) via UUCP
	id AA06676; Tue, 17 Jul 90 15:26:14 -0700
Received: by tucson.sie.arizona.edu (5.61/1.34)
	id AA29760; Tue, 17 Jul 90 14:58:40 -0700
Date: Tue, 17 Jul 90 14:58:40 -0700
Message-Id: <9007172158.AA29760@tucson.sie.arizona.edu>
To: arizona!fozzard%boulder.colorado.edu@cs.arizona.edu
Status: R

Dear Dr. Fozzard,

I read your email message on a call for pattern recog. problems
for which NN's are known to outperform traditional methods.

I've worked in statistics and pattern recognition for some while, and I
have a fair number of publications.

I've been reading the neural net literature, and I'd be quite surprised
if you get convincing replies in the affirmative to your quest.
My opinion is that stuff even from the '60s and '70s, such as
the books by Duda and Hart and by Gonzalez and Fu, implemented on standard
computers, is still much more effective than the methodology I've come across
using NN algorithms, which are mathematically much more restrictive.


In brief, if you hear of good solid instances favorable to NN's,
please let me know.

Sincerely,

Sid Yakowitz
Professor


From John.Hampshire@SPEECH2.CS.CMU.EDU Tue Jul 17 19:27:17 1990
Message-Id: <9007180127.AA09023@boulder.Colorado.EDU>
Date: Tue, 17 Jul 90 21:18:33 EDT
From: John.Hampshire@SPEECH2.CS.CMU.EDU
To: fozzard@BOULDER.COLORADO.EDU
Subject: Re:  Networks for pattern recognition problems?
Status: R

Rich,

  Go talk with Smolensky out there in Boulder.  He
should be able to give you a bunch of refs.  See also
works by Lippmann over the past two years.
Barak Pearlmutter and I are working on a paper that
will appear in the proceedings of the 1990 
Connectionist Models Summer School which shows
that certain classes of MLP classifiers yield
(optimal) Bayesian classification performance on
stochastic patterns.  This beats traditional linear
classifiers...
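The linear-vs-non-linear point is easy to illustrate with the classic XOR example (my own toy construction, not taken from the paper above): no single linear threshold unit can get XOR right, while one hidden layer with hand-picked weights suffices.

```python
def step(z):
    return 1 if z > 0 else 0

def linear_unit(x, w, b):
    """A single linear threshold unit: step(w.x + b)."""
    return step(w[0] * x[0] + w[1] * x[1] + b)

def small_mlp(x):
    """Hand-wired 2-2-1 net: hidden units compute OR and AND;
    the output unit computes OR AND NOT AND, i.e. XOR."""
    h_or  = step(x[0] + x[1] - 0.5)
    h_and = step(x[0] + x[1] - 1.5)
    return step(h_or - 2 * h_and - 0.5)

xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# The non-linear net gets all four cases right:
print(all(small_mlp(x) == t for x, t in xor_data))  # -> True

# A sweep over linear units never beats 3 of 4 (XOR isn't linearly separable):
best = 0
for w0 in range(-2, 3):
    for w1 in range(-2, 3):
        for b in (-1.5, -0.5, 0.5, 1.5):
            hits = sum(linear_unit(x, (w0, w1), b) == t for x, t in xor_data)
            best = max(best, hits)
print(best)  # -> 3
```

This is only the representational half of the story; the Bayesian-optimality result mentioned above is about what certain MLPs converge to under training.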

  There are a bunch of results in many fields showing
that non-linear classifiers outperform more traditional
ones.  The guys at NOAA aren't up on the literature.
One last reference --- check the last few years of
NIPS and (to a lesser extent) IJCNN proceedings.
NIPS = Advances in Neural Information Processing Systems,
       Dave Touretzky ed., Morgan Kaufmann publishers
IJCNN = Proceedings of the International Joint Conference on
       Neural Networks, IEEE Press

John

From mozer@neuron Tue Jul 17 20:50:39 1990
Received: by neuron.colorado.edu (cu.generic.890828)
Date: Tue, 17 Jul 90 20:51:28 MDT
From: Michael C. Mozer <mozer@neuron>
Message-Id: <9007180251.AA04754@neuron.colorado.edu>
To: fozzard@alumni
Subject: Re:  Help for a NOAA connectionist "primer"
Cc: pauls@neuron
Status: R

Your boss is basically correct.  Neural net algorithms just let you
do a lot of the same things that traditional statistical algorithms allow
you to do, but they are more accessible to many people (and perhaps
easier to use).

There is a growing set of examples where neural nets beat out conventional
algorithms, but nothing terribly impressive.  And it's difficult to
tell in these examples whether the conventional methods were applied
appropriately (or the NN algorithm in cases where NNs lose to
conventional methods for that matter).  

Mike

From burrow@grad1.cis.upenn.edu Tue Jul 17 23:07:19 1990
Received: from GRAD2.CIS.UPENN.EDU by central.cis.upenn.edu
	id AA27955; Wed, 18 Jul 90 00:39:01 -0400
Return-Path: <burrow@grad1.cis.upenn.edu>
Received: by grad2.cis.upenn.edu
	id AA16913; Wed, 18 Jul 90 00:40:21 EDT
Date: Wed, 18 Jul 90 00:40:21 EDT
From: burrow@grad1.cis.upenn.edu (Tom Burrow)
Posted-Date: Wed, 18 Jul 90 00:40:21 EDT
Message-Id: <9007180440.AA16913@grad2.cis.upenn.edu>
To: fozzard@boulder.colorado.edu
Subject: procedural vs connectionist p.r.
Status: R


Sorry, this isn't much of a contribution -- mostly a request for
your replies.  If you are not going to repost them via the 
connectionist mailing list, could you mail them to me?

Now, for my mini-contribution: Yann LeCun et al.'s work as seen
in NIPS 90 on segmented character recognition is fairly impressive,
and they claim that their results are state of the art.

Tom Burrow

From jsaxon@cs.tamu.edu Wed Jul 18 08:32:27 1990
Received: from cs.tamu.edu (PHOTON.TAMU.EDU) by cssun.tamu.edu (AA13736); Wed, 18 Jul 90 09:32:12 CDT
Received: by cs.tamu.edu (4.0/SMI-4.0)
	id AA08709; Wed, 18 Jul 90 09:32:08 CDT
Date: Wed, 18 Jul 90 09:32:08 CDT
From: jsaxon@cs.tamu.edu (James B Saxon)
Message-Id: <9007181432.AA08709@cs.tamu.edu>
To: fozzard@boulder.colorado.edu
Subject: Re: Networks for pattern recognition problems?
Newsgroups: comp.ai.neural-nets
In-Reply-To: <23586@boulder.Colorado.EDU>
Organization: Computer Science Department, Texas A&M University
Cc: james@visual2.tamu.edu
Status: R

In article <23586@boulder.Colorado.EDU> you write:
>Do you know of any references to work done using connectionist (neural)
>networks for pattern recognition problems? I particularly am interested
>in problems where the network was shown to outperform traditional algorithms.
>
>I am working on a presentation to NOAA (National Oceanic and Atmospheric
>Admin.) management that partially involves pattern recognition
>and am trying to argue against the statement:
>"...results thus far [w/ networks] have not been notably more
>impressive than with more traditional pattern recognition techniques".
>
>I have always felt that pattern recognition is one of the strengths of
>connectionist network approaches over other techniques and would like
>some references to back this up.
>
>thanks much, rich
>========================================================================
>Richard Fozzard					"Serendipity empowers"
>Univ of Colorado/CIRES/NOAA	R/E/FS  325 Broadway, Boulder, CO 80303
>fozzard@boulder.colorado.edu                   (303)497-6011 or 444-3168

 Well, aside from the question of ultimate generality, to which the answer 
is "OF COURSE there are references to neural network pattern recognition
systems.  The world is completely full of them!"

Anyway, maybe you'd better do some more research.  Here are a couple off the
top of my head:

Kohonen is really hot in this area; he's been doing it for at least ten
years.  Everybody refers to some aspect of his work.

I also suggest picking up a copy of the IJCNN '90 San Diego proceedings, all
18 lbs of it.  (International Joint Conference on Neural Networks)  But for a
preview: I happened to sit in on just the sort of presentation you
would have liked to hear.  The title was "Meteorological Classification
of Satellite Imagery Using Neural Network Data Fusion".  Oh Boy!!! Big
title!  Oh, it's by Ira G. Smotroff, Timothy P. Howells, and Steven Lehar,
from the MITRE Corporation (MITRE-Bedford Neural Network Research Group),
Bedford, MA 01730.  Well, the presentation wasn't too hot; he sort of
hand-waved over the "classification" of his meteorological data and
didn't describe what we were looking at.  The idea was that the system
was supposed to take heterogeneous sensor data (I hope you know these:
GOES--IR and visual, PROFS database--wind profilers, barometers, solarometers,
thermometers, etc.) and combine them.  Cool, huh?  If they had actually done
this, I imagine the results would have been pretty good.  It seems, though,
that they merely used an IR image and a visual image and combined only
these two.  Their pattern recognition involved typical modeling of the
retina, which acts as a sort of band-pass filter with orientations, so
it detects edges.  Anyway, their claim was the following:  "The experiments
described showed reasonably good classification performance.  There was no
attempt to determine optimal performance by adding hidden units [Hey, if
it did it without hidden units, it's doing rather well.], altering learning
parameters, etc., because we are currently implementing self-scaling learning
algorithms which will determine many of those issues automatically."  [Which
reminds me, Lehar works with Grossberg at Boston University.  He's big
on pattern recognition too, both analog and digital.  Check out Adaptive
Resonance Theory, or ART, ART2, ART3.]  Anyway, it looks like a
first shot and they went minimal.  There's lots they could add to
make it work rather well.  In terms of performance, I'd just like to
make one of those comments...  From what I saw at the conference,
neural networks will outperform traditional techniques in this sort of
area.  The conference was brimming over with successful implementations.

Anyway...  Enough rambling, from a guy who should be writing his thesis
right now...  Good luck on your presentation!

Oh, I think it automatically puts my signature on.....

Did it?

-- 
 ---- \ / ----	  /--------------------------------------------\  James Bennett Saxon
|   O|	 |   O|	  |  "I aught to join the club and beat you    |  Visualization Laboratory
|    |   |    |   |   over the head with it." -- Groucho Marx  |  Texas A&M University
 ----     ----   <---------------------------------------------/  jsaxon@cssun.tamu.edu

From arseno@phy.ulaval.ca Wed Jul 18 08:48:52 1990
Date: Wed, 18 Jul 90 10:38:16 EDT
From: Henri Arsenault <arseno@phy.ulaval.ca>
Message-Id: <9007181438.AA19593@einstein.phy.ulaval.ca>
To: fozzard@boulder.colorado.edu
Subject: papers on pattern recognition
Status: R

In response to your request about papers on neural nets in pattern recognition,
there is a good review in IEEE Transactions on Neural Networks, vol. 1, p. 28:
"Survey of neural network technology for automatic target recognition", by
M. W. Roth.  The paper has many references.
arseno@phy.ulaval.ca

From @pnlg.pnl.gov:d38987@proteus.pnl.gov Wed Jul 18 10:11:12 1990
Received: from proteus.pnl.gov (130.20.65.15) by pnlg.pnl.gov; Wed, 18 Jul 90
 09:08 PST
Received: by proteus.pnl.gov (4.0/SMI-4.0) id AA05172; Wed, 18 Jul 90 09:07:50
 PDT
Date: Wed, 18 Jul 90 09:07:50 PDT
From: d38987%proteus.pnl.gov@pnlg.pnl.gov
Subject: NN and Pattern Recognition
To: fozzard@boulder.colorado.edu
Cc: d3c409@calypso
Message-Id: <9007181607.AA05172@proteus.pnl.gov>
X-Envelope-To: fozzard@boulder.colorado.edu
Status: R

Richard,  We have done some work in this area, as have many other people.

I suggest you call Roger Barga at (509)375-2802 and talk to him, or send him
mail at:

d3c409%calypso@pnlg.pnl.gov

Good luck,

Ron Melton
Pacific Northwest Laboratory
Richland, WA 99352

From fozzard Wed Jul 18 11:12:05 1990
To: mozer@neuron
Subject: Re:  Help for a NOAA connectionist "primer"
Cc: pauls@neuron
Status: R

mike,
	thanks for the input - it seems a cogent summary of the (many) 
responses I've been getting. However, it seems just about no one has really
attempted a one-to-one sort of comparison using traditional pattern
recognition benchmarks. Just about everything I hear and read is 
anecdotal.

Would it be fair to say that "neural nets" are more accessible, simply
because there is such a plethora of 'sexy' user-friendly packages for sale?
Or is back-prop (for example) truly a more flexible and widely-applicable
algorithm than other statistical methods with uglier-sounding names?

If not, it seems to me that most connectionists should be having a bit
of a mid-life crisis about now.

rich

From mozer@neuron Wed Jul 18 11:15:31 1990
Received: by neuron.colorado.edu (cu.generic.890828)
Date: Wed, 18 Jul 90 11:16:22 MDT
From: Michael C. Mozer <mozer@neuron>
Message-Id: <9007181716.AA06051@neuron.colorado.edu>
To: fozzard@alumni
Subject: Re:  Help for a NOAA connectionist "primer"
Status: R

I think NNs are more accessible because the mathematics is so straightforward,
and the methods work pretty well even if you don't know what you're doing
(as opposed to many statistical techniques that require some expertise
to use correctly).

For me, the win of NNs is as a paradigm for modeling human cognition.
Whether the NN learning algorithms existed previously in other fields
is irrelevant.  What is truly novel is that we're bringing these numerical
and statistical techniques to the study of human cognition.  Also,
connectionists (at least the cog sci oriented ones) are far more concerned
with representation -- a critical factor, one that has been much studied
by psychologists but not by statisticians.

Mike


From cole@cse.ogi.edu Wed Jul 18 11:33:04 1990
Received: by cse.ogi.edu
	(5.61+eap+OGI_1.1.named/IDA-1.2.8+OGI_1.12) id AA26297; Wed, 18 Jul 90 10:33:14 -0700
Date: Wed, 18 Jul 90 10:33:14 -0700
From: Ron Cole <cole@cse.ogi.edu>
Message-Id: <9007181733.AA26297@cse.ogi.edu>
To: fozzard@boulder.Colorado.EDU
Subject: Re:  Networks for pattern recognition problems?
Status: R

Call Les Atlas at U Washington.  He has an article coming out in the August
Proceedings of the IEEE comparing NNs and CART on 3 real-world problems.

Ron

Les Atlas: 206 685 1315

From bimal@jupiter.risc.com Wed Jul 18 11:59:32 1990
Received: by jupiter.risc.com (4.0/SMI-4.0)
	id AA22486; Wed, 18 Jul 90 10:59:24 PDT
Date: Wed, 18 Jul 90 10:59:24 PDT
From: bimal@jupiter.risc.com (Bimal Mathur)
Message-Id: <9007181759.AA22486@jupiter.risc.com>
To: fozzard@boulder.colorado.edu
Subject: pattern recognition
Status: R

The net result of experiments we have done in pattern classification for two-
dimensional data (i.e., image to features, then classifying the features using
a NN) is that there is no significant improvement in the performance of the
overall system.
-bimal mathur - Rockwell Int

From PH706008@brownvm.brown.edu Wed Jul 18 13:07:05 1990
Message-Id: <9007181907.AA15014@boulder.Colorado.EDU>
Received: from BROWNVM.BROWN.EDU by brownvm.brown.edu (IBM VM SMTP R1.2.1MX) with BSMTP id 8687; Wed, 18 Jul 90 15:06:56 EDT
Received: by BROWNVM (Mailer R2.07) id 7351; Wed, 18 Jul 90 15:06:54 EDT
Date:         Wed, 18 Jul 90 14:10:28 EDT
From: Chip Bachmann <PH706008@brownvm.brown.edu>
Subject:      Re: Networks for pattern recognition problems?
To: Richard Fozzard <fozzard@boulder.Colorado.EDU>
In-Reply-To:  Your message of Tue, 17 Jul 90 12:19:01 -0600
Status: R


An example of research directly comparing neural networks with traditional
statistical methods can be found in: R. A. Cole, Y. K. Muthusamy, and
L. Atlas, "Speaker-Independent Vowel Recognition: Comparison of
Backpropagation and Trained Classification Trees", in Proceedings of the
Twenty-Third Annual Hawaii International Conference on System Sciences,
Kailua-Kona, Hawaii, January 2-5, 1990, Vol. 1, pp. 132-141.  The neural
network achieves better results than the CART algorithm, in this case for
a twelve-class vowel recognition task.  The data was extracted from the
TIMIT database, and a variety of different encoding schemes was employed.

Tangentially, I thought that I would enquire if you know of any postdoctoral
or other research positions available at NOAA, CIRES, or U. of Colorado.
I completed my Ph.D. in physics at Brown University under
Leon Cooper (Nobel laureate, 1972) this past May; my undergraduate degree
was from Princeton University and was also in physics.  My dissertation
research was carried out as part of an interdisciplinary
team in the Center for Neural Science here at Brown.
The primary focus of my dissertation was
the development of an alternative backward propagation algorithm which
incorporates a gain modification procedure.  I also investigated the
feature extraction and generalization of backward propagation for a speech
database of stop-consonants developed here in our laboratory at Brown.
In addition, I discussed hybrid network architectures and, in particular,
in a high-dimensional, multi-class vowel recognition problem (namely
with the data which Cole et al. used in the paper which I mentioned above),
demonstrated an approach using smaller sub-networks to partition the data.  Such
approaches offer a means of dealing with the "curse of dimensionality."


If there are any openings that I might apply for, I would be happy to forward
my resume and any supporting materials that you might require.


                                                 Charles M. Bachmann
                                                 Box 1843
                                                 Physics Department &
                                                 Center for Neural Science
                                                 Brown University
                                                 Providence, R.I. 02912
                                                 e-mail: ph706008 at
                                                         brownvm

From wilson@magi.ncsl.nist.gov Wed Jul 18 14:08:10 1990
Received: by magi.ncsl.nist.gov (4.1/NIST-dsys)
	id AA13132; Wed, 18 Jul 90 16:05:39 EDT
Date: Wed, 18 Jul 90 16:05:39 EDT
From: Charles Wilson x2080 <wilson@magi.ncsl.nist.gov>
Organization: National Institute of Standards and Technology
	formerly National Bureau of Standards
Message-Id: <9007182005.AA13132@magi.ncsl.nist.gov>
To: fozzard@boulder.colorado.edu
Subject: character recognition
Status: R

We have shown on a character recognition problem that neural networks
are as accurate as traditional methods but much faster (on a parallel
computer), much easier to program (a few hundred lines of parallel
Fortran), and less brittle.

See C. L. Wilson, R. A. Wilkinson, and M. D. Garris, "Self-Organizing
Neural Network Character Recognition on a Massively Parallel Computer",
Proc. of the IJCNN, vol. 2, pp. 325-329, June 1990.

From shaw_d@clipr.colorado.edu Wed Jul 18 16:39:40 1990
Date: 18 Jul 90 16:22:00 MDT
From: "Dave Shaw" <shaw_d@clipr.colorado.edu>
Subject: RE: Networks for pattern recognition problems?
To: "fozzard" <fozzard@alumni.colorado.edu>
Status: R

To date we have compared the expert system originally built for the task with
many configurations of neural nets (based on your work), multiple linear 
regression equations, discriminant analysis, many types of nearest neighbor
systems, and some work on automatic decision tree generation algorithms.
Performance is measured both by the ROC P sub a (which turns out to be only
a moderate indicator of performance, due to the unequal n's in the two 
distributions) and by maximum percent correct, given the optimal bias setting.
All systems have been trained and tested on the same sets of training and test
data. As I indicated before, the story isn't completely in yet, but it is very
hard to show significant differences between any of these systems on the solar
flare task.
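For readers unfamiliar with the "maximum percent correct at the optimal bias setting" figure, a sketch of the computation (plain Python; the scores below are hypothetical, not the actual solar-flare data): sweep the decision threshold over the classifier's outputs and take the best overall accuracy. Note how the unequal n's set a high floor, since always guessing the larger class already does well.

```python
def max_percent_correct(pos_scores, neg_scores):
    """Best overall accuracy over all decision thresholds (bias settings).
    A score above the threshold is called positive."""
    scores = sorted(set(pos_scores + neg_scores))
    # Candidate thresholds: midpoints between distinct scores, plus the extremes
    cuts = ([scores[0] - 1]
            + [(a + b) / 2 for a, b in zip(scores, scores[1:])]
            + [scores[-1] + 1])
    total = len(pos_scores) + len(neg_scores)
    best = 0.0
    for c in cuts:
        correct = sum(s > c for s in pos_scores) + sum(s <= c for s in neg_scores)
        best = max(best, correct / total)
    return best

# Unequal n's: 8 negatives (no flare) vs 2 positives (flare)
neg = [0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 0.5, 0.9]
pos = [0.6, 0.8]
print(max_percent_correct(pos, neg))  # -> 0.9
```

Here even the trivial "never predict a flare" rule scores 0.8, which is why percent correct alone can hide real differences between systems.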

Dave


From uvm-gen!idx!gsk@uunet.UU.NET Wed Jul 18 21:35:06 1990
Received: from uvm-gen.UUCP by uunet.uu.net (5.61/1.14) with UUCP 
	id AA15716; Wed, 18 Jul 90 23:35:47 -0400
Received: by uvm-gen.uvm.edu (5.51/2.4D)
	id AA22762; Wed, 18 Jul 90 17:48:05 EDT
Message-Id: <9007182148.AA22762@uvm-gen.uvm.edu>
Received: by idx.UUCP (DECUS UUCP w/Smail);
          Wed, 18 Jul 90 16:45:41 EST
Date: Wed, 18 Jul 90 16:45:41 EST
From: George Kaczowka <uvm-gen!idx!gsk@uunet.UU.NET>
To: fozzard@boulder.colorado.edu
Subject: Networks for pattern recognition problems?
Status: R

> Do you know of any references to work done using connectionist (neural)
> networks for pattern recognition problems? I particularly am interested
> in problems where the network was shown to outperform traditional algorithms.
> 
> I am working on a presentation to NOAA (National Oceanic and Atmospheric
> Admin.) management that partially involves pattern recognition
> and am trying to argue against the statement:
> "...results thus far [w/ networks] have not been notably more
> impressive than with more traditional pattern recognition techniques".
> 
> I have always felt that pattern recognition is one of the strengths of
> connectionist network approaches over other techniques and would like
> some references to back this up.
> 
> thanks much, rich
> ========================================================================
Rich -- 
  I don't know if this helps, but a company in Providence, RI called NESTOR has
put together a couple of products.. some of which have been customized systems
for customers solving pattern recognition problems.. One I remember was
regarding bond trading in the financial world.. I seem to remember that the
model outperformed the "experts" by at least 10-15%, and that this was used
(and still is, as far as I know) by some on Wall Street. I know that they have
been in the insurance field for claim analysis as well as physical pattern
recognition.. They were founded by a PhD out of Brown University, and I am sure
that you could obtain reference works from them..  I understand that they are
involved in a few military pattern recognition systems for fighters as well..
  Good luck.. I was interested in their work some time ago, but have been off
on other topics for over a year..
				-- George --

------------------------------------------------------------
- George Kaczowka   IDX Corp  Marlboro, MA  - gsk@idx.UUCP -
------------------------------------------------------------

From marwan@ee.su.oz.AU Thu Jul 19 16:50:05 1990
Received: from brutus.ee.su.OZ.AU by extro.ucc.su.OZ.AU (5.61+/1.34)
	id AA11894; Fri, 20 Jul 1990 08:45:44 +1000
Received: from sedal.ee by brutus.ee.su.oz (4.0/4.7)
	id AA07141; Fri, 20 Jul 90 08:48:46 EST
Date: Fri, 20 Jul 90 08:48:46 EST
From: marwan@ee.su.oz.AU (Marwan Jabri)
Message-Id: <9007192248.AA07141@ee.su.oz.AU>
To: fozzard@boulder.colorado.edu
Subject: pattern recognition
Status: R

We have been working on the application of neural nets to the pattern
recognition of ECG signals (medical). I will be happy to mail you some of our
very good results, which are better than what has been achieved using
conventional techniques. Is this the sort of thing you are looking for? What
medium would you like?

Marwan Jabri

-------------------------------------------------------------------
Marwan Jabri, PhD			  Email: marwan@ee.su.oz.au
Systems Engineering and Design Automation     Tel: (+61-2) 692-2240
  Laboratory (SEDAL)		 	      Fax: (+61-2) 692-3847
Sydney University Electrical Engineering
NSW 2006 Australia

From PP219113@tecmtyvm.mty.itesm.mx Fri Jul 20 10:39:48 1990
Date: Fri, 20 Jul 90 10:10:26 CST
From: PP219113@tecmtyvm.mty.itesm.mx
Subject: Re: Networks for pattern recognition problems?
To: fozzard@boulder.Colorado.EDU
Message-Id: <900720.101026.CST.PP219113@tecmtyvm.mty.itesm.mx>
Organization: Instituto Tecnologico y de Estudios Superiores de Monterrey

hi,
David J. Burr ("Experiments on Neural Net Recognition of Spoken and Written
Text," IEEE Trans. on ASSP, vol. 36, no. 7, pp. 1162-1168, July 1988) suggests
that neural nets and nearest-neighbor classification perform at nearly the
same level of accuracy.
My own experience with character recognition using neural nets actually
suggests that they perform better than nearest neighbor and hierarchical
clustering. (I suggest talking to Prof. Kelvin Wagner, ECE, UC-Boulder.)
See also M. W. Roth, "Survey of Neural Network Technology for Automatic
Target Recognition," IEEE Trans. on Neural Networks, March 1990, p. 28.
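Burr's comparison can be illustrated in miniature. Below is a minimal
1-nearest-neighbor classifier in NumPy (my own toy sketch with made-up data,
not code or data from any of the papers cited here):

```python
import numpy as np

def nearest_neighbor_predict(train_x, train_y, query):
    """Classify `query` with the label of the closest training point
    (Euclidean distance) -- the 1-NN rule that nets are compared against."""
    dists = np.linalg.norm(train_x - query, axis=1)
    return train_y[np.argmin(dists)]

# Toy two-class problem: points near (0,0) are class 0, near (5,5) class 1.
train_x = np.array([[0.0, 0.0], [1.0, 0.5], [5.0, 5.0], [4.5, 5.5]])
train_y = np.array([0, 0, 1, 1])

print(nearest_neighbor_predict(train_x, train_y, np.array([0.5, 0.2])))  # -> 0
print(nearest_neighbor_predict(train_x, train_y, np.array([5.2, 4.8])))  # -> 1
```

The appeal of 1-NN as a baseline is that it has no training phase at all,
which makes the near-parity Burr reports a meaningful bar for a net to clear.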
 
jose luis contreras-vidal

From shaw_d@clipr.colorado.edu Fri Jul 20 15:38:21 1990
Date: 20 Jul 90 15:32:00 MDT
From: "Dave Shaw" <shaw_d@clipr.colorado.edu>
Subject: RE: Networks for pattern recognition problems?
To: "fozzard" <fozzard@alumni.colorado.edu>

Rich- the network configurations we have used are all single-hidden-layer
networks of varying size (except for one network with no hidden layer). Hidden
layer size has been varied from 1 to 30 units. Input layer = 17 units, output
layer = 1 unit. All activation functions are sigmoidal. As I indicated before,
there was essentially no difference between any of the networks. We are moving
towards a paper (one at least), and this work will likely be included as part
of my dissertation as well.
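For concreteness, a forward pass through a network of the shape described
above (17 sigmoid inputs to a small sigmoid hidden layer to a single sigmoid
output) can be sketched as follows; this is my own illustration with random
weights, not Dave Shaw's code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass through a single-hidden-layer network with
    sigmoidal activations at both layers."""
    h = sigmoid(x @ w_hidden + b_hidden)   # hidden layer (1..30 units)
    return sigmoid(h @ w_out + b_out)      # single output unit

rng = np.random.default_rng(0)
n_in, n_hidden = 17, 5                     # 17 inputs, 5 hidden units
x = rng.normal(size=n_in)
w_hidden = rng.normal(size=(n_in, n_hidden))
b_hidden = np.zeros(n_hidden)
w_out = rng.normal(size=n_hidden)
b_out = 0.0

y = forward(x, w_hidden, b_hidden, w_out, b_out)
print(0.0 < y < 1.0)  # sigmoid output always lies in (0, 1)  -> True
```

With a single output unit in (0, 1), such a network is typically trained
against a 0/1 target, which is why it is directly comparable to the
statistical classifiers discussed elsewhere in this thread.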

Dave


reynolds@bucasd.bu.edu (John Reynolds) (08/01/90)

I suggest you write 

Sheri Gish 
IBM Knowledge Based Systems
2800 Sand Hill Road
Menlo Park, CA 94025

or 

W.E. Blanz
IBM Research
Almaden Research Center
650 Harry Road
San Jose, CA 95120

and request their recently (6/19/89) published research report
"Comparing a Connectionist Trainable Classifier with Classical
Statistical Decision Analysis Methods"  (report # RJ 6891 (65717))

Their report critically compares the performance of a connectionist
classifier (simple backprop) with a Gaussian classifier and three polynomial
(linear, quadratic, and cubic) classifiers on a variety of data sets. The
results unambiguously support the connectionist system as a viable
alternative to the standard techniques, especially for larger problems. In
every case its results are comparable to or better than the other methods.

The data sets are designed to test the classifiers' success in handling
(1) different degrees of separability, (2) overlapping distributions,
(3) outliers, where the connectionist system is *far* superior to all but the
cubic polynomial classifier (it achieved perfect classification, whereas the
linear polynomial classifier had a 63.3% error rate on both the test set and
the training set), and (4) non-informative features. The connectionist system
also did better than all but the cubic polynomial solution in a real-world
image classification task. They also found that while the standard techniques
were cheaper for small problems, for problems of realistic size the
connectionist system was superior.
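To see what the classical side of such a comparison looks like, a polynomial
classifier can be built by regressing 0/1 class labels on polynomial features
of the input and thresholding the fit at 0.5. The toy sketch below (my own
illustration on synthetic 1-D data, not the Gish/Blanz experiments) shows the
linear classifier failing on a nonlinear class boundary where the cubic
succeeds:

```python
import numpy as np

def poly_features(x, degree):
    """Expand 1-D inputs into polynomial features [1, x, ..., x^degree]."""
    return np.vstack([x ** d for d in range(degree + 1)]).T

def fit_poly_classifier(x, y, degree):
    """Least-squares polynomial discriminant: regress 0/1 labels on
    polynomial features of x."""
    w, *_ = np.linalg.lstsq(poly_features(x, degree), y, rcond=None)
    return w

def poly_predict(w, x, degree):
    """Classify by thresholding the fitted polynomial at 0.5."""
    return (poly_features(x, degree) @ w > 0.5).astype(int)

# Toy 1-D problem: class 1 occupies the middle interval, so no linear
# discriminant can separate the classes, but a cubic (which contains a
# quadratic term) can come close.
x = np.linspace(-3, 3, 61)
y = (np.abs(x) < 1).astype(float)

errors = {}
for degree in (1, 3):
    w = fit_poly_classifier(x, y, degree)
    errors[degree] = np.mean(poly_predict(w, x, degree) != y)

print(errors[1] > errors[3])  # the cubic fit has the lower error rate -> True
```

This mirrors the pattern in the report: the higher-degree polynomial
classifiers close much of the gap to the connectionist system, at the cost of
a rapidly growing feature set as input dimensionality rises.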

-john

chrisley@csli.Stanford.EDU (Ron Chrisley) (08/02/90)

Another reference:

We compared Kohonen's LVQ and LVQ2 to kNN and Parametric Bayes classifiers in
our 1988 paper "Statistical Pattern Recognition with Neural Networks:
Benchmarking Studies" at ICNN.  In it, we found the following results:

Task		P. Bayes	kNN		LVQ	LVQ2

Test1		12.1		12.0		10.2	9.8

Test2		13.8		12.1		13.2	12.0

The numbers are error percentages.  The tests were real speech data (15 
dimensional inputs, 1550 samples).  Error rates are for performance on test
data, not training data!
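For reference, the LVQ1 update rule being benchmarked here is simple to state:
find the codebook vector nearest to a training sample, then pull it toward the
sample if their labels match and push it away otherwise. A minimal sketch (my
own illustration with synthetic Gaussian data, not the benchmark code):

```python
import numpy as np

def lvq1_step(codebook, labels, x, y, lr=0.1):
    """One LVQ1 update: move the nearest codebook vector toward x if its
    label matches y, away from x otherwise."""
    i = np.argmin(np.linalg.norm(codebook - x, axis=1))
    sign = 1.0 if labels[i] == y else -1.0
    codebook[i] += sign * lr * (x - codebook[i])

def lvq1_predict(codebook, labels, x):
    """Classify x with the label of the nearest codebook vector."""
    return labels[np.argmin(np.linalg.norm(codebook - x, axis=1))]

rng = np.random.default_rng(1)
# Two well-separated Gaussian classes in 2-D; one codebook vector per class.
class0 = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
class1 = rng.normal([3.0, 3.0], 0.5, size=(50, 2))
codebook = np.array([[1.0, 1.0], [2.0, 2.0]])
labels = np.array([0, 1])

for x, y in zip(np.vstack([class0, class1]), [0] * 50 + [1] * 50):
    lvq1_step(codebook, labels, x, y)

print(lvq1_predict(codebook, labels, np.array([0.2, -0.1])))  # class 0 region
print(lvq1_predict(codebook, labels, np.array([3.1, 2.9])))   # class 1 region
```

LVQ2 refines this by updating the two nearest codebook vectors when a sample
falls in a window near the decision boundary, which is consistent with its
slightly lower error rates in the table above.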

We also made comparisons against other neural nets (BP and Boltzmann
Machines), and found that as the dimensionality of the task got larger, and as
the tasks got more difficult (less deterministic), LVQ did better than BP, but
not as well as the BM, which was expensive in terms of time and resources.

Hope this is of interest/use.

-- 
Ron Chrisley    chrisley@csli.stanford.edu
Xerox PARC SSL                               New College
Palo Alto, CA 94304                          Oxford OX1 3BN, UK
(415) 494-4728                               (865) 793-484