[comp.ai.neural-nets] NEURON Digest V2 #24 - Reprints from AIList

NEURON-Request@ti-csl.csc.ti.COM (NEURON-Digest moderator Michael Gately) (10/21/87)

NEURON Digest	Wed Oct 21 09:01:28 CDT 1987   Volume 2 / Issue 24
Today's Topics:
      Re: Neural Networks & Unaligned fields <AIL V5 #209>
      Re: neural net conference <AIL V5 #209>
      Mactivation <AIL V5 #211>
      Re: Neural Networks & Unaligned fields <AIL V5 #211>
      Re: Neural Net Literature, shifts in attention <AIL V5 #219>
      Neural Net Literature <AIL V5 #219>
      Neural computing / Speech processing <NL-KR V3 #26>
      Boltzmann Machine <AIL V5 #221>
      IEEE ASSP and Hinton's recirculation algorithm <AIL V5 #221>

----------------------------------------------------------------------

Date: 9 Sep 87 13:54:23 GMT
From: PT!cadre!geb@cs.rochester.edu  (Gordon E. Banks)
Subject: Re: Neural Networks & Unaligned fields <AIL V5 #209>
 
In article <3523@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP
(Stephen Smoliar) writes:
>In article <277@ndmath.UUCP> milo@ndmath.UUCP (Greg Corson) writes:
>>Ok, here's a quick question for anyone who's getting into Neural Networks.
>>If you set up the type of network described in BYTE this month, or the
>>type used in the program recently posted to the net, what happens if you
>>feed it an input image that is not aligned right?
 
I didn't see the Byte article, but the simple neural networks that
I have seen (such as the one that solves the T-C problem by Hinton
& Rumelhart in the PDP book) do not generalize very well.  You can
train the hidden units with a given input, but then if you shift the
pattern, it won't work.  I asked Rumelhart about this, and he said
that once the hidden units develop the patterns (edge detectors,
center-surround units, and so on), you do not need to retrain for each
translation of the pattern, but you do need to add more units to the
network.  These
units have the same weights as the previously trained units, but they
have a different field of view.  You have to have another set of units
for each region which can possibly contain the image.  Alternatively,
you have to have a scheme for making sure the image is "centered" in
the field of view.  Sounds like there is some room for interesting
research here, maybe a thesis.
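
A toy sketch of the idea (illustrative only; nothing like this appears
in the PDP book): one trained detector replicated over every position
of the input, so that a shifted pattern excites a shifted copy of the
same unit.

    import numpy as np

    def replicated_response(image, weights):
        # Each position i acts as a separate hidden unit: the same
        # shared weights, but a different field of view (a sliding
        # dot product, i.e. correlation).
        k = len(weights)
        return np.array([image[i:i + k] @ weights
                         for i in range(len(image) - k + 1)])

    pattern = np.array([1.0, 2.0, 1.0])  # feature the detector learned
    weights = pattern.copy()             # matched-filter stand-in for
                                         # the trained weights

    image = np.zeros(10)
    image[2:5] = pattern                 # pattern at position 2
    shifted = np.zeros(10)
    shifted[6:9] = pattern               # same pattern, shifted right

    # The peak response moves with the pattern; no retraining needed.
    print(np.argmax(replicated_response(image, weights)))    # -> 2
    print(np.argmax(replicated_response(shifted, weights)))  # -> 6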

------------------------------

Date: 9 Sep 87 15:00:36 GMT
From: caasi%sdsu.UUCP@sdcsvax.ucsd.edu (Richard Caasi)
Subject: Re: neural net conference <AIL V5 #209>
 
 
In response to those who asked (and because email didn't work too
well): there was a session devoted to Speech Recognition and Synthesis
at the neural net conference (San Diego, June '87).
The papers were:
 
    Issues and Problems in Speech Processing with Neural Networks
    Learning Phonetic Features Using Connectionist Networks: An
      Experiment in Speech Recognition
    A Neural Network Model for Speech Recognition Using the Generalized
      Delta Rule for Connection Strength Modification
    Neural Networks for the Auditory Processing and Recognition of Speech
    Multilayer Perceptrons and Automatic Speech Recognition
    Neural Net Classifiers Useful for Speech Recognition
    Isolated Word Recognition with an Artificial Neural Network
    Recent Developments in a Neural Model of Real-Time Speech Analysis
      and Synthesis
    Concentrating Information in Time: Analog Neural Networks with
      Possible Applications to Speech Recognition
    Guided Propagation Inside a Topographic Memory
    The Implementation of Neural Network Technology

------------------------------

Date: 4 Sep 87 02:41:49 GMT
From: hao!boulder!mikek@husc6.harvard.edu  (Mike Kranzdorf)
Subject: Mactivation <AIL V5 #211>
 
 
> I have seen inquiries around here about neural net simulators.  I have
> written a program called Mactivation which simulates single and double
> layer networks which can be viewed as matrix-vector multipliers.
 
 
        Would someone who has received a copy of Mactivation please post it?
My Mac doesn't talk to the net yet (no modem cord for my new SE).
Preferably someone with 2.02 - it's a little faster but no big deal.
I suppose comp.binaries.mac and comp.doc are the right places.
You are all still welcome to write to me for it; posting will just make
it more accessible.  I'll be sure to post when there's an update.
Thanks much.
 
--mike                          mikek@boulder.colorado.edu

------------------------------

Date: 4 Sep 87 16:13:31 GMT
From: boulder!mikek@boulder.colorado.edu (Mike Kranzdorf)
Reply-to: boulder!mikek@boulder.colorado.edu (Mike Kranzdorf)
Subject: Re: Neural Networks & Unaligned fields <AIL V5 #211>
 
 
        The second reference above is correct, but fails to mention work
by Fukushima and Mozer.  These multi-layer networks are able to form
an internal distributed representation of a pattern on an input retina.
They demonstrate very good shift and scale invariance.  The new and
improved neocognitron (Fukushima) can even recognize multiple patterns
on the retina.
 
--mike                                  mikek@boulder.colorado.edu

------------------------------

Date: 7 Sep 87 05:47:19 GMT
From: maiden@sdcsvax.ucsd.edu (VLSI Layout Project)
Reply-to: maiden@sdcsvax.ucsd.edu (VLSI Layout Project)
Subject: Re: Neural Networks & Unaligned fields <AIL V5 #211>
 
 
In article <12331701930.42.LAWS@KL.SRI.Com> AIList-Request@SRI.COM writes:
>The current networks will generally fail to recognize shifted patterns.
>All of the recognition networks I have seen (including the optical
>implementations) correlate the image with a set of templates and then
>use a winner-take-all subnetwork or a feedback enhancement to select
>the best-matching template.
[some lines deleted]
>                                       -- Ken
>-------
 
There are a number of networks that will recognize shifts in position.
Among them are optical implementations (see SPIE by Psaltis at CalTech)
and the Neocognitron (Biol. Cybern., by Fukushima).  The first neocognitron
article dates to 1978; the latest is from 1987.  There have been a
number of improvements, including shifts in attention.
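
For concreteness, a toy version of the template-correlation scheme Ken
describes, and the shift failure he points out (all of this is
illustrative; it is not drawn from any of the cited implementations):

    import numpy as np

    def winner_take_all(image, templates):
        # Correlate the input with each stored template, then let the
        # best match suppress the rest (argmax stands in for the
        # settling of a winner-take-all subnetwork).
        scores = np.array([float(np.sum(image * t)) for t in templates])
        winner = np.zeros(len(templates))
        winner[np.argmax(scores)] = 1.0
        return winner, scores

    templates = [np.eye(3), np.fliplr(np.eye(3))]  # toy "\" and "/"
    probe = np.eye(3)
    print(winner_take_all(probe, templates)[0])    # [1. 0.] -- "\" wins

    # Shift the probe one pixel and the "\" pattern now correlates
    # better with the "/" template than with its own.
    shifted = np.roll(probe, 1, axis=1)
    print(winner_take_all(shifted, templates)[1])  # scores: [0. 1.]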
 
 Edward K. Y. Jung
 ------------------------------------------------------------------------
 1. If the answer to life, the universe and everything is "42"...
 2. And if the question is "what is six times nine"...
 3. Then God must have 13 fingers.
 ------------------------------------------------------------------------
 UUCP: {seismo|decwrl}!sdcsvax!maiden     ARPA: maiden@sdcsvax.ucsd.edu

------------------------------

Date: 19 Sep 87 18:18:56 GMT
From: maiden@sdcsvax.ucsd.edu  (VLSI Layout Project)
Subject: Re: Neural Net Literature, shifts in attention <AIL V5 #219>
 
Someone sent me mail about a citation for Fukushima's network that
handled "shifts in attention".  I lost the address.  If that person
receives this information through this channel, I'd appreciate an
e-mail reply.
 
"A Neural Network Model for Selective Attention in Visual Pattern
   Recognition," K. Fukushima, _Biological Cybernetics_ 55: 5-15 (1986).
 
"A Hierarchical Neural Network Model for Associative Memory,"
   K. Fukushima, _Biological Cybernetics_ 50: 105-113 (1984).
 
"Neocognitron: A Self-organizing Neural Network Model for a Mechanism
   of Pattern Recognition Unaffected by Shift in Position,"
   K. Fukushima, _Biological Cybernetics_ 36: 193-202 (1980).
 
The same person also asked about vision-like systems, so here are some
interesting physiologically grounded network papers:
 
"A Self-Organizing Neural Network Sharing Features of the Mammalian
   Visual System," H. Frohn, H. Geiger, and W. Singer, _Biological
   Cybernetics_ 55: 333-343 (1987).
 
"Associative Recognition and Storage in a Model Network of
   Physiological Neurons," J. Buhmann and K. Schulten, _Biological
   Cybernetics_ 54: 319-335 (1986).
 
Concerning selection:
 
"Neural networks that learn temporal sequences by selection," S. Dehaene,
   J. Changeux, and J. Nadal, _Proceedings of the National Academy of
   Sciences, USA_ 84: 2727-2731 (1987).
 
I hope this is of help.  I apologize for the delay; my bibliography on
neural networks spans an entire file cabinet and is severely disorganized
after the last move.
 
Edward K. Y. Jung
------------------------------------------------------------------------
UUCP: {seismo|decwrl}!sdcsvax!maiden     ARPA: maiden@sdcsvax.ucsd.edu

------------------------------

Date: 18 Sep 87 13:49:36 GMT
From: ihnp4!homxb!homxc!del@ucbvax.Berkeley.EDU  (D.LEASURE)
Subject: Neural Net Literature <AIL V5 #219>
 
In article <598@artecon.artecon.UUCP>, donahue@artecon.artecon.UUCP
(Brian D. Donahue) writes:
> Does anyone know of a good introductory article/book to neural networks?
 
We're using Rumelhart and McClelland's two-volume set (I've heard a
rumor that a third volume is out) on Parallel Distributed Processing
in a seminar at Rutgers. I've only read 8 chapters of it, but it
covers a lot of ground in neuroscience, cognitive psychology
(though some would disagree that such models are really cog-psy),
and computing. I recommend it. It's only $25 for both volumes in
paperback.
--
David E. Leasure - AT&T Bell Laboratories - (201) 615-5307

------------------------------

Date: Thu, 17 Sep 87 13:21 EDT
From: Andrew Jeavons <andrewj@crta.UUCP>
Subject: Neural computing / Speech processing <NL-KR V3 #26>
 
Does anyone have any information on systems derived from neural
network theory being used for speech recognition? Any references to
the literature or names of companies would be appreciated.
 
Thanks
---
Andrew Jeavons
 
Quantime Ltd	Quantime Corp
London		Cincinnati OH
England		USA
 
USA : ..!oucs!crta!andrewj    UK : ..!ukc!qtlon!andrew
 
"Psychology theories are like used cars .
 Every few years you get a new one ."

------------------------------

Date: Sun, 27 Sep 87 15:25:30 EDT
From: Ali Minai <amres%uvaee.ee.virginia.edu@RELAY.CS.NET>
Subject: Boltzmann Machine <AIL V5 #221>
 
While reading two different references about the Boltzmann Machine, I came
across something I did not quite understand. I am sure that there is a
perfectly reasonable explanation, and would be glad if someone could point
it out.
 
In chapter 7 of PARALLEL DISTRIBUTED PROCESSING (Vol 1), by Hinton and
Sejnowski, the authors define Pij+ as the probability of units i and j
being on when ALL visible units are being clamped, and Pij- as the
probability of i and j being on when NONE of the visible units are
being clamped (pp. 294, 296). They then present the expression
for the gradient of G with respect to weights Wij as -1/T (Pij+ - Pij-).
 
However, in the paper entitled LEARNING SYMMETRY GROUPS WITH HIDDEN
UNITS: BEYOND THE PERCEPTRON, by Sejnowski, Kienker and Hinton, in
Physica 22D (1986), pp 260-275, it is explicitly stated that Pij+
is the probability when ALL visible units (input and output) are being
clamped, BUT Pij- is the probability of i and j being on when ONLY THE
INPUT UNITS ARE CLAMPED (p. 264). So there seems to be no concept of
FREE-RUNNING here.
 
Since the expression for dG/dWij is the same in both cases, the
definitions of Pij- must be equivalent. The only explanation I could
think of was that "clamping" the inputs ONLY was the same thing as letting
the environment have a free run of them, so the case being described is
the free-running one. If that is true, obviously there is no contradiction,
but the terminology sure is confusing. If that is not the case, would
someone please explain?
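
For reference, here is the chapter-7 procedure as I read it, as a
sketch (illustrative only, not Hinton and Sejnowski's code): estimate
Pij+ with the visible units clamped to the data, estimate Pij- with
nothing clamped, and move each weight along Pij+ - Pij-.

    import numpy as np

    rng = np.random.default_rng(0)

    def gibbs_step(state, W, T, free_units):
        # One sweep of stochastic updates: unit k turns on with
        # probability sigmoid(net input / T); clamped units are skipped.
        for k in free_units:
            p_on = 1.0 / (1.0 + np.exp(-(W[k] @ state) / T))
            state[k] = float(rng.random() < p_on)
        return state

    def cooccurrence(W, T, clamped, n_units, steps=50):
        # Estimate Pij = Pr(units i and j both on) at equilibrium,
        # with the units in `clamped` held fixed and the rest free.
        free = [k for k in range(n_units) if k not in clamped]
        state = np.zeros(n_units)
        for k, v in clamped.items():
            state[k] = v
        p = np.zeros((n_units, n_units))
        for _ in range(steps):
            state = gibbs_step(state, W, T, free)
            p += np.outer(state, state)
        return p / steps

    n_vis, n_hid = 2, 2
    n = n_vis + n_hid
    W = np.zeros((n, n))  # symmetric weights, zero diagonal
    T = 1.0
    patterns = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

    for epoch in range(20):
        # Positive phase: ALL visible units clamped to a pattern.
        p_plus = np.mean([cooccurrence(W, T, dict(enumerate(v)), n)
                          for v in patterns], axis=0)
        # Negative phase: NOTHING clamped -- the free-running network.
        p_minus = cooccurrence(W, T, {}, n)
        dW = 0.1 * (p_plus - p_minus)  # descends G: dG/dWij = -1/T (Pij+ - Pij-)
        np.fill_diagonal(dW, 0.0)
        W += dW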
 
Also, can anyone point me to recent references on the Boltzmann
Machine?
 
Thanks,
 
       Ali.
 
====================
 
       Ali Minai,
       Department of Electrical Engg.
       University of Virginia,
       Charlottesville, Va 22901.
 
       ARPANET: amres@uvaee.ee.Virginia.EDU

------------------------------

Date: 24 Sep 87 17:49:54 GMT
From: ur-tut!mkh1@cs.rochester.edu  (Manoj Khare)
Subject: Re: Neural Networks & Unaligned fields <AIL V5 #221>

In article <1241@uccba.UUCP> finegan@uccba.UUCP (Mike Finegan) writes:
>In article <759@ucdavis.UUCP>, g451252772ea@ucdavis.UUCP (g451252772ea) writes:
>> > IEEE ASSP (Acoustics, Speech, and Signal Processing) April 1987,
>>
>>    I found the 4/87 issue (and the rest of 1987) , but not this article.
>> Are you certain of this reference?  Thanks...
>>
>I am not sure if it was April (I believe it was), but the whole journal is
>devoted to the subject of Neural Nets for that issue, and definitely exists.
>                                               - Mike Finegan
>                                               ...!{hal|pyramid}!uccba!finegan
 
 
 
The article "An Introduction to Computing with Neural Nets" by Richard P.
Lippmann appeared in IEEE ASSP magazine april 1987, pp 4-22.
 
Q. Does anybody know whether the book "Analog VLSI and Neural Systems" by
Carver A. Mead has been published yet, or is there any way I could get his
lecture notes for the related course at CalTech? Thanks in advance.
 
                 ..... Manoj Khare

------------------------------

Date: 27 Sep 87 06:43:56 GMT
From: deneb.ucdavis.edu!g451252772ea@ucdavis.ucdavis.edu  (Ron Goldthwaite)
Subject: IEEE ASSP and Hinton's recirculation algorithm <AIL V5 #221>
 
    Thanks for the help with the IEEE ASSP reference; indeed I was looking
at the journal, not the 'magazine' (two shelves up, higher than me).  It
appears worth the second trip.
    Now: Geoffrey Hinton claims to have a new 'recirculation' algorithm for
back-propagation, which is claimed to be 'more biologically realistic'
according to the Nature commentary reporting his claim (Nature, 7/9/87,
p. 107) (That's July, not Sept, for all you over-sea folk).  But only
that commentary has appeared; I don't know where (or if) Hinton has published
the algorithm itself.  The commentary only mentions 'a packed audience at
the Society of Experimental Psychology', not even stating where the meeting
was.
   Any ideas?
   Thanks - Ron Goldthwaite, Psychology & Animal Behavior, U.Cal. Davis
 
'Economics is a branch of ethics pretending to be a science;
 Ethology is a science, pretending relevance to ethics'

------------------------------

Date: 25 Sep 87 04:19:19 GMT
From: cbosgd!osu-cis!tut!dlee@ucbvax.Berkeley.EDU  (Dik Lee)
Subject: Re: Neural Networks & Unaligned fields <AIL V5 #221>

In article <1241@uccba.UUCP> finegan@uccba.UUCP (Mike Finegan) writes:
>In article <759@ucdavis.UUCP>, g451252772ea@ucdavis.UUCP (g451252772ea) writes:
>> > IEEE ASSP (Acoustics, Speech, and Signal Processing) April 1987,
>>
>>    I found the 4/87 issue (and the rest of 1987) , but not this article.
>> Are you certain of this reference?  Thanks...
>>
>I am not sure if it was April (I believe it was), but the whole journal is
>devoted to the subject of Neural Nets for that issue, and definitely exists.
 
Yes, the paper appeared in IEEE ASSP magazine, Apr. 1987. Be sure you are
looking at ASSP magazine, not Journal of ASSP; they are two different
publications.
 
- Dik Lee    Dept. CIS, Ohio State Univ.

------------------------------

End of NEURON-Digest
********************