[comp.ai.neural-nets] Neuron Digest V5 #4

neuron-request@HPLABS.HP.COM ("Neuron-Digest Moderator Peter Marvit") (01/16/89)

Neuron Digest	Sunday, 15 Jan 1989
		Volume 5 : Issue 4

Today's Topics:
	      Copies of DARPA Request for Proposals Available
		   Human Learning & Connectionist Models
			    Re: INNS application
			    Re: INNS application
	      medical applications of computer neural networks
		     Neural nets for spatial reasoning?
	      Neural Networks in Natural and Artificial Vision
		      PDP Vol III simulator on a MAC?
		    Post-processing of neural net output
		  Re: Post-processing of neural net output
		     Second SIMILARITY METRICS Posting
		     Submission-Neural Learning Methods
			      Re: talk at ICSI
		  VLSI Implementations of Neural Networks


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
ARPANET users can get old issues via ftp from hplpm.hpl.hp.com (15.255.16.205).

------------------------------------------------------------

Subject: Copies of DARPA Request for Proposals Available
From:    will@ida.org (Craig Will)
Date:    Tue, 03 Jan 89 10:50:14 -0500 


      Copies of DARPA Request for Proposals Available

     Copies of the DARPA Neural Network Request for Proposals are now
available (free) upon request.  This is the same text as that published
December 16 in the Commerce Business Daily, but reformatted and with bigger
type for easier reading.  This version was sent as a 4-page "Special
supplementary issue" to subscribers of Neural Network Review in the United
States.

     To get a copy mailed to you, send your US postal address to either:

                       Michele Clouse
                  clouse@ida.org (milnet)

or:
                   Neural Network Review
                       P. O. Box 427
                   Dunn Loring, VA  22027

------------------------------

Subject: Human Learning & Connectionist Models
From:    gluck@psych.Stanford.EDU (Mark Gluck)
Date:    Thu, 05 Jan 89 07:17:32 -0800 

[[Editor's note: I hope Mark summarizes his responses and sends them in.  I,
for one, would be very interested in the results. -PM]]

I would be grateful to receive information about people using connectionist/
neural-net approaches within cognitive psychology to model human learning
and memory data. Citations to published work, information about work in
progress, and copies of reprints or preprints would be most welcome and
appreciated.

Mark Gluck
Dept. of Psychology
Jordan Hall; Bldg. 420
Stanford University
Stanford, CA 94305

  (415) 725-2434
  gluck@psych.stanford.edu.

------------------------------

Subject: Re: INNS application
From:    lehr@isl.Stanford.EDU (Michael Lehr)
Organization: Stanford University
Date:    01 Jan 89 02:52:37 +0000 

[[Editor's note:  This relates to a series of messages last fall about INNS
membership and the apparent lack of response from the organization. I hope
the controversy will be resolved herein. -PM]]

In article <19202@shemp.CS.UCLA.EDU> rosen@CS.UCLA.EDU () writes:
>Although I signed up (and paid) for my INNS membership last September in 
>Boston, I have yet to receive my first (or any) issue of the Neural Network
>journal, yet I have just received notice to renew my membership.
>Does anyone know when (or, at this point if) they will mail the journal
>to those of us who have paid for it?  I feel somewhat foolish renewing
>my membership and paying my dues at this point - I wonder just what I
>am paying for.
>       Bruce
>Bruce Rosen
>       ARPA:   rosen@CS.UCLA.EDU
>       UUCP:   ...!ucbvax!ucla-cs!rosen


Dr. Widrow asked me to relay this message in response to the journal problem:

I have received several complaints from INNS members about not receiving
their copies of the journal NEURAL NETWORKS.  If this has happened to you,
please accept my apology.  The person who is now in charge of membership is:

                Frank Polkinghorn
                9202 Ivanhoe Road
                Ft. Washington, MD 20744

Frank is working hard to alleviate the logjam.  I am pleased to report that
INNS has grown from zero to 4000 members in a little over one year.  We are
having some "growing pains."  If you do not receive your journals within
about a month, write to Frank and let him know the facts of your situation.
I am sure that he will get your journals to you soon.

                                  Sincerely, 
                                  Bernard Widrow
                                  President INNS


         -ml

------------------------------

Subject: Re: INNS application
Organization: The Santa Cruz Operation, Inc.
Date:    06 Jan 89 20:56:08 +0000 

[[Editor's note:  Well, at least one happy customer! -PM]]

>>In article <19202@shemp.CS.UCLA.EDU> rosen@CS.UCLA.EDU () writes:
>>I paid also in Sept. --- just received my renewal notice --- yet I have
>>never received the INNS journal. 

I paid in September, heard nothing until November, and then received Vol. 1,
No. 4 from Pergamon Press.  Then, just yesterday, Vol. 1, Nos.  1-3 arrived,
as well as the abstracts from the September INNS meeting held in Boston.  So
now I'll probably renew.

FYI, here's the address for Pergamon Press:

Pergamon Journals, Inc.
Fairview Park
Elmsford   NY  10523

------------------------------

Subject: medical applications of computer neural networks
From:    sid@koko.UUCP (Dave Sidney)
Organization: Calif. State Univ., Stanislaus, Turlock, Ca
Date:    29 Dec 88 09:44:14 +0000 

Anybody have any information on current medical research or applications
using computer neural networks?  What is being done where, by whom, and with
what results?

------------------------------

Subject: Neural nets for spatial reasoning?
From:    engelson-sean@CS.YALE.EDU (Sean Philip Engelson)
Organization: Yale University, New Haven, CT
Date:    04 Jan 89 03:51:04 +0000 


I'm looking for references on neural-net models of spatial reasoning tasks.
Any sort of spatial reasoning, learning or retrieval tasks involving
connectionist architectures would be appreciated.

Thanks,
        -Sean-


Sean Philip Engelson, Gradual Student   Mi ha'ish hachafetz chayim,
Yale Department of Computer Science     Ohev yamim, lir'ot tov?
51 Prospect St.                         Sur mera` v`aseh tov,
New Haven, CT 06520                     Bakesh shalom, v'rodfehu!


------------------------------

Subject: Neural Networks in Natural and Artificial Vision
From:    daugman%charybdis@harvard.harvard.edu (j daugman)
Date:    Fri, 06 Jan 89 10:41:42 -0500 


For preparation of 1989 conference tutorials and reviews, I would be
grateful to receive any available preprints or reprints reporting research on
network models of human / biological vision and applications in artificial
vision.  Thanks in advance.

John Daugman
Harvard University
950 William James Hall
Cambridge, Mass.  02138

------------------------------

Subject: PDP Vol III simulator on a MAC?
From:    Keith Stenning <keith%epistemi.edinburgh.ac.uk@NSS.Cs.Ucl.AC.UK>
Date:    Sun, 15 Jan 89 14:34:56 +0000 

[[ Editor's note:  This seems to be a periodic request, but I still haven't
heard of any source.  Can a reader help out here? -PM ]]

I am looking for a version of the CMU PDP Vol. 3 package of
PDP simulations converted for the MAC. That's the package that comes
with McClelland and Rumelhart Vol III. Can anyone help?

Keith Stenning (keith@epistemi.ed.ac.uk)

------------------------------

Subject: Post-processing of neural net output
From:    mesard@BBN.COM
Date:    Wed, 28 Dec 88 16:40:39 -0500 

Many existing feedforward nets take an input vector and produce an encoded
boolean response on their outputs (e.g., yes/no, left/right, signal/noise).

Typically, the output is interpreted by checking whether the activation of a
single output unit exceeds some threshold, or whether the activation of one
unit is larger than another's.

This approach may mean throwing away a lot of information.  Even if the
activation fails to meet some criterion, it might be useful as a similarity
measure, or a rating of the net's confidence in its output.  Furthermore,
the criterion could be adjusted based on the discriminability between outputs
in the positive and negative cases.
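The two interpretation schemes above can be sketched in a few lines of
Python.  This is an editorial illustration, not code from the poster; the
function name, threshold, and activation values are all invented.

```python
# Sketch: hard thresholding of a net's output vector vs. keeping the
# activation margin as a graded confidence score instead of discarding it.

def interpret(outputs, threshold=0.5):
    """Return (label, confidence) for a list of output activations.

    label: index of the most active unit, or None if it fails to clear
    the threshold; confidence: margin between the two largest
    activations, retained as a similarity/confidence measure.
    """
    ranked = sorted(range(len(outputs)), key=lambda i: outputs[i], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    margin = outputs[best] - outputs[runner_up]
    label = best if outputs[best] >= threshold else None
    return label, margin

# A confident response vs. a borderline one: the margin distinguishes
# them even when the hard label alone would not.
print(interpret([0.91, 0.12]))
print(interpret([0.48, 0.45]))
```

Even when the second call returns no label, the small margin is itself
informative, which is precisely the information a bare threshold throws away.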

Does anyone know of any work that has been done in this area?

void Wayne_Mesard();   Mesard@BBN.COM   Bolt Beranek and Newman, Cambridge, MA

------------------------------

Subject: Re: Post-processing of neural net output
From:    terry@cs.jhu.edu (Terry Sejnowski <terry@cs.jhu.edu>)
Date:    Fri, 30 Dec 88 16:32:32 -0500 

The value of an output unit is highly correlated with the confidence of a
binary categorization.  In our study of predicting protein secondary
structure (Qian and Sejnowski, J. Molec. Biol., 202, 865-884) we have
trained a network to perform a three-way classification.  Recently we have
found that the real value of the output unit is highly correlated with the
probability of correct classification of new, testing sequences.  Thus, 25%
of the sequences could be predicted correctly with 80% or greater
probability even though the average performance on the training set was only
64%.  The highest value among the output units is also highly correlated
with the difference between the largest and second largest values.  We are
preparing a paper for publication on these results.
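The kind of analysis described above (accuracy as a function of output-unit
confidence) might be sketched as follows.  The records here are made-up
illustrative data, not results from the cited study.

```python
# Sketch: bin held-out test cases by the winning unit's activation and
# measure coverage and accuracy above a confidence cutoff.

def accuracy_above_cutoff(records, cutoff):
    """records: list of (max_activation, correct) pairs.

    Returns (coverage, accuracy) over the cases whose winning
    activation is at least `cutoff`.
    """
    kept = [correct for activation, correct in records if activation >= cutoff]
    if not kept:
        return 0.0, 0.0
    coverage = len(kept) / len(records)
    accuracy = sum(kept) / len(kept)
    return coverage, accuracy

# Invented data: high-activation cases tend to be classified correctly.
records = [(0.9, True), (0.85, True), (0.8, False), (0.55, True),
           (0.5, False), (0.45, False), (0.4, True), (0.3, False)]
cov, acc = accuracy_above_cutoff(records, 0.8)
```

Sweeping the cutoff over such records gives the trade-off Sejnowski
describes: a subset of confident predictions with above-average accuracy.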

Terry Sejnowski


------------------------------

Subject: Second SIMILARITY METRICS Posting
From:    king@rd1632.Dayton.NCR.COM (James King)
Organization: R&D, NCR Corp., Dayton, Ohio
Date:    12 Jan 89 15:46:57 +0000 



*** SECOND POSTING ON SIMILARITY METRICS ***

About a month ago I posted a request for information, and interest, in the
area of "similarity metrics".  I am posting a second call for information
now with the hope of furthering my base of understanding, and also to
develop a base for the semi-formal survey I will be sending all respondents
(this will be out in a week or so).

I have received feedback from about thirty people.  Most of the respondents
have described their interests in this area, and many have provided
abstractions of their methods for measuring similarity.  This is very
encouraging and I hope it continues.  The survey will also be sent to a
sizeable number of researchers that I know of already.  My hope is to make
this a cross-discipline study that provides insight from the Case-Based
Reasoning, Analogical Reasoning, EBL, Information Retrieval, Behavioral
Studies, Machine Learning, Child Psychology, etc.  fields.

At present a proposal for holding a workshop on this topic at IJCAI has been
decided against.  The topic may be presented as a panel discussion at a
focused Case-Based Reasoning Workshop this year.

   --------------------- Original (edited) posting follows -------------------

SIMILARITY      ...  What does it mean?
for ANALOGY          What are the measures?
for REMINDING        Are there generalities or is it domain-specific?
for EXEMPLARS   ...  Elicitation strategies, cues, weights, features, etc.

I am performing independent research in the area of Case-Based Reasoning,
CBR, and I am working on various metrics for similarity.

In general, what ideas do you (the net-world) have about:
   - What about a new situation reminds you of a prior experience?
   - OR
   - How does one situation remind you of another?
A little more focus: how does one discriminate and weigh the features of a
new situation (case) in relation to a large case-base of experiences that
may or may not have a bearing on the new situation?  Did that provide more
focus or fuzziness!?

This notice is sent out as a preliminary "attention-getter" to provide me
with some input to help shape a more formal survey.  Once written, I hope to
send it to a specific set of researchers (consisting mostly of people in the
CBR, information retrieval (IR), and document management areas) and to
anyone in netland who requests it.

If anyone is interested in responding to any of this:

   - I will watch the "nets" for replies
   - Email to:  j.a.king@dayton.ncr.com
   - Call:  (513)-445-1090 before 4:30 (EST) (317)-478-5910 after 6:00
   - Mail:  NCR Corp.  1700 S. Patterson  WHQ-5E  Dayton, OH 45479

The survey will be finished and sent in the next week or two, so please let
me know of your interest and what YOU might like to get out of such a study.

Thank you for your time.

Jim King 

------------------------------

Subject: Submission-Neural Learning Methods
From:    David Kanecki <kanecki@vacs.uwp.wisc.edu>
Date:    Mon, 09 Jan 89 20:02:07 -0600 


I am interested in studying the various learning rules used in neural
networks.  Can anyone send me an article reference or tutorial on the
following:

        First Order Delta Rule
        Second Order Delta Rule
        First Order Methods or Second Order Methods
                a. Delta Rule
                b. Hebb Rule
                c. Gradient Descent
                d. Back Propagation
                e. Kohonen's rule
                f. A.H. Klopf and B. Kosco rule
                g. Others not named above
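Two of the rules listed above can be sketched in a few lines for a single
linear unit.  This is an editorial illustration of the textbook forms, not
from any reference the poster requests; the learning rate and inputs are
arbitrary.

```python
# Sketches of the Hebb rule and the (first-order) delta rule for one
# linear unit with weight vector w and input vector x.

def hebb_update(w, x, y, lr=0.1):
    """Hebb rule: strengthen each weight in proportion to input * output."""
    return [wi + lr * y * xi for wi, xi in zip(w, x)]

def delta_update(w, x, target, lr=0.1):
    """Delta rule: move the weights along the error gradient for a
    linear unit, where error = target - w.x."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    error = target - y
    return [wi + lr * error * xi for wi, xi in zip(w, x)]

# Repeated delta-rule updates drive the unit's output toward the target.
w = [0.0, 0.0]
for _ in range(50):
    w = delta_update(w, [1.0, 0.5], target=1.0)
y = sum(wi * xi for wi, xi in zip(w, [1.0, 0.5]))
```

The delta rule reduces the output error on each step, while the Hebb rule
has no error term at all, which is the essential first-order distinction
between the two.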

In exchange I will compile a list and send it to the Neuron Digest.

Thank you,

David Kanecki,ACS/Bio. Sci.

------------------------------

Subject: Re: talk at ICSI
From:    baker%icsi.Berkeley.EDU@berkeley.edu (Paula Ann Baker)
Date:    Thu, 05 Jan 89 08:44:19 -0800 

[[ Editor's note: I got this notice and then lost it.  I did make it to the
talk, though, which was quite interesting.  Dr. Baggi, a music professor,
gave some very nice aural demonstrations of "computer-generated" swing, but
was a little short on explaining the actual connectionist implementation.
His approach would be best described as classic expert-system style, in
which he hand-fed the possibilities and structures and used the simulator
for not much more than "Mozartian dice."  So far, he seems to have found no
particular advantage to the connectionist architecture, apart from the ease
of programming "parallel" events.  Training the network to analyze swing and
*then* synthesize new music would be most intriguing, but he said that he
wouldn't work on that any time soon. -PM ]]


             The International Computer Science Institute
                     is pleased to present a talk:

                 Wednesday, January 4, 1989  12:00 p.m.

              "A Connectionist Model for the Synthesis of 
                      Swing in Afro-American Jazz" 

                             Dr. Denis Baggi
                    University of California, Berkeley

        The introduction of computer technology to problems of music and
musicology is as old as computer science and goes back to Charles Babbage
and Ada Lovelace. Starting with the Illiac Suite in 1956, considerable
progress has been achieved both in computer assisted sound synthesis and in
algorithms for symbol manipulation in composition and music analysis.
Several centers - Bell Labs., CCRMA at Stanford, IRCAM in Paris, the
Experimental Music Studio at MIT - have been dedicated to these problems.

        With less than half-a-dozen exceptions, however, no results of
applications of computer techniques have been obtained for the
psychoacoustical, perceptual problem of swing in Afro-American Jazz.
Properly, and holistically, defined, swing is the medium of the Jazz message
- as space is the medium of sculpting.  Technically, swing consists of
patterns of rhythmic accentuations and tension-release acoustical devices
which are almost instinctively perceived by the listener - though an
analytic description, let alone a teaching methodology, is elusive.

        This talk describes some work in progress for the construction of a
connectionist model for the synthesis of swing. Using the Rochester
Connectionist Simulator at the International Computer Science Institute in
Berkeley, a system is being developed which accepts as input the harmonic
grid of a Jazz standard and which constructs the lines played by a rhythm
section consisting of piano, bass and drums. The main program is built
around a connectionist network consisting of several automata running in
parallel such that control passes from one to the other. The output drives a
digital sound synthesis machine. Some preliminary results will be played and
discussed.
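The control scheme the abstract describes (several automata running in
parallel, with control passing from one to the other) might be sketched as
a round-robin of generators.  Everything here, including the instrument
names and event format, is an invented toy, not Dr. Baggi's system.

```python
# Toy sketch: each "automaton" is a generator for one rhythm-section
# instrument; a scheduler steps them in turn, passing control each beat.

def automaton(instrument, pattern):
    """Yield one event per turn, cycling through a rhythmic pattern."""
    beat = 0
    while True:
        yield (instrument, pattern[beat % len(pattern)])
        beat += 1

def play(automata, beats):
    """Round-robin scheduler: control passes from one automaton to the next."""
    events = []
    for b in range(beats):
        events.append(next(automata[b % len(automata)]))
    return events

band = [automaton("piano", ["comp", "rest"]),
        automaton("bass", ["root", "fifth"]),
        automaton("drums", ["ride", "snare"])]
events = play(band, 6)
```

In the real system the output of such a network would drive a digital sound
synthesis machine; here the events are simply collected in a list.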
        
      This talk will be held in the Main Lecture Hall at ICSI.
         1947 Center Street, Suite 600, Berkeley, CA  94704
  (On Center Street between Milvia and Martin Luther King Jr. Way)
        


------------------------------

Subject: VLSI Implementations of Neural Networks
From:    josh@flash.bellcore.com (Joshua Alspector)
Date:    Fri, 06 Jan 89 14:32:55 -0500 

I will be giving a tutorial on the above topic at the Custom Integrated
Circuits Conference.  Vu grafs are due at the end of February and I would
like to include as complete a description as possible of current efforts in
the VLSI implementation of neural networks.  I would appreciate receiving
any preprints or hard copies of vu grafs regarding any work you are doing.
E-mail reports are also acceptable.  Please send to:


Joshua Alspector
Bellcore, MRE 2E-378
445 South St.
Morristown, NJ 07960-1910

------------------------------

End of Neuron Digest
*********************