[comp.ai.neural-nets] Neuron Digest V4 #24

neuron-request@HPLABS.HP.COM (Neuron-Digest Moderator Peter Marvit) (11/16/88)

Neuron Digest   Tuesday, 15 Nov 1988
                Volume 4 : Issue 24

Today's Topics:
                      Re: GENETIC LEARNING ALGORITHMS
                      neural nets and finite elements
       Re: Schedule of remaining neural network talks this semester
                       Re: PDP prog:s source code fo
                       88 Connectionist Proceedings
                      Response to "Learning with NNs"
                         Frontiers in Neuroscience
               Object-Oriented Languages for NN Description
                 separability and unbalanced data discussion
         Cyberspace Implementation Issues (Tee into optic nerve?)
                    CA simulator for suns: ftp up again
                                AI Genealogy

Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"

------------------------------------------------------------

Subject: Re: GENETIC LEARNING ALGORITHMS
From:    brian@caen.engin.umich.edu (Brian Holtz)
Organization: U of M Engineering, Ann Arbor, Mich.
Date:    02 Nov 88 23:48:00 +0000 

Does anyone know of any references that describe classifier systems whose
messages are composed of digits that may take more than two values?  For
instance, I want to use a genetic algorithm to train a classifier system to
induce lexical gender rules in Latin.  Has any work been done on managing
the complexity of going beyond binary-coded messages, or (better yet)
encoding characters in messages in a useful, non-ASCIIish way?  I will
summarize and post any responses.
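
[[ Editor's Note: For concreteness, here is a minimal sketch of genetic
operators over a k-valued message alphabet, in Python.  Everything in it
(the alphabet, the rates, the function names) is invented for illustration
and is not taken from any published classifier system. ]]

    import random

    ALPHABET = "012345"   # hypothetical 6-valued digit set instead of {0,1}

    def mutate(message, rate=0.02):
        # Point mutation: each position is replaced by a random symbol
        # (possibly itself) with probability `rate`.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in message)

    def crossover(a, b):
        # Single-point crossover works unchanged for any alphabet size;
        # it is the combinatorics, not the operator, that grows with k.
        cut = random.randint(1, len(a) - 1)
        return a[:cut] + b[cut:], b[:cut] + a[cut:]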


In article <7104@bloom-beacon.MIT.EDU>, thefool@athena.mit.edu (Michael A. de la Maza) writes:
> 
> Has anyone compiled a bibliography of gla articles/books?  


In "Classifier Systems and Genetic Algorithms" (Cognitive Science and
Machine Intelligence Laboratory Technical Report No. 8) Holland lists some
80 or so applications of GAs, and offers a complete bibliography to
interested parties.  He can be reached at the EECS Dept., Univ. of
Michigan, Ann Arbor MI 48109 (he doesn't seem to have an obvious email
address here...).  You can get a copy of the technical report from
Sharon_Doyle@ub.cc.umich.edu.

------------------------------

Subject: neural nets and finite elements
From:    buc@Jessica.stanford.edu (Robert Richards)
Organization: Stanford University
Date:    05 Nov 88 21:28:12 +0000 


Does anyone have any information or pointers to articles which deal with
the use of neural nets in finite element analysis?  I am especially
interested in the application of stochastic optimization techniques, such
as genetic algorithms or simulated annealing, to determine the optimal
geometry of the part being modeled.
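
[[ Editor's Note: For readers new to the technique, a generic simulated-
annealing loop in Python.  The state representation, cost function, and
neighbor move are deliberately left abstract; nothing here is specific to
finite element models. ]]

    import math, random

    def anneal(state, cost, neighbor, t0=1.0, cooling=0.999, steps=10000):
        # Accept any improving move; accept a worsening move with
        # probability exp(-delta/T), then lower the temperature T.
        current, best = state, state
        t = t0
        for _ in range(steps):
            candidate = neighbor(current)
            delta = cost(candidate) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = current
            t *= cooling
        return best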

Thank you in advance.


        Rob Richards
        Stanford University

------------------------------

Subject: Re: Schedule of remaining neural network talks this semester
From:    tek@cmx.npac.syr.edu (Aman U. Joshi)
Organization: Northeast Parallel Architectures Center, Syracuse NY
Date:    06 Nov 88 23:43:33 +0000 

In article <1072@cseg.uucp> are@hcx.uucp (ALAN RAY ENGLAND) writes:
>
>
>On 11/18/88 E. Tzanakou will give a talk titled "ALOPEX: Another 
>optimization method."  As a PhD student studying optimal learning in
>neural networks I am intrigued by the title.  Could someone enlighten me 
>as to exactly what ALOPEX is?  A reference to a publication would be
>greatly appreciated.
>


I am working with Prof. Erich Harth, the inventor of "ALOPEX", at Syracuse
University, Syracuse, NY.  I have used his technique for 3-d crystal
formation, for pattern recognition, and (presently) for VLSI standard cell
placement.  "ALOPEX" originated as an abbreviation of "ALgorithm fOr
Pattern EXtraction", but is also the Greek word for 'fox'.

I have had the opportunity to run it on the Connection Machine CM-2 from
Thinking Machines Corp. (perhaps the world's fastest machine), and the
results are fantastic.  Those interested in "ALOPEX" (an inherently
parallel stochastic optimization technique) should contact Prof. Erich
Harth, 201 Physics Building, Syracuse University, Syracuse, NY 13244, or
e-mail me at tek@cmx.npac.syr.edu.

Prof. Harth's phone number: (315) 443-2565.
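
[[ Editor's Note: A much-simplified illustration of the correlation idea
behind ALOPEX, as this editor understands it; this is not Prof. Harth's
code, and readers should consult his papers for the real algorithm.  Each
parameter is nudged in the direction that was correlated with a decrease
in the single global cost on the previous step, plus noise, so all
parameters can be updated in parallel from one scalar measurement. ]]

    import random

    def alopex_step(params, prev_deltas, prev_cost_change,
                    step=0.01, noise=0.01):
        # The caller tracks prev_deltas (last parameter changes) and
        # prev_cost_change (last change in the global cost).  If a
        # parameter's last change moved with a cost increase, reverse
        # direction; otherwise keep going.  The Gaussian noise is what
        # lets the procedure escape local minima.
        new_params = []
        for p, d in zip(params, prev_deltas):
            drift = -step if d * prev_cost_change > 0 else step
            new_params.append(p + drift + random.gauss(0.0, noise))
        return new_params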

Thank you,
Aman U. Joshi,
(315) 443-3573.
aujoshi@sunlab.npac.syr.edu

------------------------------

Subject: Re: PDP prog:s source code fo
From:    spl@cup.portal.com (Shawn P Legrand)
Organization: The Portal System (TM)
Date:    08 Nov 88 21:59:49 +0000 

For those looking for source code to accompany the Rumelhart et al. PDP
volumes, check out the "3rd volume", the "Handbook for PDP".  This book (a
spiral-bound paperback) comes with two diskettes of C source code dealing
with topics covered in the original two volumes.  If you would like more
specific information on this volume, send me some email and I would be
happy to respond.

                                                    Shawn P. Legrand, CCP

                                             +----------------------------+
                                             | spl@cup.portal.com         |
                                             |        or                  |
                                             | ...sun!cup.portal.com!spl  |
                                             +----------------------------+

[[ Editor's Note:  Is it available via ftp somewhere?  I also haven't seen a
Mac version announced.  Could someone enlighten the Digest? -PM ]]


------------------------------

Subject: 88 Connectionist Proceedings
From:    terry@cs.jhu.edu (Terry Sejnowski)
Date:    Tue, 08 Nov 88 19:41:15 -0500 

NOW AVAILABLE:

Proceedings of the 1988 Connectionist Models Summer School, edited by David
Touretzky, Geoffrey Hinton, and Terrence Sejnowski.

Available from:
   Morgan Kaufmann Publishers, Inc.
   Order Fulfillment Center
   P.O. Box 50490
   Palo Alto, CA 94303-9953
   tel. 415-965-4081

Cost is $24.95 plus $2.25 postage and handling ($4.00 for foreign orders).
For each additional volume ordered, increase postage by $1.00 (foreign,
$3.00).  Enclose full payment by check or money order.  California
residents please add sales tax.

Terry


------------------------------

Subject: Response to "Learning with NNs"
From:    goodhart@cod.nosc.mil (Curtis L. Goodhart)
Date:    Wed, 09 Nov 88 08:01:45 -0800 


>Subject: Learning with NNs
>From:    Dario Ringach <dario%TECHUNIX.BITNET@CUNYVM.CUNY.EDU>
>Date:    Wed, 19 Oct 88 13:36:32 +0200 
>
>Has anyone tried to approach the problem of learning in NNs from a
>computability-theory point of view?  For instance, let's suppose we use a
>multilayer perceptron for classification purposes.  What is the class of
>discrimination functions learnable with a polynomial number of examples
>such that the probability of misclassification will be less than P (using a
>particular learning algorithm, such as back-prop)?
>
>It seems to me that these types of questions are important if we really
>want to compare different learning algorithms and computational
>models.
>
>Does anyone have references to such work?  Any references will be
>appreciated!


One reference addressing computability in neural networks is the section on
"Computable Functions and Complexity in Neural Networks" by Omer Egecioglu,
Terrence R. Smith, and John Moody in the book "Real Brains, Artificial
Minds", edited by John L. Casti and Anders Karlqvist (North-Holland, 1987).
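
[[ Editor's Note: The quoted question is essentially the "probably
approximately correct" (PAC) framework of Valiant (1984).  As one standard
data point: for a finite hypothesis class H, drawing

    m \ge \frac{1}{\epsilon} \left( \ln |H| + \ln \frac{1}{\delta} \right)

examples suffices for any learner that outputs a hypothesis consistent
with the sample to achieve true error below \epsilon with probability at
least 1 - \delta. ]]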

            Curtis L. Goodhart


------------------------------

Subject: Frontiers in Neuroscience
From:    terry@cs.jhu.edu (Terry Sejnowski)
Date:    Wed, 09 Nov 88 19:06:29 -0500 

The latest issue of Science (4 November) has a special section on Frontiers
in Neuroscience.  The cover is a spectacular image of a Purkinje cell by
Dave Tank.  Four of the major reviews in the issue make contact with
network modeling: Tom Brown et al. on Long-Term Synaptic Potentiation;
Steve Lisberger on The Neural Basis for Learning of Simple Motor Skills;
Steve Wise and Bob Desimone on Insights into Seeing and Grasping; and Pat
Churchland and Terry Sejnowski on Perspectives on Cognitive Neuroscience.
See also the letter by Dave Tank et al. on Spatially Resolved Calcium
Dynamics of Mammalian Purkinje Cells in Cerebellar Slice.  This issue was
timed to coincide with the Annual Meeting of the Society for Neuroscience
in Toronto next week.

Terry


------------------------------

Subject: Object-Oriented Languages for NN Description
From:    Dario Ringach <dario%TECHUNIX.BITNET@CUNYVM.CUNY.EDU>
Date:    Fri, 11 Nov 88 09:58:15 +0200 

Can anyone provide me with references to high-level description languages
(preferably object-oriented ones, like the P3 simulation system) for
describing NN models?  What about CST (Concurrent Smalltalk)?  It seems
ideal for describing NNs to be simulated on massively parallel,
fine-grained architectures such as the J-Machine...  Has anyone any
experience to share on this topic?
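
[[ Editor's Note: For readers who have not seen such systems, the flavor
of an object-oriented network description is roughly as below.  This is an
invented Python miniature, not P3 or CST syntax. ]]

    class Layer:
        def __init__(self, name, size):
            self.name, self.size = name, size

    class Projection:
        # A full connection from every unit in `source` to every unit
        # in `target`; sparser patterns would subclass this.
        def __init__(self, source, target):
            self.source, self.target = source, target

    # Declarative description of a 2-4-1 feedforward net: the topology is
    # plain data, which a simulator (serial or parallel) can then compile.
    inp = Layer("input", 2)
    hid = Layer("hidden", 4)
    out = Layer("output", 1)
    net = [Projection(inp, hid), Projection(hid, out)]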

Thanks in advance.
--Dario.

------------------------------

Subject: separability and unbalanced data discussion
From:    Richard Rohwer <rr%eusip.edinburgh.ac.uk@NSS.Cs.Ucl.AC.UK>
Date:    Fri, 11 Nov 88 11:29:38 +0000 

In a close inspection of convergence ailments afflicting a multilayer net,
I found that the problem boiled down to a layer which needed to learn the
separable AND function, but wasn't learning it.  So I had a close look at
the LMS error function for AND, in terms of the weights from each of the
two inputs, the bias weight, and the multiplicities of each of the 4
exemplars in the truth table.  It turns out that the error cannot be made
exactly 0 (with finite weights), so minimization of the error involves a
tradeoff between the contributions of the 4 exemplars, and this tradeoff is
strongly influenced by the multiplicities.  It is not difficult to find the
minimum analytically in this problem, so I was able to verify that with my
highly unbalanced training data, the actual minimum was precisely where the
LMS algorithm had terminated, miles away from a reasonable solution for
AND.  I also found that balanced data puts the minimum where it "belongs".
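
[[ Editor's Note: A small numerical illustration of the effect described
above -- not Mr. Rohwer's actual experiment.  A single sigmoid unit is
trained by LMS gradient descent on AND, with the three 0-target exemplars
given 100 times the multiplicity of the 1-target exemplar; the minimum it
finds leaves (1,1) misclassified. ]]

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # AND truth table as (inputs, target, multiplicity) -- unbalanced.
    data = [((0, 0), 0, 100), ((0, 1), 0, 100),
            ((1, 0), 0, 100), ((1, 1), 1, 1)]

    w1 = w2 = b = 0.0
    for _ in range(20000):
        for (x1, x2), t, mult in data:
            y = sigmoid(w1 * x1 + w2 * x2 + b)
            g = mult * (y - t) * y * (1.0 - y)   # multiplicity-weighted LMS gradient
            w1 -= 0.005 * g * x1
            w2 -= 0.005 * g * x2
            b  -= 0.005 * g

    for (x1, x2), t, _ in data:
        print((x1, x2), "target", t,
              "output %.3f" % sigmoid(w1 * x1 + w2 * x2 + b))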

The relative importance of the different exemplars in the LMS error
function runs as the square root of the ratio of their multiplicities.  So
I solved my particular problem by turning to a quartic error function, for
which it is the 4th root of this ratio that matters.  (The p-norm, p-th
root of the sum of the p-th powers, approaches the MAX norm as p approaches
infinity, and 4 is much closer to infinity than 2.)
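
[[ Editor's Note: In symbols, the norm referred to is

    \|e\|_p = \Big( \sum_i |e_i|^p \Big)^{1/p}, \qquad
    \lim_{p \to \infty} \|e\|_p = \max_i |e_i|

so raising p from 2 to 4 shifts the tradeoff toward the worst per-exemplar
error and away from sheer multiplicity. ]]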

   ---Richard Rohwer, CSTR, Edinburgh

------------------------------

Subject: Cyberspace Implementation Issues (Tee into optic nerve?)
From:    "RCSDY::YOUNG%gmr.com"@RELAY.CS.NET
Date:    Fri, 11 Nov 88 09:22:00 -0400 

In article <10044@srcsip.UUCP> lowry@srcsip.UUCP () writes: 
> If you could "tee" into the optic nerve, it seems like you could feed 
> in pre-digested data at a much lower rate.

In Neuron Digest V4 #19, jdb9608@ultb.UUCP asks in response:
> I'm cross-posting ... to the neural-nets group in the hope
> that someone there will have some comment or idea on how a computer could
> possibly generate a consensual hallucination for its operator, hopefully
> entirely within the operator's mind.

In my thesis work [Young, R. A., Some observations on temporal coding of
color vision: psychophysical results, Vision Res. 17, 957-965 (1977)] I
electrically stimulated the eyes of four volunteer subjects and got
phosphene colors that were specific to the electrical pulse stimulus
patterns. In that paper I reference a number of other papers in the
psychology and psychophysical literature regarding the production of
patterns and colors via direct electrical stimulation of the eye.

------------------------------

Subject: CA simulator for suns: ftp up again
From:    cgl%raven@lanl.gov (Chris Langton )
Date:    Fri, 11 Nov 88 10:06:14 -0700 


The CA simulator for Suns is available again via ftp in the post-virus
world.  My earlier message gave an example of how to obtain it via
anonymous ftp from 128.165.96.120.  The tar file, "cellsim_1.0.tar", is in
the "pub" directory.

If you are unsure about anonymous ftp, send me a message and I will send
you a sample script.
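
[[ Editor's Note: For the impatient, an anonymous ftp session for this
file would look roughly like the following; exact prompts vary by client. ]]

    % ftp 128.165.96.120
    Name: anonymous
    Password: <your email address>
    ftp> binary
    ftp> cd pub
    ftp> get cellsim_1.0.tar
    ftp> quit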

Again, Europeans should obtain a copy from Tom Buckley at Leeds University:


     Dr. T.F. Buckley
     Department of Computer Studies
     Leeds University
     Leeds, LS2 9JT
     England, UK

   email:  buckley%uk.ac.leeds.dcs@UKACRL

Chris Langton

Center for Nonlinear Studies            Phone: 505-665-0059
MS B258                                 Email: cgl@LANL.GOV
Los Alamos National Laboratory
Los Alamos, New Mexico
87545

[[ Editor's Note: Several people asked me about the Cellular Automata
mailing list.  Send mail to CA-request@think.com to be added to the list.
It is relatively low volume nowadays, but I'm sure Neuron readers could
spice things up ;-) -PM ]]

------------------------------

Subject: AI family tree
From:    rik%cs@ucsd.edu (Rik Belew)
Date:    Thu, 3 Nov 88 09:35:44 PST

                             AI GENEALOGY
                     Building an AI family tree

Over the past several years we have been developing a collection of
bibliographic references to the literature of artificial intelligence and
cognitive science. We are also in the process of developing a system,
called BIBLIO, to make this information available to researchers over
Internet. My initial work was aimed at developing INDEXING methods which
would allow access to these citations by appropriate keywords. More
recently, we have explored the use of inter-document CITATIONS, made by the
author of one document to previous articles, and TAXONOMIC CLASSIFICATIONS,
developed by editors and librarians to describe the entire literature.

We would now like to augment this database of bibliographic information
with "cultural" information, specifically a family tree of the intellectual
lineage of the authors. I propose to operationalize this tree in terms of
each author's THESIS ADVISOR and COMMITTEE MEMBERS, and also the RESEARCH
INSTITUTIONS where they work.  It is our thesis that this factual
information, in conjunction with bibliographic information about the AI
literature, can be used to characterize important intellectual developments
within AI, and thereby provide evidence about general processes of
scientific discovery.  A nice practical consequence is that it will help
make information retrieval from bibliographic databases, using BIBLIO,
smarter.

I am sending a query out to several email lists to ask for your help in
this enterprise.  If you have a Ph.D. and consider yourself a researcher in
AI, I would like you to send me information about where you got your
degree, who your advisor and committee members were, and where you have
worked since then.  Also, please forward this query to any of your
colleagues who may not see this mailing list.  The specific questions are
contained in a brief questionnaire below, followed by an example.  I would
appreciate it if you could "snip" this (soft copy) questionnaire, fill it
in, and send it back to me intact, because this will make my parsing job
easier.

Also, if you know some of these facts about your advisor (committee
members), and their advisors, etc., I would appreciate it if you could send
me that information as well. One of my goals is to trace the genealogy of
today's researchers back as far as possible, to (for example) participants
in the Dartmouth conference of 1956, as well as connections to other
disciplines. If you do have any of this information, simply duplicate the
questionnaire and fill in a separate copy for each person.

Let me anticipate some concerns you may have. First, I apologize for the
Ph.D. bias. It is most certainly not meant to suggest that only Ph.D.'s are
involved in AI research. Rather, it is a simplification designed to make
the notion of "lineage" more precise. Also, be advised that this is very
much a not-for-profit operation. The results of this query will be combined
(into an "AI family tree") and made publically available as part of our
BIBLIO system.

If you have any questions, or suggestions, please let me know. Thank you
for your help.

Richard K. Belew
        Asst. Professor
        Computer Science & Engr. Dept. (C-014)
        Univ. Calif. - San Diego
        La Jolla, CA 92093
        619/534-2601
        619/534-5948  (messages)
        rik%cs@ucsd.edu

  --------------------------------------------------------------
                          AI Genealogy questionnaire
                        Please complete and return to:
                                rik%cs@ucsd.edu


NAME:   

Ph.D. year:     

Ph.D. thesis title:

Department:

University:
Univ. location: 

Thesis advisor: 
Advisor's department:   

Committee member:       
Member's department:

Committee member:       
Member's department:

Committee member:       
Member's department:

Committee member:       
Member's department:

Committee member:       
Member's department:

Committee member:       
Member's department:

Research institution:   
Inst. location:
Dates:

Research institution:   
Inst. location:
Dates:

Research institution:   
Inst. location:
Dates:


 --------------------------------------------------------------
                          AI Genealogy questionnaire
                                  EXAMPLE

NAME:                   Richard K. Belew        

Ph.D. year:             1986    

Ph.D. thesis title:     Adaptive information retrieval: machine learning 
                        in associative networks

Department:             Computer & Communication Sciences (CCS)

University:             University of Michigan

Univ. location:         Ann Arbor, Michigan

Thesis advisor:         Stephen Kaplan  
Advisor's department:   Psychology      

Thesis advisor:         Paul D. Scott
Advisor's department:   CCS     

Committee member:       Michael D. Gordon       
Member's department:    Mgmt. Info. Systems - Business School

Committee member:       John H. Holland 
Member's department:    CCS

Committee member:       Robert K. Lindsay       
Member's department:    Psychology

Research institution:   Univ. California - San Diego
                        Computer Science & Engr. Dept.
Inst. location:         La Jolla, CA
Dates:                  9/1/86 - present                        

------------------------------

End of Neuron Digest
*********************