[comp.ai.neural-nets] Neuron Digest V6 #40

neuron-request@HPLMS2.HPL.HP.COM ("Neuron-Digest Moderator Peter Marvit") (06/14/90)

Neuron Digest   Wednesday, 13 Jun 1990
                Volume 6 : Issue 40

Today's Topics:
             REQ: teaching Hopfield/graded response network
                  Hopfield Networks -- How Best to Run?
                Re: Hopfield Networks -- How Best to Run?
                Re: Hopfield Networks -- How Best to Run?
                        neural nets that "count"
                        applications in chemistry
                        About Hopfield neural-net
                      Re: About Hopfield neural-net
             Library Circulation of Neural Network Journals
                   A GA Tutorial and a GA Short Course
                    Quantum devices for Neural Nets?
                       Special Issue Announcement
                        Re: Neuron Digest V6 #21
                     Another Request for NN Software
                     Request for biological NN info.
                    Cognitive Science Society Meeting


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: REQ: teaching Hopfield/graded response network
From:    smagt@cs.vu.nl (Smagt v der PPP)
Organization: V.U. Informatica, Amsterdam, the Netherlands
Date:    12 Dec 89 12:14:22 +0000

I want to use the Hopfield network [1] with graded response for
classification purposes, but I have trouble finding an appropriate
learning rule.  Are there any articles on this?  Does anyone have
experience?

Patrick van der Smagt
Reference:
[1]
%A J. J. Hopfield
%D May 1984
%J Proceedings of the National Academy of Sciences
%P 3088-3092
%T Neurons with graded response have collective computational 
%T properties like those of two-state neurons
%V 81

------------------------------

Subject: Hopfield Networks -- How Best to Run?
From:    loren@tristan.llnl.gov (Loren Petrich)
Organization: Lawrence Livermore National Laboratory
Date:    23 Mar 90 05:00:33 +0000


        I'm rather new to Neural Networks, so I'd like some advice.
I'm constructing a simulation of a Hopfield NN, which consists of
minimizing the following energy function:

E = 1/2*sum(i,j)T(i,j)S(i)S(j)

where the nodes have S(i) equal to +/- 1 and the T's are the weights.

        I'm planning to use simulated annealing for doing this, and I
wonder what is the most suitable schedule -- what behavior of temperature
over time.

        If anyone knows of a superior algorithm, I would love to learn
about it.

        I wonder if this is an NP-complete problem.

        There is a straightforward algorithm for doing this, but its time
is proportional to 2^n for dimension n. I think that simulated annealing
may be able to do a lot better; perhaps n^2.
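
For concreteness, a minimal sketch of single-spin-flip Metropolis on
that energy with a geometric cooling schedule (Python/NumPy; the initial
temperature, cooling factor, and sweep count are arbitrary guesses --
choosing them well is exactly the schedule question above):

import numpy as np

def anneal(T, n_sweeps=200, temp0=2.0, cooling=0.95, rng=None):
    # Minimize E = 1/2 * sum_ij T[i,j]*S[i]*S[j] over S in {-1,+1}^n
    # by Metropolis single-spin flips; T is assumed symmetric.
    rng = np.random.default_rng() if rng is None else rng
    n = T.shape[0]
    S = rng.choice([-1.0, 1.0], size=n)      # random initial state
    temp = temp0
    for _ in range(n_sweeps):
        for k in rng.permutation(n):
            # energy change from flipping spin k
            dE = -2.0 * S[k] * (T[k] @ S - T[k, k] * S[k])
            if dE <= 0 or rng.random() < np.exp(-dE / temp):
                S[k] = -S[k]                 # accept the flip
        temp *= cooling                      # geometric cooling schedule
    return S, 0.5 * S @ T @ S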

                                                        ^    
Loren Petrich, the Master Blaster                    \  ^  /
        loren@sunlight.llnl.gov                       \ ^ /
One may need to route through any of:                  \^/
                                                <<<<<<<<+>>>>>>>>
        lll-lcc.llnl.gov                               /v\
        lll-crg.llnl.gov                              / v \
        star.stanford.edu                            /  v  \
                                                        v    
"What do you MEAN it's not in the computer?!?" -- Madonna

------------------------------

Subject: Re: Hopfield Networks -- How Best to Run?
From:    russell@minster.york.ac.uk
Organization: Department of Computer Science, University of York, England
Date:    27 Mar 90 18:36:52 +0000

>       I'm rather new to Neural Networks, so I'd like some advice.

aren't we all :)

>I'm constructing a simulation of a Hopfield NN, which consists of

(with simulated annealing, it's really the Boltzmann machine)

>       I'm planning to use simulated annealing for doing this, and I
>wonder what is the most suitable schedule -- what behavior of
>temperature over time.


Hmmm, this is tricky.  The behaviour of the system is difficult to
characterise accurately, and so producing an ``optimal'' annealing
schedule is difficult.  As you probably know, the point of simulated
annealing is to allow escape from local minima by permitting jumps to
higher energy levels.  Now, there comes a stage in the annealing
schedule where jumps to lower energy states occur much more frequently
than jumps to higher energy states, and this ``transition'' period,
which resembles a phase transition in solids/liquids, should be held
for as long as possible.  It is easiest to see on a graph, so here
goes...

|+
|  +
|    +
|     +
|      +
|      +
|       +
|       +
|        +
|         +
|           +
|              +
|                  +
_________________________
     a   b

where the vertical axis is the average energy of the system, <E>, and the
horizontal axis is the temperature.  The best nets will have spent most
time with the temperature in the range a-b on the graph, where the energy
is decreasing fastest.  Now, I suppose that you could do a gradient
analysis of the global energy function to determine your annealing
schedule, but that `feels' messy, since it involves a global summation
and removes the principle of local computation that the Boltzmann net
embodies.
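
One cheap way to approximate ``hold the transition region as long as
possible'' without a full gradient analysis is to make the cooling rate
itself adaptive: watch the drop in <E> from one temperature step to the
next (which each sweep computes anyway) and cool slowly while the drop
is large.  A minimal Python sketch; measure_mean_energy() is a
placeholder for one Metropolis sweep at fixed temperature that returns
the average energy it saw, and the two cooling factors and the threshold
are arbitrary assumptions:

def adaptive_schedule(measure_mean_energy, temp0=2.0, temp_min=0.01,
                      fast=0.90, slow=0.99, sensitivity=0.05):
    # Cool geometrically, but switch to the slower factor whenever the
    # relative drop in <E> is large -- i.e. in the a-b region above.
    temps, temp = [], temp0
    E_prev = measure_mean_energy(temp)
    while temp > temp_min:
        temps.append(temp)
        E_now = measure_mean_energy(temp)
        drop = (E_prev - E_now) / max(abs(E_prev), 1e-12)
        temp *= slow if drop > sensitivity else fast
        E_prev = E_now
    return temps

This stays local to quantities each sweep already has, though it is only
a heuristic, not an ``optimal'' schedule.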


Has anyone *tried* a global calculation?

Russell.

____________________________________________________________
 Russell Beale, Advanced Computer Architecture Group,
 Dept. of Computer Science, University of York, Heslington,
 YORK. YO1 5DD.  UK.               Tel: [044] (0904) 432762

 russell@uk.ac.york.minster                     JANET
 russell%minster.york.ac.uk@nsfnet-relay.ac.uk  ARPA 
 ..!mcsun!ukc!minster!russell                   UUCP
 russell@minster.york.ac.uk                     eab mail
____________________________________________________________

------------------------------

Subject: Re: Hopfield Networks -- How Best to Run?
From:    usenet@nlm-mcs.arpa (usenet news poster)
Organization: National Library of Medicine, Bethesda, Md.
Date:    29 Mar 90 06:09:03 +0000


>(with simulated annealing, it's really the Boltzmann machine)

I agree.

> Now, there comes a stage in the annealing schedule
>where jumps to lower energy states occur much more frequently than
>jumps to higher energy states, and this ``transition'' period, which
>resembles a phase transition in solids/liquids, should be held for as
>long as possible...

But don't neglect a period of equilibration at the outset, where up
and down transitions occur with near-equal probability.  If you start
with too low an initial temperature (or a bad starting point), most of
the jumps will be downward, and you are really just using Monte Carlo as
an inefficient minimization routine.

A second point: since your "energy" function is easily differentiable,
you might consider computing "forces" and running dynamics rather than
doing Monte Carlo.  The efficiency of Monte Carlo depends critically on
a good method for choosing new jumps.  Dynamics is, in a sense,
mindless: you just follow the laws of motion.  Annealing is achieved by
gradually extracting kinetic "energy" from the system.
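
For the discrete +/-1 net the energy isn't differentiable as written,
but one way to read this suggestion (my interpretation, not a
prescription) is to relax the spins to continuous values S = tanh(u),
follow the force -dE/du, and anneal by slowly damping the velocities.
A rough Python/NumPy sketch, with the step size, damping factor, and
step count chosen arbitrarily:

import numpy as np

def annealed_dynamics(T, n_steps=5000, dt=0.01, damping=0.999, rng=None):
    # Damped "molecular dynamics" on E = 1/2 * sum_ij T[i,j]*S[i]*S[j]
    # with S = tanh(u); T is assumed symmetric.
    rng = np.random.default_rng() if rng is None else rng
    n = T.shape[0]
    u = rng.normal(scale=0.1, size=n)   # continuous "pre-activations"
    v = rng.normal(scale=1.0, size=n)   # initial velocities (kinetic "energy")
    for _ in range(n_steps):
        S = np.tanh(u)
        grad = (T @ S) * (1.0 - S**2)   # dE/du by the chain rule
        v = damping * (v - dt * grad)   # extract a little kinetic energy each step
        u = u + dt * v
    return np.sign(np.tanh(u))          # round back to +/-1 states

At the end the continuous state is rounded back to +/-1; whether this
beats Metropolis on a given problem is an empirical question.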

>The best nets will have spent most time with the temperature 
>in the range a-b on the graph (deleted for brevity), where the
>energy is decreasing quickest.  

But, if you don't have some detailed knowledge about the global surface,
you could end up focusing on a local transition.

>Has anyone *tried* a global calculation?  

This is a very good suggestion. If you can work on some scaled down nets,
the reduced dimensionality of the problem will reduce the space you need
to search enormously and allow global searches that might not otherwise
be possible.  The only way of knowing you have a global solution is that
you always reach it from different starting conditions.  What you can
gain from reduced models is a feeling for the characteristics of the
"energy" surface you are dealing with, and perhaps some guidance in
attacking the full problem.

David States

Math is always the same, the difference between fields is who you think
invented it.

------------------------------

Subject: neural nets that "count"
From:    harwood@cvl.umd.edu (David Harwood)
Organization: Center for Automation Research, Univ. of Md.
Date:    23 Apr 90 18:53:26 +0000


        I recall reading something that surprised me - something reported
about 15 years ago (perhaps in Science, perhaps by neuroscientist Richard
Thompson, although I'm not sure) - that a large fraction of cortical
neurons in mammals (5% or some large fraction) - apparently "count"
specific stimulus-class events, up to a small number (6-8 or so, as I
recall). That is, a particular counter cell fires characteristically upon
an Nth event of its stimulus-class, independent of inter-stimulus
time-delays (up to a limit).  The characteristic behavior was supposed to
be reliable. The report was on untrained response, but we can't assume
that previous "learning" was not sometimes involved.
        There are important differences in pattern-recognition power
between so-called "counter" automata and counter-free automata; counter
automata allow unbounded counting, which of course we don't see in
neurons. But it occurs to me that this sort of apparently very pervasive
neural phenomenon should have profound implications for neural learning
and control, especially learning of asynchronous, partially-ordered
programs of behavior or patterns of stimulus (of small, but
multi-class/-modal orders). This would seem to occur at even the lowest
(unconscious) levels of neural processing.
        I don't know if there's been further neurophysiological
investigation or explanation of this. It was left as a kind of puzzle
in the article, as I recall. But if things are as they were reported then,
it would seem to be very important for theories of neural nets, real and
artificial - apparently something basic, making for great complexity and
power of recognition and control, which has been ignored or overlooked.

------------------------------

Subject: applications in chemistry
From:    russo@caip.rutgers.edu (Mark F. Russo)
Organization: Rutgers Univ., New Brunswick, N.J.
Date:    24 Apr 90 13:24:55 +0000


I am interested in applications of Neural Networks in 
        protein structure prediction, 
        chemical reaction product prediction, 
        drug interaction effects ... and the like.

I'm looking for references to applications in these and closely
related areas.  I am aware of the following references:

[1] Borman, S., "Neural Network Applications in Chemistry Begin to
Appear", Chemical & Engineering News (C&EN), April 24, 1989.

[2] Holley, L. H. and Karplus, M., "Protein secondary structure
prediction with a neural network", Proc. Natl. Acad. Sci. USA, Vol. 86,
Jan. 1989.

[3] Qian, N. and Sejnowski, T. J., "Predicting the secondary structure
of globular proteins using neural network models", J. Mol. Biol., Vol.
202, No. 4, 1988.

Any other references or clues would be greatly appreciated!

Mark F. Russo (russo@caip.rutgers.edu)

_____________________________________________________________________________
uucp:   {from.anywhere}!rutgers!caip.rutgers.edu!russo
arpa:   russo@caip.rutgers.edu
or
uucp:   {from.anywhere}!rutgers!petsd!cccspd!ccscg!markr

------------------------------

Subject: About Hopfield neural-net
From:    IO92040@MAINE.BITNET (Dimi)
Organization: University of Maine System
Date:    01 May 90 01:12:09 +0000

Hello neural-nets fellows,

I am somewhat puzzled about one thing concerning the Hopfield learning
scheme.  As you know, each memory state (pattern) is supposed to be
stored in what is called the memory matrix.  If you want to recall the
correct memory state, all you do is take the inner product of the memory
matrix and the input vector, which may be distorted (a distorted memory
state).  Well, for one memory state you need at least an 8-dimensional
memory matrix; for 2 memory states you need a 16-dimensional matrix,
according to the rule M=.15N.  My question is: if you want to create a
memory matrix that stores more than one memory state, how do you really
do it?  What does your input vector look like?  Does it contain the
memory states you want to recall, sequentially?  Is your memory matrix a
combination of the memory matrices that represent each memory state you
want it to contain, or what?  I would appreciate it if somebody out
there who understands more about Hopfield nets could answer my question
(if you understand what I am asking, I hope).  Thanx a lot, Dimi

------------------------------

Subject: Re: About Hopfield neural-net
From:    smagt@samantha.fwi.uva.nl (Patrick van der Smagt)
Organization: FWI, University of Amsterdam
Date:    04 May 90 07:05:35 +0000


There is a lot to be answered.  First, the M=.15N rule looks better than
it is.  In a +1/-1 Hopfield network, storing a pattern means storing its
inverse, too (by approximation, this is also true for the 0/1 network).
So the number of actually different patterns you can effectively store is
half of 0.15N.

Note that Hopfield finds the 15% for a large number of neurons, so you
shouldn't apply it to store just one or two patterns.  Furthermore, he
measures the network's success by the quality of recall.  When the
#patterns:#neurons ratio exceeds 15%, recall errors become very severe.
My experiments confirmed this.  However, you can actually store as many
as N patterns in an N-neuron network.  How this is done is explained in
the reference below; a sketch of the basic outer-product storage that
Dimi asks about appears at the end of this message.

%A A. D. Bruce
%A A. Canning
%A B. Forrest
%A E. Gardner
%A D. J.  Wallace
%B AIP Conference Proceedings 151, Neural Networks for Computing,
%B Snowbird Utah, AIP 
%D 1986
%E J. S. Denker
%T Learning and memory properties in fully connected networks
%P 65--70


I'm presenting a paper on this at the Third International Conference on
Industrial and Engineering Applications of Artificial Intelligence and
Expert Systems (July 15-18, Charleston, SC).
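
As for the basic construction Dimi asked about: with the standard
outer-product (Hebb) rule you simply sum one outer product per pattern
into a single matrix, zero the diagonal, and recall by repeatedly
thresholding, starting from the (possibly distorted) probe.  A minimal
Python/NumPy sketch of that standard rule (not the higher-capacity
scheme of the reference above):

import numpy as np

def store(patterns):
    # patterns: a list of +/-1 vectors, all of length n
    n = len(patterns[0])
    T = np.zeros((n, n))
    for x in patterns:
        T += np.outer(x, x)          # one outer product per stored pattern
    np.fill_diagonal(T, 0)           # no self-connections
    return T

def recall(T, probe, n_iter=20):
    S = np.array(probe, dtype=float)
    for _ in range(n_iter):
        S = np.where(T @ S >= 0, 1.0, -1.0)   # synchronous update, for brevity
    return S

So the input vector is just one (possibly corrupted) pattern at a time,
not a concatenation of the stored states; each stored pattern (and its
inverse, as noted above) is a separate fixed point of recall().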


Patrick van der Smagt                                               /\/\
                                                                    \  /
Organization: Faculty of Mathematics & Computer Science             /  \
              University of Amsterdam, Kruislaan 409,            _  \/\/  _
              NL-1098 SJ  Amsterdam, The Netherlands            | |      | |
Phone:        +31 20  525 7466                                  | | /\/\ | |
Telex:        10262 hef nl                                      | | \  / | |
Fax:          +31 20  592 5155                                  | | /  \ | |
email:        smagt@fwi.uva.nl                                  | | \/\/ | |
                                                                | \______/ |
                                                                 \________/

                                                                    /\/\
                                                                    \  /
                                                                    /  \
                                                                    \/\/

------------------------------

Subject: Library Circulation of Neural Network Journals
From:    will@ida.org (Craig Will)
Date:    Mon, 04 Jun 90 13:32:44 -0400



               Circulation of Neural Network
                 Journals in OCLC Libraries

Journal                         Year Started    Circulation

IEEE Trans. on Neural Nets          1990        130
Neural Networks                     1988        115
Neural Computation                  1989         29
J. of Neural Network Comput         1989         21
Connection Science                  1989          9
Network                             1990          5
Inter. J. of Neural Networks        1989          4

Based on a search June 4, 1990 using the OCLC database of about 10,000
member libraries in the United States.  Reliable figures for Neural
Network Review aren't available, apparently because of the change of
publisher.  No records were found for the International Journal of
Neurocomputing.


Craig A. Will
will@ida.org
Institute for Defense Analyses

------------------------------

Subject: A GA Tutorial and a GA Short Course
From:    "Dave Goldberg (dgoldber@ua1vm.ua.edu)" <DGOLDBER@UA1VM.ua.edu>
Date:    Thu, 07 Jun 90 16:02:23 -0500


A tutorial entitled "Genetic Algorithms and Classifier Systems" will be
presented on Wednesday afternoon, August 1, at the AAAI conference in
Boston, MA by David E. Goldberg (Alabama) and John R. Koza (Stanford).
The course will survey GA mechanics, power, applications, and advances
together with similar information regarding classifier systems and other
genetics-based machine learning systems.  For further information
regarding this tutorial write to AAAI-90, Burgess Drive, Menlo Park, CA
94025, (415)328-3123.

A five-day short course entitled "Genetic Algorithms in Search,
Optimization, and Machine Learning" will be presented at Stanford
University's Western Institute in Computer Science on August 6-10 by
David E. Goldberg (Alabama) and John R. Koza (Stanford).  The course
presents in-depth coverage of GA mechanics, theory and application in
search, optimization, and machine learning.  Students will be encouraged
to solve their own problems in hands-on computer workshops monitored by
the course instructors.  For further information regarding this course
contact Joleen Barnhill, Western Institute in Computer Science, PO Box
1238, Magalia, CA 95954, (916)873-0576.

------------------------------

Subject: Quantum devices for Neural Nets?
From:    FOO@EVAX0.ENG.FSU.EDU
Date:    Fri, 08 Jun 90 09:51:43 -0400

I'd appreciate it if anybody could give me pointers to info on quantum-well
devices for implementing neural nets (i.e., atomic-level computing).  My
e-mail address is:

        foo@evax0.eng.fsu.edu

I'll post a summary of all responses I receive.  Thank you.

 --simon foo
dept. of electrical engineering
florida state university
tallahassee, fl 32306

------------------------------

Subject: Special Issue Announcement
From:    Alex.Waibel@SPEECH2.CS.CMU.EDU
Date:    Sun, 10 Jun 90 20:09:38 -0400



                               ANNOUNCEMENT

    MACHINE LEARNING will be publishing a special issue devoted to
    connectionist models under the title:

    "Structured Connectionist Learning Systems:  Methods and Real World
    Applications"

    MACHINE LEARNING publishes articles on all aspects of Machine
    Learning, and on occasion runs special issues on particular
    subtopics of special interest.  This issue of the journal will
    emphasize connectionist learning systems that aim at real-world
    applications.  Papers are solicited on this topic.

    Five copies of the manuscript should be sent by August 3, 1990 to:

    Dr. Alex Waibel
    School of Computer Science
    Carnegie Mellon University
    Pittsburgh, PA 15213
    Telephone: (412) 268-7676

    Papers will be subject to the standard review process.




------------------------------

Subject: Re: Neuron Digest V6 #21
From:    <OOMIDVAR%UDCVAX.BITNET@CORNELLC.cit.cornell.edu>
Date:    Mon, 11 Jun 90 18:45:00 -0400

  1. I would like to know if there is any actual working industrial
inspection system based on neural network technology.  If so, where, and
who is the technical person (brain) in charge of it?

  2. Is anyone involved in parallel implementation of well-known learning
algorithms such as Backprop, LVQ, and the Boltzmann machine?  If so,
please let me know.  I could also use technical papers in these areas.
Thank you very much in advance; I will post the results of this.
                                        OOMIDVAR@UDC.BITNET

------------------------------

Subject: Another Request for NN Software
From:    SADLER@ADELPHI-HDLSIG1.ARMY.MIL
Organization: Harry Diamond Labs, Adelphi, MD.
Date:    12 Jun 90 13:12:17 -0500

I'm looking for NN simulation software; HOWEVER, we are not a UNIX house.
Platforms of choice are:

1. Vax running VMS
2. IBM/PC
3. Macintosh

I currently do most signal processing simulation work in MATLAB on the
VAX.  (MATLAB is just great by the way.)  Source code would also be of
interest.

thanks!!
brian sadler


------------------------------

Subject: Request for biological NN info.
From:    Garth S. Barbour <garth@wam.umd.edu>
Date:    Wed, 13 Jun 90 10:36:40 -0400

Hi folks.

I'm writing a paper on the relationship between natural and artificial
neural networks as an independent study project for the summer.  In
particular, I will be looking at historical discoveries in the study of
biological neural networks and how they did or did not influence the
design of artificial neural networks.  If anyone knows of any references
which may be of use, I would greatly appreciate hearing about them.  I
am especially looking for references on biological neural networks.  I
will publish a summary of the references received.  My e-mail address is
garth@cscwam.umd.edu.

Thanks,
Garth Barbour


------------------------------

Subject: Cognitive Science Society Meeting
From:    Stevan Harnad <harnad@clarity.Princeton.EDU>
Date:    Tue, 12 Jun 90 00:11:15 -0400

The XII Annual Conference of the Cognitive Science Society will take
place at MIT, July 25-28, 1990.  (Immediately preceding the meeting of
the AAAI, also to take place in the Boston area).

Conference Chair: M. Piattelli-Palmarini (MIT)
Scientific Advisors: Beth Adelson (Tufts), Stephen M. Kosslyn (Harvard),
Steven Pinker (MIT), Kenneth Wexler (MIT)

Registration fees (before July 1 / after July 1):
Members       $150 / $200
Non-members   $185 / $225
Students       $90 / $110

Contact the MIT Conference Services, Room 7-111,
Cambridge, MA 02139.  Tel. (617) 253-1700
_______________________________________________

Outline of the program

Tuesday July 24, Wednesday July 25
Tutorials: "Cognitive Aspects of Linguistic Theory", 
"Logic and Computability",
Cognitive Neuroscience"  
(Require separate registrations)

Wednesday, July 25
4.00 - 7.30 pm   Registration at Kresge Auditorium
7.30 - 9.00  First plenary session: Kresge Main Auditorium

Welcoming address by Samuel Jay Keyser, Assistant Provost of MIT,
Co-Director of the MIT Center for Cognitive Science; Welcoming address by
David E. Rumelhart (Stanford), Chairman of the Board of the Cognitive
Science Society

Keynote speaker: Noam Chomsky (MIT) "Language and Cognition"

__________

Thursday, July 26      9.00 am - 11.15 am  

Symposia:

Execution-Time Response: Applying Plans in a Dynamic World
        Kristian J. Hammond (University of Chicago),  Chair
        Phil Agre (University of Chicago)
        Richard Alterman (Brandeis University)
        Reid Simmons (Carnegie Mellon University)
        R. James Firby (NASA Jet Propulsion Lab)

Cognitive Aspects of Linguistic Theory
        Howard Lasnik (University of Connecticut),  Chair
        David Pesetsky (Massachusetts Institute of Technology),  Chair
        James T. Higginbotham (Massachusetts Institute of Technology)
        John McCarthy (University of Massachusetts)

Perception, Computation and Categorization
        Whitman Richards (Massachusetts Institute of Technology),  Chair
        Aaron Bobick (SRI International)
        Ken Nakayama (Harvard University)
        Allan Jepson (University of Toronto)

Paper Presentations:

Rule-Based Reasoning, Explanation and Problem-Solving

Reasoning II: Planning

11.30 - 12.45    Plenary session (Kresge Main Auditorium)
Keynote Speaker:  Morris Halle (MIT)  "Words and their Parts"
Chair: Kenneth Wexler (MIT)
__________________

Thursday, July 26   Afternoon     2.00 pm - 4.15 pm  

Symposia:

Principle-Based Parsing
        Robert C. Berwick (Massachusetts Institute of Technology),  Chair
        Steven P. Abney (Bell Communications Research)
        Bonnie J. Dorr (Massachusetts Institute of Technology)
        Sandiway Fong (Massachusetts Institute of Technology)
        Mark Johnson (Brown University)
        Edward P. Stabler, Jr. (University of California,  Los Angeles)

Recent Results in Formal Learning Theory
        Kevin T. Kelly (Carnegie Mellon University)
        Clark Glymour (Carnegie Mellon University),  Chair

Self-Organizing Cognitive and Neural Systems
        Stephen Grossberg (Boston University),  Chair
        Ennio Mingolla (Boston University)
        Michael Rudd (Boston University)
        Daniel Bullock (Boston University)
        Gail A. Carpenter (Boston University)

Action Systems:  Planning and Execution
        Emilio Bizzi (Massachusetts Institute of Technology),  Chair
        Michael I. Jordan (Massachusetts Institute of Technology)

Paper presentations

Reasoning :  Analogy

Learning and Memory : Acquisition

4.30 - 5.45   Plenary Session  (Kresge Main Auditorium)
Keynote Speaker: Amos Tversky (Stanford) 
"Decision under conflict"
Chair: Daniel N. Osherson (MIT)

Banquet
______________

Friday July 27   9.00 - 11.45 am

Symposia:

What's New in Language Acquisition ?
        Steven Pinker  and Kenneth Wexler (MIT),  Chair
        Stephen Crain (University of Connecticut)
        Myrna Gopnik (McGill University)
        Alan Prince (Brandeis University)
        Michelle Hollander, John Kim, Gary Marcus, 
        Sandeep Prasada, Michael Ullman (MIT)

Attracting Attention
        Anne Treisman (University of California, Berkeley),  Chair
        Patrick Cavanagh (Harvard University)
        Ken Nakayama (Harvard University)
        Jeremy M. Wolfe (Massachusetts Institute of Technology)
        Steven Yantis (Johns Hopkins University)

A New Look at Decision Making 
        Susan Chipman (Office of Naval Research) and Judith Orasanu (Army
                Research Institute and Princeton University),  Chair
        Gary Klein (Klein Associates)
        John A. Swets (Bolt Beranek & Newman Laboratories)
        Paul Thagard (Princeton University)
        Marvin S. Cohen (Decision Science Consortium, Inc.)

Designing an Integrated Architecture: The Prodigy View
        Jaime G. Carbonell (Carnegie Mellon University),  Chair
        Yolanda Gil (Carnegie Mellon University)
        Robert Joseph (Carnegie Mellon University)
        Craig A. Knoblock (Carnegie Mellon University)
        Steve Minton (NASA Ames Research Center)
        Manuela M. Veloso (Carnegie Mellon University)

Paper presentations:

Reasoning : Categories and Concepts

Language :  Pragmatics and Communication

11.30 - 12.45  Plenary Session (Kresge Main Auditorium)
Keynote speaker: Margaret Livingstone (Harvard)
"Parallel Processing of Form, Color and Depth"   
Chair: Richard M. Held (MIT)
_________________________

Friday, July 27  afternoon    2.00 - 4.15 pm

Symposia:

What is Cognitive Neuroscience?
        David Caplan (Harvard Medical School) and 
                Stephen M. Kosslyn  (Harvard University), Chair
        Michael S. Gazzaniga (Dartmouth Medical School)
        Michael I. Posner (University of Oregon)
        Larry Squire (University of California,  San Diego)

Computational Models of Category Learning
        Pat Langley (NASA Ames Research Center) and 
                Michael Pazzani (University of  California,  Irvine), Chair
        Dorrit Billman (Georgia Institute of Technology)
        Douglas Fisher (Vanderbilt University)
        Mark Gluck (Stanford University)

The Study of Expertise: Prospects and Limits
        Anders Ericsson (University of Colorado, Boulder),  Chair
        Neil Charness (University of Waterloo)
        Vimla L. Patel and Guy Groen (McGill University)  
        Yuichiro Anzai (Keio University)
        Fran Allard and Jan Starkes (University of Waterloo)
        Keith Holyoak (University of California,  Los Angeles), Discussant

Paper presentations:

Language  (Panel 1) :  Phonology  
Language (Panel 2) :  Syntax   

4.30 - 5.45  Keynote speaker: Anne Treisman (UC Berkeley) 
"Features and Objects" 
Chair: Stephen M. Kosslyn (Harvard)

  Poster Sessions
I.   Connectionist Models
II.  Machine Simulations and Algorithms
III.  Knowledge and Problem-Solving
__________________________________

Saturday, July 28   9.00 am - 11.15 am

Symposia:

SOAR as a Unified Theory of Cognition:  Spring 1990
        Allen Newell (Carnegie Mellon University),  Chair
        Richard L. Lewis (Carnegie Mellon University)
        Scott B. Huffman (University of Michigan)
        Bonnie E. John (Carnegie Mellon University)
        John E. Laird (University of Michigan)
        Jill Fain Lehman (Carnegie Mellon University)
        Paul S. Rosenbloom (University of Southern California)
        Tony Simon (Carnegie Mellon University)
        Shirley G. Tessler (Carnegie Mellon University)

Neonate Cognition
        Richard Held (Massachusetts Institute of Technology),  Chair
        Jane Gwiazda (Massachusetts Institute of Technology)
        Renee Baillargeon (University of Illinois)
        Adele Diamond (University of Pennsylvania)
        Jacques Mehler (CNRS,  Paris,  France) Discussant

Conceptual Coherence in Text and Discourse
        Arthur C. Graesser (Memphis State University),  Chair
        Richard Alterman (Brandeis University)
        Kathleen Dahlgren (Intelligent Text Processing,  Inc.)
        Bruce K. Britton (University of Georgia)
        Paul van den Broek (University of Minnesota)
        Charles R. Fletcher (University of Minnesota)
        Roger J. Kreuz (Memphis State University)
        Richard M. Roberts (Memphis State University)
        Tom Trabasso and Nancy Stein

Paper presentations:

Causality, Induction and Decision-Making

Vision  (Panel 1) :  Objects and Features    
Vision (Panel 2) :  Imagery            

Language : Lexical Semantics

Case-Based Reasoning


11.30 - 12.45  Keynote Speaker
Ellen Markman (Stanford)
 "Constraints Children Place on Possible Word Meanings"
Chair: Susan Carey (MIT)

Lunch presentation: "Cognitive Science in Europe: A Panorama"
Chair: Willem Levelt (Max Planck, Nijmegen).
Informal presentations by: Jacques Mehler (CNRS, Paris), Paolo Viviani
(University of Geneva), Paolo Legrenzi (University of Trieste), Karl
Wender (University of Trier).
_____________________________
  
Saturday 28  Afternoon   2.00 - 3.00 pm

Paper presentations:
   
Vision :  Attention

Language Processing

Educational Methods

Learning and Memory

Agents,  Goals and Constraints

3.15 - 4.30 Keynote Speaker: Roger Schank (Northwestern)
"The Story is the Message: Memory and Instruction"
Chair: Beth Adelson (Tufts)
4.30 - 5.45 Keynote Speaker: Stephen Jay Gould (Harvard) 
"Evolution and Cognition"
Chair: Steven Pinker (MIT)

------------------------------

End of Neuron Digest [Volume 6 Issue 40]
****************************************

dsikka@td2cad.intel.com (Digvijay Sikka) (06/14/90)

Hi Folks:
	Several months ago there was a posting on the net regarding a
paper that attempted to study relationships between Bayesian
(Probabilistic) Networks and Neural Networks. Since I am interested in
such a study, I would appreciate it if someone out there could either
repost it or send it to me via e-mail. In case it is not available, I
would appreciate any pointers in this regard.

Thanx,

Digvijay.

mgv@usceast.UUCP (Marco Valtorta) (06/15/90)

In article <3142@td2cad.intel.com> dsikka@aries.intel.com (Digvijay Sikka) writes:
>	Several months ago there was a posting on the net regarding a
>paper that attempted to study relationships between Bayesian
>(Probabilistic) Networks and Neural Networks. Since I am interested in
>such a study, I would appreciate it if someone out there could either
>repost it or send it to me via e-mail. In case it is not available, I
>would appreciate any pointers in this regard.

I have some comments on that in a paper in the Proceedings of the Seventh
International Conference on Machine Learning.  The Proceedings are to
be published by Morgan Kaufmann.  The conference will take place next week.
If you are going there, look out for me!  Apart from presenting the paper,
I am organizing a discussion group on knowledge base refinement.

I have a couple of other papers that address the topic you are interested
in.  One is going to be published by the *International Journal of
Approximate Reasoning*, probably in January 1991.  The other is
somewhere (:-)) in the submission/revision process.

From the algorithmic standpoint, there are many similarities between
knowledge base refinement and neural network training.  My dissertation
was on the topic of knowledge base refinement.  At the time I started
that work, neural networks were a dead field.  Now, the balance of interest
has shifted in the opposite direction!
>
>Thanx,
>
>Digvijay.

You are welcome!


Marco Valtorta			usenet: ...!ncrcae!usceast!mgv
Department of Computer Science	internet: mgv@cs.scarolina.edu
University of South Carolina	tel.: (1)(803)777-4641
Columbia, SC 29208		tlx: 805038 USC
U.S.A.				fax: (1)(803)777-3065
usenet from Europe: ...!mcvax!uunet!ncrlnk!ncrcae!usceast!mgv