[comp.ai.neural-nets] Neuron Digest V6 #55

neuron-request@HPLMS2.HPL.HP.COM ("Neuron-Digest Moderator Peter Marvit") (09/26/90)

Neuron Digest   Tuesday, 25 Sep 1990
                Volume 6 : Issue 55

Today's Topics:
                       Notice of Technical Reports
                 PDP backprop on the Connection Machine
                   Mactivation Word docs coming to ftp
           Shallice/Neuropsychology: BBS Multiple Book Review
         Re: Shallice/Neuropsychology: BBS Multiple Book review
         Re: Shallice/Neuropsychology: BBS Multiple Book review
                             Reply Naturalo
                   UCSD job opening: Cognitive Science
                            Edelmannian nets?
                      putting Edelman into practice
                        Marr's VISION out of date
                       MIND Workshop Announcement


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Notice of Technical Reports
From:    "Laveen N. KANAL" <kanal@cs.UMD.EDU>
Date:    Thu, 13 Sep 90 20:34:25 -0400


What follows is the abstract of a TR printed this summer which has been
submitted for publication. Also included in this message are the titles of
two earlier reports by the same authors which were put out in Dec. 1988
but which may be of interest now in view of some titles I have seen on the
net.

    UMIACS-TR-90-99                                         July 1990
    CS-TR-2508                    

                    ASYMMETRIC MEAN-FIELD NEURAL
                    NETWORKS FOR MULTIPROCESSOR
                              SCHEDULING
                         
                           Benjamin J. Hellstrom
                            Laveen N. Kanal

                           


                                Abstract

        Hopfield and Tank's proposed technique for embedding
        optimization problems, such as the travelling salesman
        problem, in mean-field thermodynamic networks suffers from
        several restrictions. In particular, each discrete
        optimization problem must be reduced to the minimization of
        a 0-1 Hamiltonian.
        Hopfield and Tank's technique yields fully-connected networks
        of functionally homogeneous visible units with low-order
        symmetric connections. We present a program-constructive
        approach to embedding difficult problems in neural networks.
        Our derivation method overcomes the Hamiltonian reducibility
        requirement and promotes networks with functionally
        heterogeneous hidden units and asymmetric connections of
        both low and high order. The underlying mechanism involves the
        decomposition of arbitrary problem energy gradients into
        piecewise linear functions which can be modeled as the outputs
        of sets of hidden units. To illustrate our method, we derive
        thermodynamic mean-field neural networks for multiprocessor
        scheduling. The performance of these networks is analyzed by
        observing phase transitions and several improvements are
        suggested. Tuned networks of up to 2400 units are shown to
        yield very good, and often exact, solutions.
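
For orientation, the basic Hopfield-Tank style mean-field update which
the report generalizes can be sketched in C roughly as follows. This is
only an illustrative caricature (the sizes, weights w, and biases b are
placeholders for whatever Hamiltonian is being minimized), not the
program-constructive method of the report itself:

    #include <math.h>

    #define N 100   /* number of units (illustrative size) */

    /* dE/dv[i] for a 0-1 Hamiltonian E = -1/2 sum w*v*v - sum b*v */
    double energy_gradient(double v[N], double w[N][N], double b[N], int i)
    {
        double g = 0.0;
        for (int j = 0; j < N; j++)
            g -= w[i][j] * v[j];
        return g - b[i];
    }

    /* One mean-field sweep at temperature T: each 0-1 "soft" unit
       relaxes toward a sigmoid of its negated energy gradient. */
    void mean_field_sweep(double v[N], double w[N][N], double b[N], double T)
    {
        for (int i = 0; i < N; i++)
            v[i] = 1.0 / (1.0 + exp(energy_gradient(v, w, b, i) / T));
    }

Annealing T downward while sweeping drives the units toward a low-energy
0-1 configuration.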




The earlier reports are

CS-TR-2149, Dec. 1988, by Hellstrom and Kanal, titled "Linear Programming
Approaches to Learning in Thermodynamic Models of Neural Networks"

CS-TR-2150, Dec. 1988, by Hellstrom and Kanal, titled "Encoding via Meta-Stable
Activation Levels: A Case Study of the 3-1-3 Encoder".

Reports are available free while the current supply lasts, after which they
will be available (for a small charge) from the publications group at the
Computer Science Center of the Univ. of Maryland, College Park, MD 20742.

The address for the current supply is: Prof. L.N. Kanal, Dept. of Computer
Science, A.V. Williams Bldg., Univ. of Maryland, College Park, MD 20742.

L.K.

------------------------------

Subject: PDP backprop on the Connection Machine
From:    Sebastian Thrun <gmdzi!st@relay.EU.net>
Date:    Sat, 15 Sep 90 13:21:06 -0200

The following might be of interest to everybody who works with the PDP
backpropagation simulator and has access to a Connection Machine:


        ********************************************************
        **                                                    **
        **   PDP-Backpropagation on the Connection Machine    **
        **                                                    **
        ********************************************************


To test our new Connection Machine CM/2, I extended the PDP
backpropagation simulator by Rumelhart, McClelland et al. with a parallel
training procedure for the Connection Machine (Interface C/Paris, Version
5).

Following some ideas by R.M. Faber and A. Singer, I simply made use of the
inherent parallelism of the training set: each processor on the
Connection Machine (there are at most 65536) evaluates the forward and
backward propagation phase for one training pattern only. Thus the whole
training set is evaluated in parallel, and the training time no longer
depends on the size of the set. For large training sets especially,
this reduces the training time greatly. For example:

I trained a network with 28 nodes, 133 links, and 23 biases to approximate
the differential equations for the pole-balancing task adopted from
Anderson's dissertation.  With a training set of 16384 patterns, using
the conventional "strain" command, one learning epoch took about 110.6
seconds on a SUN 4/110; the Connection Machine with this SUN as the
frontend managed the same in 0.076 seconds.

 --> This reduces one week of exhaustive training to approximately seven minutes!

(By parallelizing the networks themselves, similar acceleration can
also be achieved with smaller training sets.)
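
The idea in a serial C sketch (the actual C/Paris calls are machine-
specific and omitted here; forward_backward is a hypothetical stand-in
for the per-pattern forward and backward pass):

    #define NPAT 16384   /* training patterns, one per virtual processor */
    #define NW   133     /* links in the network */

    /* hypothetical: run forward+backward for pattern p, return its gradient */
    extern void forward_backward(int p, double g[NW]);

    void train_epoch(double w[NW], double lrate)
    {
        double grad[NW], g[NW];
        for (int k = 0; k < NW; k++) grad[k] = 0.0;

        /* On the CM every processor executes this body for its own
           pattern simultaneously; this serial loop is a stand-in. */
        for (int p = 0; p < NPAT; p++) {
            forward_backward(p, g);
            for (int k = 0; k < NW; k++)
                grad[k] += g[k];     /* a global-sum reduction on the CM */
        }

        for (int k = 0; k < NW; k++) /* one batch weight update per epoch */
            w[k] -= lrate * grad[k];
    }

Since the per-pattern passes dominate the cost, the epoch time is set by
one pattern's work plus the reduction, independent of the number of
patterns (up to the number of processors).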



The source is written in C (interface to the Connection Machine: PARIS) and
can easily be embedded into the PDP software package. All original
functions of the simulator are untouched, so it is still possible to
use the extended version without a Connection Machine. If you want
the source, please mail me!


                                             Sebastian Thrun, st@gmdzi.uucp


You can also obtain the source via ftp:

            ftp 129.26.1.90
Name:       anonymous
Password:   <transmit your full e-mail address, e.g. st@gmdzi.uucp>
ftp>        cd pub
ftp>        cd gmd
ftp>        get pdp-cm.c
ftp>        bye



------------------------------

Subject: Mactivation Word docs coming to ftp
From:    Mike Kranzdorf <mikek@wasteheat.colorado.edu>
Date:    Mon, 17 Sep 90 09:31:50 -0600

[[ Editor's note at the end ]]

I thought Connectionists might be interested in the end result of this,
specifically that I will be posting a new copy of Mactivation 3.3
including MS Word documentation to alumni.colorado.edu real soon now.

   Date: Sun, 16 Sep 90 03:37:36 GMT-0600
   From: james@visual2.tamu.edu (James Saxon)
   Message-Id: <9009160937.AA25939@visual2.tamu.edu>
   To: mikek@boulder.colorado.edu
   Subject: Mactivation Documentation
        
   I was going to post this to the net but I figured I'd let you do it if
   you feel it's necessary.
        
   If you're going to give out the bloody program, you might as well have
   just stuck in the decent readable documentation because nobody in
   their right mind is going to pay $5.00 for it.  It's really a cheap
   move, and if you don't replace the ftp file you might just lose all
   your business because I, like many others, just started playing with
   the package.  I don't see any macros for learning repetitive things
   and so I was going to give up because I don't want to spend all day
   trying to figure out how to not switch from the mouse to the keyboard
   trying to set the layer outputs for everything...  And then I'm
   certainly not going to turn to an unformatted Geneva document just to
   prove that the program is not very powerful...

   So you can decide what you want to do, but I suggest not making
   everybody pissed off at you.

I sincerely apologize if my original posting gave the impression that I
was trying to make money from this. Mactivation, along with all the
documentation, has been available via ftp for over 3 years now. Since I
recently had to switch ftp machines here, I thought I would save some
bandwidth and post a smaller copy (in fact this was suggested by several
people). Downloading these things over a 1200 baud modem is very slow.

The point of documentation in this case is to be able to use the program,
and I still think a text file does fine. The $5 request was not for
prettier docs, but for the disk, the postage, and my time. I get plenty
of letters saying "Thank you for letting me avoid ftp", and that was the
idea. The $5 actually started as an alternative for people who didn't
want to bother sending me a disk and a self addressed stamped envelope,
which used to be part of my offer. However, I got too many 5 1/4" disks
and unstamped envelopes, so I dropped that option this round.

        
        I am presently collecting NN software for a class that my
        professor is teaching here at A&M and will keep your program
        around for the students but I warn them about the users manual.
        :-0 And while this isn't a contest, your program will be
        competing with the Rochester Connectionist Simulator, SFINX,
        DESCARTES, and a bunch more...  Lucky I don't have MacBrain...
        which if you haven't seen, you should.  Of course, that's
        $1000, but the manual's free.

If you think you're getting MacBrain for free or a Mac version of the
Rochester Simulator, then don't bother downloading Mactivation.  You will
be disappointed. I wrote Mactivation for myself, and it is not supported
by a company or a university. It's not for research, it's an introduction
which can be used to teach some basics. (Actually you can do research,
but only on the effects of low-level parameters on small nets.  As a
point of interest, my research involved making optical neural nets out of
spatial light modulators, and these parameters were important while the
ability to make large or complex nets was not.)  

        James Saxon
        Scientific Visualization Laboratory
        Texas A&M University
        james@visual2.tamu.edu
        
***The end result of this is that I will post a new copy complete with
the Word docs. I am not a proficient telecommunicator though, so it may
take a week or so. I apologize for the delay.

 --mikek

internet: mikek@boulder.colorado.edu
uucp:{ncar|nbires}!boulder!mikek
AppleLink: oblio

[[ Editor's Note: I hope Mike does not get disgruntled by the tone of Mr.
Saxon who seems to want a great deal for very little $$$.  I, for one,
appreciate Mactivation for what it is (and has been advertised as). I have
given copies to several of my colleagues; they are all pleased with the
functionality and the price.  I'm sure Mr. Saxon will educate himself,
however, on the subtleties of acquiring commercial AND academic software
and dealing with fellow researchers in a professional manner -PM ]]

------------------------------

Subject: Shallice/Neuropsychology: BBS Multiple Book Review
From:    Stevan Harnad <harnad@clarity.Princeton.EDU>
Date:    Mon, 17 Sep 90 23:02:16 -0400

Below is the abstract of a book that will be accorded multiple book
review in Behavioral and Brain Sciences (BBS), an international,
interdisciplinary journal that provides Open Peer Commentary on important
and controversial current research in the biobehavioral and cognitive
sciences. Commentators must be current BBS Associates or nominated by a
current BBS Associate. To be considered as a commentator on this book, to
suggest other appropriate commentators, or for information about how to
become a BBS Associate, please send email to:

harnad@clarity.princeton.edu  or harnad@pucc.bitnet        or write to:
BBS, 20 Nassau Street, #240, Princeton NJ 08542  [tel: 609-921-7771]

To help us put together a balanced list of commentators, please give some
indication of the aspects of the topic on which you would bring your
areas of expertise to bear if you are selected as a commentator.
____________________________________________________________________
          BBS Multiple Book Review of:

         FROM NEUROPSYCHOLOGY TO MENTAL STRUCTURE

              Tim Shallice
              MRC Applied Psychology Unit
              Cambridge, UK

ABSTRACT: Studies of the effects of brain lesions on human behavior are
now cited more widely than ever, yet there is no agreement on which
neuropsychological findings are relevant to our understanding of normal
function. Despite the range of artefacts to which inferences from
neuropsychological studies are potentially subject -- e.g., resource
differences between tasks, premorbid individual differences and
reorganisation of function -- they are corroborated by similar findings
in studies of normal cognition (short-term memory, reading, writing, the
relation between input and output systems and visual perception).  The
functional dissociations found in neuropsychological studies suggest that
not only are input systems organized modularly, but so are central
systems.  This conclusion is supported by considering impairments of
knowledge, visual attention, supervisory functions, memory and
consciousness.

------------------------------

Subject: Re: Shallice/Neuropsychology: BBS Multiple Book review
From:    jpk@ingres.com (Jon Krueger)
Date:    20 Sep 90 04:03:36 +0000

[[ Editor's note: This topic (and Mr. Krueger's comments) is relevant to
neural networks for the following questions: How do we analyze networks
to know what their parts are doing? Do the connectionist models of brain
lesions provide any insight into the biology? What level of analysis is
appropriate for these artificial models (as well as for the brain lesion
studies themselves)? -PM ]]

> ABSTRACT: Studies of the effects of brain lesions on human behavior are
> now cited more widely than ever

Wrong.  No one has studied the effect of brain lesions on human behavior,
and no one is about to.

Observations of the behavior of individuals with lesions are reported,
sometimes reliably.  Testing before and after the lesion is seldom done.
Random assignment of subjects or lesions is never done.  Ethical
restrictions simply don't permit it.  Therefore, you can't vary
independent variables like location of lesion, hold other variables
constant or randomize for them, and discover the effect on dependent
variables like behavior.  We have some guesses about what brain lesions
do to human behavior, but we can't study it scientifically.

Therefore it shouldn't surprise anyone that

> there is no agreement on which neuropsychological findings are relevant
> to our understanding of normal function.

Since there are some manipulations we can do ethically, we might expect
to get some agreement by doing some science using them.

You're also engaging in egregious sort-crossing.  Brain events are not
mixable with mental ones.  Cutting remarks don't produce lesions.
Injecting dye into brains doesn't produce colorful thoughts.  Neurons
don't have ideas.  Holmes can't ask Doyle for more interesting cases.
Holmes can't count the number of pages in the book.  Similarly, brain and
mentality are not the same sort of phenomena.  Statements that mix terms
from the two lexicons are unlikely to mean anything.

  -- Jon

Jon Krueger, jpk@ingres.com 

------------------------------

Subject: Re: Shallice/Neuropsychology: BBS Multiple Book review
From:    tony@nexus.yorku.ca (Tony Wallis)
Organization: York University Department of Computer Science
Date:    21 Sep 90 01:10:13 +0000

Responding to Stevan Harnad, Jon Krueger writes:
| >  [Review of] FROM NEUROPSYCHOLOGY TO MENTAL STRUCTURE [by] Tim Shallice
| > ...
| > ABSTRACT: Studies of the effects of brain lesions on human behavior are
| > now cited more widely than ever. ...
| Wrong.  No one has studied the effect of brain lesions on human
| behavior, and no one is about to. ...
| You're also engaging in egregious sort-crossing.  Brain events are not
| mixable with mental ones.  ...
| ... Holmes can't ask Doyle for more interesting cases.   ...
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Yes he can.  Holmes can review his philosophical position, decide that he
has a creator, and ask that creator to modify his world.  From "below"
(within the fictional world of Holmes) this appears to be religious or
something similar.  From "above" (the world of you, me, and the mind and
writing of Doyle) this appears as Doyle dialoguing with himself.  In
either case, it is a quite valid thing to do.  I am not being facetious
here, just pointing out that you are making some metaphysical
assumptions in your strict partitioning of brain and mind events.

... tony@nexus.yorku.ca = Tony Wallis, York University, Toronto, Canada

------------------------------

Subject: Reply Naturalo
From:    SCHOLTES%ALF.LET.UVA.NL@CUNYVM.CUNY.EDU
Date:    Tue, 18 Sep 90 22:12:00 +0700

Subject: RE: Natural Language Parsing (E.S. Atwell, Vol. 90-54)

Dear Eric,

Here are some references on parsing natural language with neural networks.
Do also pay attention to other, less classical attempts, like the work
done by Elman and Allen. They change the nature of the parsing problem by
representing it as a problem in time. As a result, recursion techniques
are no longer needed. Even more interesting are self-organizing NNs for
the automatic determination of structure by exposing the system to
language strings. I am working on this aspect right now, and I can send
you (and the rest of the digest) more on it at the end of the month.
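
For concreteness, the core of Elman's "problem in time" formulation is a
simple recurrent network: the hidden layer is copied back as a context
input for the next word, so structure is carried through time rather than
by explicit recursion. A minimal C sketch of one time step (layer sizes
and the weight layout are illustrative, not from any of the papers below):

    #include <math.h>

    #define NIN  26   /* input units per word (illustrative) */
    #define NHID 20   /* hidden units = size of the context layer */

    /* hidden(t) = sigmoid(Wxh*input(t) + Whh*context(t-1));
       the new hidden vector becomes the next step's context. */
    void srn_step(const double in[NIN], double context[NHID],
                  const double wxh[NHID][NIN], const double whh[NHID][NHID])
    {
        double h[NHID];
        for (int i = 0; i < NHID; i++) {
            double net = 0.0;
            for (int j = 0; j < NIN; j++)  net += wxh[i][j] * in[j];
            for (int j = 0; j < NHID; j++) net += whh[i][j] * context[j];
            h[i] = 1.0 / (1.0 + exp(-net));
        }
        for (int i = 0; i < NHID; i++)
            context[i] = h[i];   /* copy-back: hidden -> context */
    }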

Good Luck

Jan C. Scholtes
Univ. of Amsterdam
Department of  Computational Linguistics
Faculty of Arts
THE NETHERLANDS

 --------------------------
References

[Akker et al., 1989]: Akker, R. op den, Alblas, H., Nijholt, A. and Oude
Luttinghuis, P. (1989). An Annotated Bibliography on Parallel Parsing.
Memoranda Informatica 89-67, Universiteit Twente.

[Fanty, 1985]: Fanty, M. (1985). Context-Free Parsing in Connectionist
Networks.  TR 174, Computer Science, University of Rochester.

[Hanson et al., 1987]: Hanson, S.J. and Kegl, J. (1987). PARSNIP: A
Connectionist Network that Learns Language Grammar from Exposure to
Natural Language Sentences. Proceedings of the Cog. Sci. Conf., Seattle,
1987.

[Jelinek et al., 1989]: Jelinek, F., Fujisaki, T., Cocke, J., Black, E.
and Nishino, T. (1989). A Probabilistic Parsing Method for Sentence
Disambiguation.  CMU International Parsing Workshop, 1989, pp. 85-94.

[Li et al., 1987]: Li, T. and Chun, H.W. (1987). A Massively Parallel
Network-based Natural Language Parsing System. Proceedings of the 2nd
International Conference on Computers and Applications.

[McClelland et al., 1986]: McClelland, J.L. and Kawamoto, A.H. (1986).
Mechanisms of Sentence Processing: Assigning Roles to Constituents of
Sentences.  Parallel Distributed Processing, (D.E. Rumelhart, J.L.
McClelland Eds.), Vol 2, pp. 273-325. MIT Press

[Nijholt, 1989]: Nijholt, A. (1989). Parallel Parsing Strategies in
Natural Language Processing. Memoranda Informatica 89-41, Universiteit
Twente.

[Nijholt, 1990]: Nijholt, A. (1990). Meta-Parsing in Neural Networks.
Memoranda Informatica 90-08, Universiteit Twente.

[Nijholt, 1990]: Nijholt, A. (1990). The CYK-Approach to Serial and
Parallel Parsing. Memoranda Informatica 90-13, Universiteit Twente.

[Selman et al., 1987]: Selman, B. and Hirst, G. (1987). Parsing as an
Energy Minimization Problem. Genetic Algorithms and Simulated Annealing,
(L. Davis, Ed.), Pitman, London.

[Sikkel, 1990]: Sikkel, N. (1990). Connectionist Parsing of Context-Free
Grammars. Memoranda Informatica 90-30, Universiteit Twente.

[Small et al., 1982]: Small, S., Cottrell, G. and Shastri, L. (1982).
Toward Connectionist Parsing. Proceedings of the National Conference on
AI, Pittsburgh, PA, August 1982, pp. 247-250.

[Tanenhaus et al., 1987]: Tanenhaus, M.K., Dell, G.S. and Carlson, G.
(1987).  Context Effects and Lexical Processing: A Connectionist Approach
to Modularity.  Modularity in Knowledge Representation and NLU, (J.L.
Garfield, Ed.), MIT Press, Cambridge.

[Waltz et al., 1985]: Waltz, D.L. and Pollack, J.B. (1985). Massively
Parallel Parsing. Cognitive Science, Vol. 9, Number 1, January-March, pp.
51-74.


Also consider the chapter on NLP in Matthew Zeidenberg's excellent book
on NN, Neural Networks in Artificial Intelligence, which gives a clear
and understandable review of the work combining NLP and NN.


------------------------------

Subject: UCSD job opening: Cognitive Science
From:    elman@amos.ucsd.edu (Jeff Elman)
Date:    Tue, 18 Sep 90 12:44:31 -0700


                            Assistant Professor
                             Cognitive Science
                    UNIVERSITY OF CALIFORNIA, SAN DIEGO

The Department of Cognitive Science at UCSD expects to receive permission
to hire one person at the assistant professor level (tenure-track). We
seek someone whose interests cut across conventional disciplines.  The
Department takes a broadly based approach covering experimental,
theoretical, and computational investigations of the biological basis of
cognition, cognition in individuals and social groups, and machine
intelligence.

Candidates should send a vita, reprints, a short letter describing their
background and interests, and names and addresses of at least three
references to:

     UCSD
     Search Committee/Cognitive Science 0515e
     9500 Gilman Dr.
     La Jolla, CA  92093-0515-e

Applications must be received prior to January 15, 1991.  Salary will be
commensurate with experience and qualifications, and will be based upon
UC pay schedules.  Women and minorities are especially encouraged to
apply.  The University of California, San Diego is an Affirmative
Action/Equal Opportunity Employer.




------------------------------

Subject: Edelmannian nets?
From:    "Bruce E. Nevin" <bnevin@ccb.bbn.com>
Date:    Wed, 19 Sep 90 08:46:46 -0400

Vivek Anumolu (anumolu@cis.uab.edu) asks "fellow NN researchers" the
question "Anyone using Edelman's theories?"

I am not an NN researcher, only an interested consumer of this list, so
this may very well be far off the mark, but it seems to me that work on
GANNET (Generation and Adaptation of Neural Networks by Evolutionary
Techniques) may fill the bill.  This British project involves Logica,
Meiko, the engineering department of the University of Cambridge, and the
physiology department of the University of Oxford.  I have read only a
summary report of it in _New Scientist_ for 25 August 1990 (p. 28).  It
refers not to Edelman but to Richard Dawkins' book _The Blind
Watchmaker_.  They use genetic algorithm techniques in the design and
iterative refinement of neural nets.  I don't know how they deal with
scaling problems (presumably toy problems, small nets, and/or a small
number of trial nets in the pool, so as not to overrun resources; the
question then is whether the results scale up to more complex situations).
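
As a rough illustration of the general approach (my own sketch, not the
GANNET code): a pool of trial nets, encoded as weight vectors, is scored
on the task, the better half survives, and the pool is refilled with
mutated copies. The fitness function, pool size, and mutation rate below
are all placeholders:

    #include <stdlib.h>

    #define POP 20   /* trial nets in the pool (illustrative) */
    #define NW  50   /* weights per net */

    /* hypothetical fitness: how well a weight vector performs the task */
    extern double evaluate_net(const double w[NW]);

    static double rand_unit(void) { return (double)rand() / RAND_MAX; }

    void ga_generation(double pop[POP][NW])
    {
        double fit[POP];
        for (int i = 0; i < POP; i++)
            fit[i] = evaluate_net(pop[i]);

        /* crude selection: sort the pool by fitness, best first */
        for (int i = 0; i < POP; i++)
            for (int j = i + 1; j < POP; j++)
                if (fit[j] > fit[i]) {
                    double t = fit[i]; fit[i] = fit[j]; fit[j] = t;
                    for (int k = 0; k < NW; k++) {
                        double s = pop[i][k];
                        pop[i][k] = pop[j][k];
                        pop[j][k] = s;
                    }
                }

        /* refill the worse half with mutated copies of the better half */
        for (int i = POP / 2; i < POP; i++)
            for (int k = 0; k < NW; k++)
                pop[i][k] = pop[i - POP / 2][k]
                          + (rand_unit() < 0.1 ? rand_unit() - 0.5 : 0.0);
    }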

Perhaps some UK participant can say more.  The NS article quotes Clifton
Hughes at Logica.  Someone on the GA list may know more too.

        Bruce


------------------------------

Subject: putting Edelman into practice
From:    Stephen Smoliar <smoliar@vaxa.isi.edu>
Date:    Wed, 19 Sep 90 07:04:22 -0700

Vivek Anumolu inquired as to why there is currently not a lot of activity
in pursuing Gerald Edelman's theory of Neuronal Group Selection.  First
of all, for the benefit of all readers interested in this question, I
offer up the following entry from the list of technical reports currently
on file at ISI:

NEURONAL GROUP SELECTION THEORY : A GROUNDING IN ROBOTICS.  Donnett, Jim;
Smithers, Tim.  University of Edinburgh, DAI RP 461.  November, 1989.

I have read this report, and it is rather preliminary.  I shall be
passing through Edinburgh at the end of this month and hope to learn more
about their effort then.

Of course, a single report does not constitute "a lot of activity."  The
primary answer to Vivek's question is magnitude.  Computational models of
Neuronal Group Selection require FAR more computational resources than
connectionism.  Consequently, the sorts of phenomena which the Edelman
group can currently demonstrate may be written off as trivial by
connectionists.  (Needless to say, the Edelman group is not building such
systems to generate trivialities.  They are actually more interested in
modeling the underlying BIOLOGICAL phenomena, so it makes sense to begin
with the simplest of tasks.)  Furthermore, if you want to rise above the
level of simple perceptual categorization, you have to build up several
additional layers of selection mechanisms.  Edelman's latest book, THE
REMEMBERED PRESENT, is his current working hypothesis of what this
layered architecture might look like.  Of course, if you do not
have enough compute power to build an implementation of the lowest layer
for "real" data, building on top of that layer is practically out of the
question.

Even so, many of Edelman's ideas are appealing.  I am
particularly taken with his attempt to describe memory as a process of
"recategorization."  I must confess that I still do not have a clear idea
of what he means by this, because it runs against my intuitions of a "file
cabinet" memory where objects remain static in fixed locations.
Nevertheless, I think it is worthwhile to try to pursue Edelman's vision
of a memory which is constantly in a state of flux, responding as a
dynamic system to every interaction with some sort of global
reorganization.  I can think of two questions which deserve investigation
and can probably be pursued independently of attempts to implement
perceptual categorization for "real" data (a toy sketch follows the list):

        1.  From a computational point of view, what would such a memory
                look like?  Would it be a variation on a PDP architecture?
                Could it maintain an explicit representation of symbols;
                or would all phenomena have to "emerge" from patterns of
                activation?

        2.  Could we build such a memory to serve a hybrid system?  Suppose,
                for example, we are trying to retrieve from a memory of actions
                in order to establish what action to take in a given situation.
                Could that memory of actions be maintained with a system which
                employs recategorization?  How would the agent trying to decide
                what actions to take interact with that memory?
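
As one toy answer to the first question (my speculation, certainly not
Edelman's mechanism): a Hebbian auto-associative store in C, in which
every new pattern perturbs the entire weight matrix, so each interaction
globally reorganizes the memory and retrieval re-settles the cue into
whatever attractor the current weights define:

    #define N 64   /* units in the memory (illustrative) */

    /* Storing x (+1/-1 valued) perturbs ALL weights: nothing is filed
       away in a fixed slot, so old attractors drift with new experience. */
    void store(double w[N][N], const double x[N], double rate)
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (i != j)
                    w[i][j] += rate * x[i] * x[j];
    }

    /* Recall as "recategorization": settle the cue s under the CURRENT
       weights, which need not reproduce the trace originally stored. */
    void recall(const double w[N][N], double s[N], int sweeps)
    {
        for (int t = 0; t < sweeps; t++)
            for (int i = 0; i < N; i++) {
                double net = 0.0;
                for (int j = 0; j < N; j++)
                    net += w[i][j] * s[j];
                s[i] = (net >= 0.0) ? 1.0 : -1.0;
            }
    }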

I am just beginning to think about these questions and would certainly
appreciate thoughts offered up by other readers of this Digest.

------------------------------

Subject: Marr's VISION out of date 
From:    slehar@bucasd.bu.edu
Date:    Thu, 20 Sep 90 07:11:15 -0400


In the last Neuron Digest (V6  #54) honig@ICS.UCI.EDU (David A. Honig)
quotes my comments on Grossberg's BCS/FCS vision model, and goes on to
say...

 "Connectionists interested  in this reasoning,  and in the  important
  relationship between functionality,  algorithm,  and implementation,
  and how these should be analyzed,  might want  to read  David Marr's
  book, _Vision_ (WH Freeman & Co, 1982)."

David Marr's book VISION is delightfully lucid and beautifully
illustrated, and I thoroughly agree with his analysis of the three
levels of modelling.  Nevertheless, I believe that there are two fatal
flaws in the philosophy of his vision model.

The first fatal flaw is the feedforward nature of his model, from the
raw primal sketch through the 2&1/2-D sketch to the 3-D model
representation.  Decades of "image understanding" and "pattern
recognition" research have shown us that such feed-forward processing
has a great deal of difficulty with natural imagery.  The problem lies
in the fact that whenever "feature extraction" or "image enhancement"
is performed, it recognizes or enhances some features but in the
process inevitably degrades others or introduces artifacts.  With
successive levels of processing the artifacts accumulate until, at the
highest levels of processing, there is no way to distinguish the real
features from the artifacts.  Even in our own vision, with all its
sophistication, we occasionally see things that are not there.  The
real problem here is that once a stage of processing is performed, it
is never reviewed or reconsidered.

Grossberg has shown how nature solves this problem: by use of top-down
feedback.  Whenever a feature is recognized at any level, a copy of
that feature is passed back DOWN the processing hierarchy in an
attempt to improve the match at the lower levels.  If, for instance, a
set of disconnected edges suggests a larger continuous edge to a higher
level, that "hypothesis" is passed down to the local edge detectors to
see if they can find supporting evidence for the missing pieces by
locally lowering their detection thresholds.  If a faint edge is
indeed found, it is enhanced by resonant feedback.  If, however, there
is strong local opposition to the hypothesis, then the enhancement is
NOT performed.  This is the cooperative / competitive loop of the BCS
model, which serves to disambiguate the image by simultaneous matching
at multiple levels.  This explains why, when we occasionally see
something that isn't there, we see it in such detail until at a higher
level a conflict occurs, at which time the apparition "pops" back to
being something more reasonable.
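
A toy C sketch of this feedback step (my own caricature for this digest,
not Grossberg's actual equations): a higher-level hypothesis lowers the
detection thresholds of the low-level detectors it predicts, a detector
that then finds even faint evidence is resonantly enhanced, and one
facing strong opposing evidence is left alone:

    #define NEDGE 64   /* local edge detectors along a contour */

    /* hyp[i]: top-down predicted support for edge i (0..1)
       raw[i]: bottom-up evidence    opp[i]: locally opposing evidence */
    void topdown_pass(double act[NEDGE], const double raw[NEDGE],
                      const double hyp[NEDGE], const double opp[NEDGE])
    {
        const double base_thresh = 0.5;
        const double drop = 0.3;   /* how far a hypothesis lowers the bar */
        const double gain = 1.5;   /* resonant enhancement factor */

        for (int i = 0; i < NEDGE; i++) {
            double thresh = base_thresh - drop * hyp[i];
            if (raw[i] > thresh && opp[i] < raw[i])
                act[i] = gain * raw[i];  /* resonance: enhance the match */
            else
                act[i] = raw[i];         /* no support, or local opposition */
        }
    }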

The second fatal flaw in Marr's vision model is related to the first.
In the finest tradition of "AI", Marr's 3-D model is an abstract
symbolic representation of the visual input, totally divorced from the
lower-level stimuli which generated it.  The great advance of the
"neural network" perspective is the recognition that manipulation of
high-level symbols is meaningless without regard to the hierarchy of
lower-level representations to which they are attached.  When you look
at your grandmother, for instance, some high-level node (or nodes) must
fire in recognition.  At the same time, however, you are very conscious
of the low-level details of the image: the strands of hair, the
wrinkles around the eyes, etc.  In fact, even in her absence the
high-level node conjures up such low-level features, without which that
node would have no real meaning.  It is only because that node rests on
the pinnacle of a hierarchy of such lower nodes that it has a meaning
of "grandmother".  The perfectly grammatical sentence "Grandmother is
purple" is only recognized as nonsense when visualized at the lowest
level, illustrating that logical processing cannot be separated from
low-level visualization.

Although I recognize Marr's valuable and historic contribution to the
understanding of vision, I believe that in this fast-moving field we
have already progressed to new insights and radically different
models.  I would be delighted to provide further information by email
to interested parties on Grossberg's BCS model and my implementation
of it.

(O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O)
(O)((O))(((              slehar@bucasb.bu.edu              )))((O))(O)
(O)((O))(((    Steve Lehar Boston University Boston MA     )))((O))(O)
(O)((O))(((    (617) 424-7035 (H)   (617) 353-6741 (W)     )))((O))(O)
(O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O)


------------------------------

Subject: MIND Workshop Announcement
From:    elsberry@arrisun3.arl.utexas.edu (Wes Elsberry)
Date:    Tue, 18 Sep 90 21:09:28 -0500

[[ Editor's Note: Abstracts will follow in the next issue. -PM ]]
 
 
 
                             Announcement
 
NEURAL NETWORKS FOR KNOWLEDGE REPRESENTATION AND INFERENCE
Fourth Annual Workshop of the Metroplex Institute for Neural Dynamics (MIND)
 
 
                           October 4-6, 1990
                            IBM Westlake, TX
                   (near Dallas - Fort Worth Airport)
 
 
Conference Organizers:
 
Daniel Levine, University of Texas at Arlington (Mathematics)
Manuel Aparicio, IBM Application Solutions Division
 
 
Speakers will include:
 
James Anderson, Brown University (Psychology)
Jean-Paul Banquet, Hopital de la Salpetriere, Paris
John Barnden, New Mexico State University (Computer Science)
Claude Cruz, Plexus Systems Incorporated
Robert Dawes, Martingale Research Corporation
Richard Golden, University of Texas at Dallas (Human Development)
Sam Leven, Radford University (Brain Research Institute)
Janet Metcalfe, Dartmouth College (Psychology)
Jordan Pollack, Ohio State University (Computer Science)
Karl Pribram, Radford University (Brain Research Institute)
Lokendra Shastri, University of Pennsylvania (Computer Science)
 
 
Topics will include:
 
Connectionist models of semantic comprehension.  Architectures for
evidential and case-based reasoning.  Connectionist approaches to symbolic
problems in AI such as truth maintenance and dynamic binding.
Representations of logical primitives, data structures, and constitutive
relations.  Biological mechanisms for knowledge representation and
knowledge-based planning.
 
We plan to follow the talks with a structured panel discussion on these
questions: Can neural networks do numbers?  Will architectures for pattern
matching also be useful for precise reasoning, planning, and inference?
 
Tutorial Session:
 
Robert Dawes, President of Martingale Research Corporation, will present a
three-hour tutorial on neurocomputing on the evening of October 3.  This
preparation for the workshop will be free of charge to all pre-registrants.
 
 -----------------------------------------------------------------------------
 
                           Registration Form
 
NEURAL NETWORKS FOR KNOWLEDGE REPRESENTATION AND INFERENCE
Fourth Annual Workshop of the Metroplex Institute for Neural Dynamics (MIND)
 
Name: _____________________________________________________
 
Affiliation: ______________________________________________
 
Address: __________________________________________________
         __________________________________________________
         __________________________________________________
         __________________________________________________
 
Telephone number: _________________________________________
 
Electronic mail: __________________________________________
 
 
Conference fee enclosed (please check appropriate line):
 
$50   for MIND members         before September 30   ______
$60   for MIND members       on/after September 30   ______
$60   for  non-members         before September 30   ______
$70   for  non-members       on/after September 30   ______
$10   for student MIND members            any time   ______
$20   for student non-members             any time   ______
 
 
Tutorial session (check if you plan to attend):      ______
Note: This is free of charge to pre-registrants.
 
Suggested Hotels:
 
Solana Marriott Hotel.  Next to the IBM complex, with continuous shuttle
bus available to the meeting site; ask for the MIND conference rate of
$80/night. Call (817) 430-3848 or (800) 228-9290.
 
Campus Inn, Arlington.  30 minutes from conference, but rides are
available if needed; $39.55 for single/night.  Call (817)
860-2323.
 
 
Conference programs, maps, and other information will be mailed to pre-
registrants in mid-September.
 
Please send this form with check or money order to:
 
Dr. Manuel Aparicio
IBM  Mail Stop 03-04-40
5 West Kirkwood Blvd.
Roanoke, TX  76299-0001
(817) 962-5944
 
[I have a set of abstracts available for download from Central Neural
System BBS, U.S. telephone 817-551-9363.  The filename is WORKABST.MND.  
 -- Wesley R. Elsberry]
 


------------------------------

End of Neuron Digest [Volume 6 Issue 55]
****************************************