[comp.ai] Symbolic Connectionism?

lammens@sunybcs.uucp (Jo Lammens) (04/23/89)

In article <17467@cup.portal.com> dan-hankins@cup.portal.com (Daniel B Hankins) writes:
>     This is important.  Traditional AI systems deal in English (or
>whatever language) words as their base symbol system.  The system is taught
>to manipulate these symbols on the basis of what the programmer has in mind
>as the meaning of the symbols.  Therefore, in some important sense, any
>meaning in the computer program was put there by the programmer.
>
>     However, Neural Networks are qualitatively different.  They use
>physical quantities (like synapse conductivities and neuron activation
>levels) as their base symbol system.  These 'symbols' then really do end up
>representing (in the sense you defined) external reality, because any
>'meaning' they acquire is a result of external experience rather than
>implicit assignment of meaning by a programmer.

As far as I know (correct me if I'm wrong), all neural network
topologies today are hand-crafted, and there is no prospect for any
sort of general-purpose topology. In other words, based on what the
'programmer' (should I say builder, or perhaps grower) has in mind as
the purpose of the network and the meaning of the symbols it
manipulates, a certain topology is chosen (number of layers, number of
neurons, number of connections, type of input/activation/output
functions, and what have you). Then the external symbols the net is
supposed to manipulate are assigned to (combinations of) input and/or
output values for the net. Then the net sets off, merrily manipulating
its 'symbols' (activation levels and connection strengths), until the
programmer/builder decides whether the net has converged successfully
or not. This decision is based on the EXTERNAL INTERPRETATIONS of the
net's 'symbols', which are no more intrinsically related to the
activation levels etc. than the meanings of words are related to their
forms, or the meanings of other symbols to the forms of the symbols
(leaving the inevitable exceptions like pictograms etc. out of
consideration). So where is the qualitative difference? The technique
used to teach the system how to manipulate the symbols is different,
but the relation between the symbols and their external interpretation
is not. This is not a plea for or against neural networks.
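
To make this concrete, here is a rough sketch (in Python, with every
name and the toy AND task invented purely for illustration, not taken
from any particular system) of the kind of setup I mean: the topology
is fixed in advance, and the net's 'symbols' mean anything at all only
through the encode/decode tables supplied by the builder.

import random
random.seed(0)

# Topology chosen by the builder in advance: 2 input lines, 1 threshold unit.
weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
bias = random.uniform(-0.5, 0.5)

# The EXTERNAL INTERPRETATION, supplied entirely by the builder.
encode = {'FALSE': 0.0, 'TRUE': 1.0}
def decode(activation):
    return 'TRUE' if activation > 0.5 else 'FALSE'

def activate(symbols):
    x = [encode[s] for s in symbols]
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 if total > 0.0 else 0.0        # threshold unit

# A toy task (logical AND), again stated in the builder's symbols.
task = [(('FALSE', 'FALSE'), 'FALSE'),
        (('FALSE', 'TRUE'),  'FALSE'),
        (('TRUE',  'FALSE'), 'FALSE'),
        (('TRUE',  'TRUE'),  'TRUE')]

# Perceptron-style training: weights get nudged until the *builder*
# decides the outputs match the externally assigned targets.
for epoch in range(100):
    errors = 0
    for symbols, target in task:
        err = encode[target] - activate(symbols)
        if err != 0.0:
            errors += 1
            x = [encode[s] for s in symbols]
            weights = [w + 0.1 * err * xi for w, xi in zip(weights, x)]
            bias += 0.1 * err
    if errors == 0:     # "converged", but only relative to the external task
        break

for symbols, target in task:
    print(symbols, '->', decode(activate(symbols)), '  target:', target)

Delete the two tables and nothing about TRUE or FALSE remains; the net
itself only ever traffics in numbers.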

Jo Lammens

BITNET: lammens@sunybcs.BITNET          Internet:  lammens@cs.Buffalo.EDU
UUCP: ...!{watmath,boulder,decvax,rutgers}!sunybcs!lammens

dan-hankins@cup.portal.com (Daniel B Hankins) (04/24/89)

In article <5418@cs.Buffalo.EDU> lammens@sunybcs.uucp (Jo Lammens) writes:

>As far as I know (correct me if I'm wrong), all neural network topologies
>today are hand-crafted, and there is no prospect for any sort of
>general-purpose topology.

     Okay.  I have seen abstracts and conference topics for (a) random
connections and (b) connection generating systems based on biological
models.

     I'll grant that *most* ANN systems in use are fixed-topology, where
the topology is designed in advance.  I personally don't find them very
interesting - I'm into generality.


>In other words, based on what the 'programmer' [...] has in mind [...]
>a certain topology is chosen (number of layers, number of neurons, number
>of connections, type of input/activation/output functions, and what have
>you).

     The number of units, connections, and types of functions really are
not part of the topology... but they are often chosen by the programmer in
response to a particular problem setup.  Again, I consider this cheating,
and am interested in more general-purpose systems.

>Then the external symbols [...] are assigned to [...] input and/or output
>values for the net. Then the net sets off, merrily manipulating its
>'symbols' [...] until the programmer/builder decides whether the net has
>converged successfully or not.
>This decision is based on the EXTERNAL INTERPRETATIONS of the net's
>'symbols', which are no more intrinsically related to the activation
>levels etc. than the meanings of words are related to their forms, or the
>meanings of other symbols to the forms of the symbols (leaving the
>inevitable exceptions like pictograms etc. out of consideration). So where
>is the qualitative difference? The technique used to teach the system how
>to manipulate the symbols is different, but the relation between the
>symbols and their external interpretation is not. This is not a plea for
>or against neural networks.

     You're writing here of supervised learning, which is again the most
common form used (because it's the easiest to work with, both
computationally and mathematically).  I still think it's cheating, because
it isn't the way biological systems do it.  Nobody directly supervises the
learning of biological networks.  I think many researchers use it because
it makes the network converge faster on 'the solution'.

     When I write of connectionist systems that can achieve intelligence
(or combinations of connectionist and symbolic), I am thinking of more
biologically accurate approaches - non-back-propagation, self-organizing
topology, and unsupervised learning.
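
     To illustrate one reading of 'unsupervised learning' (a rough Python
sketch; the competitive winner-take-all rule and all the constants are my
own illustrative choices, not a claim about how biological networks
actually do it), here is a tiny net that organizes itself around its
inputs without any teacher-supplied targets:

import random
random.seed(1)

# Two clusters of input patterns; nothing ever labels them for the net.
data = ([(random.gauss(0.2, 0.05), random.gauss(0.2, 0.05)) for _ in range(50)] +
        [(random.gauss(0.8, 0.05), random.gauss(0.8, 0.05)) for _ in range(50)])

# Two units with randomly initialized weight vectors.
units = [[random.random(), random.random()] for _ in range(2)]

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

for _ in range(20):                      # a few passes over the data
    random.shuffle(data)
    for x in data:
        winner = min(units, key=lambda w: dist2(w, x))
        # Hebbian-flavored update: only the winning unit moves toward the input.
        for i in range(2):
            winner[i] += 0.05 * (x[i] - winner[i])

# Typically each weight vector settles near one of the cluster centers,
# even though no target values were ever provided.
print("unit weight vectors:", units)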

     I agree that the sort of systems you are thinking of are more like
traditional AI approaches, in that they embody the _programmer's_
symbol-referent associations rather than generating their own.

     However, that is not the be-all and end-all of connectionist systems.
Sorry if I misled anyone into thinking I was talking about the
garden-variety connectionist systems currently in use.


Dan Hankins

sarima@gryphon.COM (Stan Friesen) (04/30/89)

In article <17548@cup.portal.com> dan-hankins@cup.portal.com (Daniel B Hankins) writes:
>
>     I'll grant that *most* ANN systems in use are fixed-topology, where
>the topology is designed in advance.  I personally don't find them very
>interesting - I'm into generality.
>
	Yes, generality is nice, but I know of no natural neural-nets that
have generality at a low level.  The human brain only achieves generality by
consisting of numerous complex subnets that are individually quite specialized. 
>
>     The number of units, connections, and types of functions really are
>not part of the topology... but they are often chosen by the programmer in
>response to a particular problem setup.  Again, I consider this cheating,
>and am interested in more general-purpose systems.

	Again, how is this cheating?  Most natural neural nets are pre-
programmed, by evolution, for special purposes.  It is just that a complex
collection of specialized neural nets tends to approximate generality.

>
>     You're writing here of supervised learning, which is again the most
>common form used (because it's the easiest to work with, both
>computationally and mathematically).  I still think it's cheating, because
>it isn't the way biological systems do it.  Nobody supervises directly the
>learning of bio networks.  I think that many do this because it makes the
>network converge faster on 'the solution'.
>
	Now here you have what I consider a serious limitation of current
neural nets.

>     When I write of connectionist systems that can achieve intelligence
>(or combinations of connectionist and symbolic), I am thinking of more
>biologically accurate approaches - non-back-propagation, self-organizing
>topology, and unsupervised learning.

	Hold on! What's wrong with back propagation!  Every biological
neural system more complex than a reflex loop has extensive back-propagation!
It is a key element of feed-back based control in such systems.  I do not
believe that any adaptive system can be achieved without it.
-- 
Sarima Cardolandion			sarima@gryphon.CTS.COM
aka Stanley Friesen			rutgers!marque!gryphon!sarima
					Sherman Oaks, CA

dan-hankins@cup.portal.com (Daniel B Hankins) (05/03/89)

In article <15323@gryphon.COM> sarima@gryphon.COM (Stan Friesen) writes:

>In article <17548@cup.portal.com> dan-hankins@cup.portal.com (Daniel B
>Hankins) writes:
>
>>     I'll grant that *most* ANN systems in use are fixed-topology, where
>>the topology is designed in advance.  I personally don't find them very
>>interesting - I'm into generality.
>
>     Yes, generality is nice, but I know of no natural neural-nets that
>have generality at a low level.  The human brain only achieves generality
>by consisting of numerous complex subnets that are individually quite
>specialized. 

     I imagine that you are thinking of perceptrons (Did I spell that
right?), and other recent findings in neuroscience.

     You may well be right.  But I think that the functions that are
pre-programmed are likely the ones that are most closely tied to the
sensory organs and self-regulating systems (such as the heart and lungs).

     In any case, we don't need to understand these organizations in great
detail.  Nature didn't design the subnet structures; they evolved. 
Likewise, we can use GAs (genetic algorithms) to evolve the subnets, later
integrating them into a more coherent whole.
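
     As a rough sketch of what I mean (Python; the bit-string encoding and
the fitness function are placeholders I invented for illustration, not
anyone's published method), a GA can evolve a connection matrix instead of
having the builder design it by hand:

import random
random.seed(2)

N = 8                                   # units in a candidate subnet
GENOME = N * N                          # one bit per possible connection

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME)]

def fitness(genome):
    # Placeholder: a real system would decode the genome into a net,
    # train or run it on a task, and score its performance.  Here we just
    # reward sparse nets that still connect unit 0 (an "input") to unit
    # N-1 (an "output").
    conn = [genome[i * N:(i + 1) * N] for i in range(N)]
    reachable, frontier = {0}, [0]
    while frontier:
        u = frontier.pop()
        for v in range(N):
            if conn[u][v] and v not in reachable:
                reachable.add(v)
                frontier.append(v)
    connected = 1.0 if (N - 1) in reachable else 0.0
    sparsity = 1.0 - sum(genome) / GENOME
    return connected + sparsity

def crossover(a, b):
    cut = random.randrange(1, GENOME)
    return a[:cut] + b[cut:]

def mutate(g, rate=0.02):
    return [1 - bit if random.random() < rate else bit for bit in g]

population = [random_genome() for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                  # simple truncation selection
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(20)]

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 3), "connections:", sum(best))

In a real system the fitness evaluation is of course the expensive part,
since it means exercising each candidate subnet on an actual task.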

     I understand that there is a lot of pre-built structure in places like
the optical cortex, the auditory center and so on.  Is there a lot of this
kind of specialization in the cerebrum?  Since its function appears to be
to integrate the functioning of the other areas and produce meaningful
action (i.e. it thinks), I would hazard a guess that it shows less of this
kind of specialized topology than the other areas.  I do have a foggy
recollection of a finding that the neurons in that area were arranged in
clumps, and the clumps in clumps and so on, but I didn't hear anything to
indicate that there was any specific ordering to the clumping.


>>     When I write of connectionist systems that can achieve intelligence
>>(or combinations of connectionist and symbolic), I am thinking of more
>>biologically accurate approaches - non-back-propagation, self-organizing
>>topology, and unsupervised learning.
>
>     Hold on! What's wrong with back propagation!  Every biological neural
>system more complex than a reflex loop has extensive back-propagation! It
>is a key element of feed-back based control in such systems.  I do not
>believe that any adaptive system can be achieved without it.

     I think I may not have made myself clear.  When I said non-back-prop,
I meant systems that do not conform to the standard backprop systems in use
in the vast majority of applications.  In any case, back-propagation in the
sense it is used in these nets is a terrible oversimplification.

     My latest information on synaptic reinforcement indicates that
postsynaptic neuron firing does not, by itself, cause synaptic
reinforcement.  Rather, the postsynaptic neuron must fire _quite soon_
after the presynaptic neuron does in order to cause reinforcement.  Or
perhaps it is the other way around.  In any case, firing of pre- and
post-synaptic neurons must be proximate in time in order to cause synapse
reinforcement.
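
     One way to write down such a timing-dependent reinforcement rule (a
Python sketch; the time window, learning rate, and the direction of the
asymmetry are illustrative guesses, since as I said it may well be the
other way around):

def update_weight(w, t_pre, t_post, window=20.0, rate=0.01,
                  w_min=0.0, w_max=1.0):
    """t_pre, t_post: firing times (ms) of the pre-/post-synaptic neurons."""
    dt = t_post - t_pre
    if 0.0 < dt <= window:          # post fired soon after pre: reinforce
        w += rate * (1.0 - dt / window)
    elif -window <= dt < 0.0:       # post fired just before pre: depress
        w -= rate * (1.0 + dt / window)
    return min(w_max, max(w_min, w))

w = 0.5
w = update_weight(w, t_pre=100.0, t_post=105.0)   # proximate pairing: w goes up
w = update_weight(w, t_pre=200.0, t_post=195.0)   # reversed order: w goes down
print(w)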

     And of course, even that is only a part of the story.  Some evidence
suggests that neurotransmitters are enhanced exponentially rather than
linearly.  And how many ANN systems take into account the local suppression
effect, where the firing of a neuron inhibits the firing of neighboring
neurons briefly, avoiding destructive synchronization?
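
     A crude sketch of that local suppression effect (Python; the
neighborhood, threshold, and values are arbitrary illustrative choices): a
unit that fires briefly inhibits its immediate neighbors, so no two
adjacent units fire in the same sweep and the layer cannot lock into
synchronized firing.

def sweep(potentials, threshold=1.0):
    fired, suppressed = [], set()
    for i, v in enumerate(potentials):
        if i in suppressed:
            continue                            # briefly inhibited by a neighbor
        if v >= threshold:
            fired.append(i)
            suppressed.update({i - 1, i + 1})   # suppress immediate neighbors
    return fired

print(sweep([1.2, 1.1, 1.3, 0.4, 1.05]))   # -> [0, 2, 4]: no adjacent pair fires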


Dan Hankins

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (05/04/89)

From article <17877@cup.portal.com>, by dan-hankins@cup.portal.com (Daniel B Hankins):

" ...  In any case, we don't need to understand these organizations in great
" detail. ...
"      I understand that there is a lot of pre-built structure in places like
" the optical cortex, the auditory center and so on.  Is there a lot of this
" kind of specialization in the cerebrum?  Since its function appears to be
" to integrate the functioning of the other areas and produce meaningful
" action (i.e. it thinks), I would hazard a guess that it shows less of this
" kind of specialized topology than the other areas. ...

I would hazard an opposite guess, on the grounds that higher level
integrative functions have evolved later, and on the general
principle that evolution works by making use of present materials.
Thus one expects later evolved structures to depend on earlier
ones, at least in their fine level detail, and perhaps in their
overall form.

For instance, in phonetics one has [mb] replacing [nb] because
given some low-level details about the structure of the mouth,
the former is easier to say.  In turn, this has the consequence
that the labiality is redundantly represented in two segments,
the [m] and the [b], making it possible for just one to act
as a sufficient perceptual cue in some circumstances, so that
the other may be lost.  There are grammatical assimilations
too -- verbs commonly agree with their subjects in languages
of the world.  Then, in such cases, pronominal subjects need
not be expressed, because there is sufficient information
in the verb.

But maybe it's just a coincidence.

		Greg, lee@uhccux.uhcc.hawaii.edu