[comp.ai] Grounds / the evolution of symbols

silber@sbphy.ucsb.edu (03/15/89)

Re: symbol-grounding etc: perhaps evolutionary considerations can shed light
upon the "symbol grounding" problem.  Presumably, the gross evolutionary
steps w.r.t. the nervous system are: 1) early development of sensory systems
(e.g. the eye-spot of the euglena) ... 2) development of "pre-cognitive"
modes of complex neural activity (e.g. instinctual behaviour), 
and 3) cognitive modes GROUNDED in all the previous modes. My rather
limited knowledge of "semiotics" etc. prevents me from speculating as
to when 'symbols' evolved.
The relationship of a protozoan eye-spot to its motility is
clearly (?) not 'symbolic'. Does what WE call an 'instinct'
function 'symbolically' in, for example, fish?
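
One way to make that contrast concrete (a purely hypothetical sketch;
the names, rules, and thresholds below are invented for illustration):
a eye-spot-style coupling maps sensory input directly onto motor
output, whereas a symbol-mediated loop first discretizes the input
into an arbitrary token and then acts on the token's identity alone,
i.e. on its 'shape'.

    # Hypothetical illustration (all names invented) of the contrast
    # in question: a direct nonsymbolic sensorimotor coupling vs. a
    # loop mediated by a token manipulated only by its "shape".

    def eyespot_agent(light_intensity):
        """Euglena-style direct coupling: motor output is a continuous
        function of sensory input; no token intervenes."""
        return 1.0 - light_intensity  # swim harder as it gets darker

    RULES = {"DARK": "SWIM", "LIGHT": "STOP"}  # keyed on token identity

    def symbolic_agent(light_intensity):
        """Symbol-mediated: the input is discretized into an arbitrary
        token, and behaviour is read off a rule table that cares only
        about the token's identity, not what it stands for."""
        token = "LIGHT" if light_intensity > 0.5 else "DARK"
        return RULES[token]

    print(eyespot_agent(0.2))   # 0.8 -> graded, nonsymbolic response
    print(symbolic_agent(0.2))  # 'SWIM' -> rule applied to a token

On that toy picture, silber's question becomes: is a fish's instinct
more like the first function or the second?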

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/15/89)

silber@sbphy.ucsb.edu of UC Santa Barbara Physics Computer Services wrote:

" Re: symbol-grounding etc... My rather limited knowledge of "semiotics" etc.,
" prevents me from speculating as to when 'symbols' evolved.

My self-limited knowledge of "semiotics" (just enough exposure to
convince me there are no answers there) does not prevent me from
speculating about HOW symbols evolved. ("When" is probably too vague
a question.)
Here is the abstract of a paper on that subject I will be giving at
the section on "The Emergence of Symbolic Structures" at the CNLS
9th International Congress on Emergent Computation in Los Alamos,
New Mexico, May 22 - 26:

     GROUNDING SYMBOLS IN A NONSYMBOLIC SUBSTRATE

                 Stevan Harnad
         Behavioral and Brain Sciences
                 Princeton NJ

There has been much discussion recently about the scope and limits of
purely symbolic models of the mind and of the proper role of
connectionism in mental modeling. In this paper the "symbol grounding
problem" -- the problem of how the meanings of meaningless symbols,
manipulated only on the basis of their shapes, can be grounded in
anything but more meaningless symbols in a purely symbolic system -- is
described, and then a potential solution is sketched: Symbolic
representations must be grounded bottom-up in nonsymbolic
representations of two kinds: (1) iconic representations, which are
analogs of the sensory projections of objects and events, and (2)
categorical representations, which are learned or innate
feature-detectors that pick out the invariant features of object and
event CATEGORIES. Elementary
symbols are the names of object and event categories, picked out by
their (nonsymbolic) categorical representations. Higher-order symbols
are then grounded in these elementary symbols. Connectionism is a
natural candidate for the mechanism that learns the invariant features.
In this way connectionism can be seen as a complementary component in a
hybrid nonsymbolic/symbolic model of the mind, rather than a rival to
purely symbolic modeling. Such a hybrid model would not have an
autonomous symbolic module, however; the symbolic functions would
emerge as an intrinsically "dedicated" symbol system as a consequence
of the bottom-up grounding of categories and their names.

Ref: Harnad, S. (Ed.) (1987) Categorical Perception: The Groundwork
of Cognition. New York: Cambridge University Press.
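
To make the proposed hybrid mechanism easier to picture, here is a
minimal sketch in Python (all class and function names are
hypothetical, and the perceptron merely stands in for the
connectionist feature-learner; this is an illustration of the scheme
in the abstract, not Harnad's implementation):

    # A minimal sketch of the grounding scheme in the abstract. All
    # names are hypothetical; the perceptron stands in for the
    # connectionist feature-learner.

    import numpy as np

    class CategoricalRep:
        """Learned feature detector that picks out the invariant
        features of one category (a nonsymbolic categorical
        representation)."""
        def __init__(self, n_features, lr=0.1):
            self.w = np.zeros(n_features)
            self.b = 0.0
            self.lr = lr

        def detect(self, icon):
            # icon: the iconic representation, an analog (here simply
            # a raw feature vector) of the sensory projection
            return float(icon @ self.w + self.b > 0)

        def learn(self, icon, is_member):
            # perceptron update toward the category's invariants
            err = float(is_member) - self.detect(icon)
            self.w += self.lr * err * icon
            self.b += self.lr * err

    class GroundedSymbols:
        """Elementary symbols are names of categories, grounded in
        their categorical representations; higher-order symbols are
        grounded only via combinations of already-grounded names."""
        def __init__(self, n_features):
            self.n = n_features
            self.detectors = {}   # elementary symbol -> CategoricalRep
            self.compounds = {}   # higher-order symbol -> components

        def ground(self, name, icons, labels, epochs=25):
            # ground an elementary symbol in labelled sensory examples
            det = CategoricalRep(self.n)
            for _ in range(epochs):
                for icon, y in zip(icons, labels):
                    det.learn(np.asarray(icon, float), y)
            self.detectors[name] = det

        def define(self, name, components):
            # a higher-order symbol is a combination of symbols that
            # are themselves already grounded, directly or indirectly
            assert all(c in self.detectors or c in self.compounds
                       for c in components)
            self.compounds[name] = components

        def applies(self, name, projection):
            # does the symbol's category fit this sensory projection?
            icon = np.asarray(projection, float)
            if name in self.detectors:
                return bool(self.detectors[name].detect(icon))
            return all(self.applies(c, projection)
                       for c in self.compounds[name])

On this sketch one could ground 'horse' and 'striped' directly from
labelled sensory examples and then define 'zebra' purely symbolically
as ['horse', 'striped']: the compound needs no new sensory training,
yet the system has no autonomous symbolic module in the abstract's
sense, because every compound bottoms out in names that stay tied to
their nonsymbolic feature detectors rather than floating free.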
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771