[comp.ai] Tech Report: Symbol Grounding Problem

harnad@phoenix.Princeton.EDU (S. R. Harnad) (08/04/89)

            THE SYMBOL GROUNDING PROBLEM

                 Stevan Harnad
            Department of Psychology
              Princeton University

ABSTRACT: There has been much discussion recently about the scope and
limits of purely symbolic models of the mind and about the proper role
of connectionism in cognitive modeling. This paper describes the
"symbol grounding problem" for a semantically interpretable symbol
system:  How can its semantic interpretation be made intrinsic to the
symbol system, rather than just parasitic on the meanings in our heads?
How can the meanings of the meaningless symbol tokens, manipulated
solely on the basis of their (arbitrary) shapes, be grounded in
anything but other meaningless symbols? The problem is analogous to
trying to learn Chinese from a Chinese/Chinese dictionary alone.
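
A toy illustration of this "dictionary-go-round," as a Python
dictionary in which every symbol is defined only by other symbols in
the same dictionary (the entries are invented; the point is only that
lookup never bottoms out in anything nonsymbolic):

    # A "Chinese/Chinese dictionary": every definition is itself
    # just more undefined symbols from the same dictionary.
    dictionary = {
        "ma":     ["shi", "dongwu"],
        "shi":    ["ma", "niao"],
        "dongwu": ["shi", "ma"],
        "niao":   ["dongwu", "shi"],
    }

    def look_up(symbol, steps=8):
        # Chase definitions; each lookup yields only more symbols.
        for _ in range(steps):
            print(symbol, "->", dictionary[symbol])
            symbol = dictionary[symbol][0]
        # However long the chase, we still hold only symbol tokens.

    look_up("ma")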

A candidate solution is sketched: Symbolic representations must be
grounded bottom-up in nonsymbolic representations of two kinds:
(1) iconic representations, which are analogs of the proximal sensory
projections of distal objects and events, and (2) categorical
representations, which are learned and innate feature-detectors that
pick out the invariant features of object and event categories from
their sensory projections. Elementary symbols are the names of these
object and event categories, assigned on the basis of their
(nonsymbolic) categorical representations. Higher-order (3) symbolic
representations, grounded in these elementary symbols, consist of
symbol strings describing category membership relations ("An X is a Y
that is Z").

Connectionism is one natural candidate for the mechanism that learns
the invariant features underlying categorical representations, thereby
connecting names to the proximal projections of the distal objects they
stand for. In this way connectionism can be seen as a complementary
component in a hybrid nonsymbolic/symbolic model of the mind, rather
than a rival to purely symbolic modeling. Such a hybrid model would not
have an autonomous symbolic "module," however; the symbolic functions
would emerge as an intrinsically "dedicated" symbol system as a
consequence of the bottom-up grounding of categories' names in their
sensory representations. Symbol manipulation would be governed not just
by the arbitrary shapes of the symbol tokens, but by the nonarbitrary
shapes of the icons and category invariants in which they are grounded.
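
A minimal sketch of that connectionist component, assuming a single
logistic unit trained by gradient descent on fabricated "projections"
(nothing here is the paper's model; the invariant feature is planted
in the data so the unit can find it):

    import numpy as np

    rng = np.random.default_rng(0)

    # Fabricated proximal projections: feature 0 carries the
    # invariant that separates category members from non-members.
    X = rng.uniform(size=(200, 5))
    y = (X[:, 0] > 0.5).astype(float)

    # One logistic unit: the learning mechanism that picks out the
    # invariant and thereby connects the name to the projections of
    # the objects it stands for.
    w, b = np.zeros(5), 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted membership
        grad = p - y
        w -= 0.1 * (X.T @ grad) / len(y)
        b -= 0.1 * grad.mean()

    def name_of(projection):
        p = 1.0 / (1.0 + np.exp(-(projection @ w + b)))
        return "horse" if p > 0.5 else "not-horse"

    print(name_of(np.array([0.9, 0.2, 0.1, 0.4, 0.3])))  # "horse"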

[Presented at the CNLS Conference on Emergent Computation, June 1989;
        submitted to Physica D. Preprint available.]
-- 
Stevan Harnad  INTERNET:  harnad@confidence.princeton.edu   harnad@princeton.edu
srh@flash.bellcore.com      harnad@elbereth.rutgers.edu    harnad@princeton.uucp
CSNET:    harnad%confidence.princeton.edu@relay.cs.net
BITNET:   harnad1@umass.bitnet      harnad@pucc.bitnet            (609)-921-7771

jps@cat.cmu.edu (James Salsman) (08/05/89)

In article <9753@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (S. R. Harnad) writes:

>             THE SYMBOL GROUNDING PROBLEM

No problem.

The    |         /"Iconic"\  Distributed  /    \        /    \Predicates &/
Outside|--------<  Buffer  >-------------< EPAM >------< SOAR >----------< LTM
World  | Signals \  STM   / Representation\    / Symbols\    / Productions\
                           [1]              [2]           [3]

[1] Rumelhart, D., McClelland, J., et al., "Parallel Distributed
      Processing," MIT Press, 1986.
[2] Richman, H., and Simon, H., "Context Effects in Letter Perception:
      Comparison of Two Theories," Psychological Review, 1989, 96(3): 417.
[3] Laird, J., Newell, A., and Rosenbloom, P., "Soar: An Architecture for
      General Intelligence," Artificial Intelligence, 1987, vol. 33.
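
Read left to right, the diagram is just function composition; a
skeletal Python sketch with every stage stubbed out (none of these
stubs is the cited systems' actual code):

    # Each stub stands in for the system cited below it in the diagram.
    def iconic_buffer(signals):    # STM: raw signals -> icons
        return signals
    def pdp_net(icons):            # [1]: icons -> distributed rep.
        return {"features": icons}
    def epam(distributed):         # [2]: distributed rep. -> symbols
        return ["sym-%d" % i
                for i, _ in enumerate(distributed["features"])]
    def soar(symbols):             # [3]: symbols -> productions
        return [("if", s, "then", "act") for s in symbols]

    # Long-term memory as the composition of the four stages:
    ltm = soar(epam(pdp_net(iconic_buffer([0.3, 0.9]))))
    print(ltm)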

:James

Disclaimer:  I think CMU is the world leader in cognitive science.
-- 

:James P. Salsman (jps@CAT.CMU.EDU)

harnad@phoenix.Princeton.EDU (Stevan Harnad) (08/07/89)

James Salsman (jps@cat.cmu.edu) of Carnegie Mellon University wrote:

> THE SYMBOL GROUNDING PROBLEM... No problem:
>
> The    |         /"Iconic"\  Distributed  /    \        /    \Predicates &/
> Outside|--------<  Buffer  >-------------< EPAM >------< SOAR >----------< LTM
> World  | Signals \  STM   / Representation\    / Symbols\    / Productions\

This is exactly the kind of naive "hook-em-all-up-together" modularism that I
wrote the preprint in question to refute. One can either hold onto these
unexamined beliefs (and the dead ends they lead to in cognitive modeling) or
wake up and smell the coffee...

                   REFERENCES

Harnad, S. (submitted to Physica D) "The Symbol Grounding Problem."
           Presented at the CNLS Conference on Emergent Computation, 1989.

           (1989) "Minds, Machines and Searle." Journal of Experimental
           and Theoretical Artificial Intelligence 1: 5-25.

           (1987) "Category Induction and Representation." In: Harnad, S.
           (Ed.) Categorical Perception: The Groundwork of Cognition.
           New York: Cambridge University Press.
-- 
Stevan Harnad  INTERNET:  harnad@confidence.princeton.edu   harnad@princeton.edu
srh@flash.bellcore.com      harnad@elbereth.rutgers.edu    harnad@princeton.uucp
CSNET:    harnad%confidence.princeton.edu@relay.cs.net
BITNET:   harnad1@umass.bitnet      harnad@pucc.bitnet            (609)-921-7771