[sci.philosophy.tech] Two Cog Sci Talks at Swarthmore

srh@wind.bellcore.com (stevan r harnad) (10/19/89)

The following two talks will be given back-to-back on Monday, October
30, at Swarthmore College in the Dupont Lecture Room. They are
co-sponsored by the Psychology Department and the Linguistics Program.
For further information contact Judy Kegl (328-8437;
kegl@campus.swarthmore.edu).

               TWO TALKS IN COGNITIVE SCIENCE
     
                     Stevan Harnad
                Department of Psychology
                   Princeton University

(1)           "Minds, Machines, and Searle"
              DUPONT LECTURE ROOM, 1:30-4:00
                   Monday, October 30

SYNOPSIS: The philosopher John Searle's celebrated "Chinese Room
Argument" has not stopped causing frustration in the artificial
intelligence (AI) community since it first appeared in 1980. The
Argument tries to show that a computer running a program can't have a
mind even if it can pass the "Turing Test" (which means you can write
to it as a pen-pal till doomsday and never have reason to doubt that
it's really a person you're writing to). Searle shows that he can do
everything the computer does without understanding the messages he is
sending back and forth, so the computer couldn't be understanding them
either. AI people think the "system" understands, even if Searle
doesn't. Searle replies that he IS the system... Having umpired this
debate for 10 years, I will try to show who's right about what.

(2)           "The Symbol Grounding Problem"
              DUPONT LECTURE ROOM, 4:15-5:45
                    Monday, October 30

SYNOPSIS: There is a deeper side to the Searle debate. Computer
programs just manipulate meaningless symbols in various symbolic
codes. The interpretation of those symbols is supplied by us. Without our
interpretations, a symbol system is like a Chinese/Chinese dictionary:
Look up one meaningless symbol and all you find is some more
meaningless symbols. This means that a mind cannot be just a
symbol-manipulating system, as many today believe. The symbols in a symbol
system are ungrounded, whereas the symbols in our heads are grounded in
the objects and events they stand for. I will try to show how the
meanings in a symbol system could be grounded bottom-up in two kinds of
nonsymbolic representation (analog copies of the sensory surfaces and
feature detectors that pick out object and event categories) with the
currently fashionable neural nets providing the learned "connection"
between elementary symbols and the things they stand for.
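 
To make the proposed hybrid route a bit more concrete, here is a
minimal, modern toy sketch in Python (an editorial illustration, not
part of the talk or the papers below). It assumes invented "sensory"
feature vectors and an invented symbol name ("HORSE"): a simple
perceptron-style net plays the role of the learned feature detector,
and an elementary symbol is then applied only to the inputs the
detector picks out.

import random

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Learn weights that detect a category in 'analog' feature vectors."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):   # y is 1 (in category) or 0 (not)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def ground_symbol(x, w, b, symbol="HORSE"):
    """Connect the elementary symbol to the sensory inputs it stands for."""
    detected = sum(wi * xi for wi, xi in zip(w, x)) + b > 0
    return symbol if detected else "NOT-" + symbol

# Invented toy data: 3-feature "sensory projections" of horses vs. non-horses.
random.seed(0)
horses     = [[1.0 + random.random(), 0.9, 0.8] for _ in range(20)]
non_horses = [[random.random() - 1.0, 0.1, 0.2] for _ in range(20)]
samples = horses + non_horses
labels  = [1] * 20 + [0] * 20

w, b = train_perceptron(samples, labels)
print(ground_symbol([1.5, 0.9, 0.8], w, b))   # expected: HORSE
print(ground_symbol([-0.5, 0.1, 0.2], w, b))  # expected: NOT-HORSE

The point of the sketch is only that the link between the symbol and
its referents is learned from nonsymbolic input, not stipulated by an
outside interpreter.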

REFERENCES

Searle, J. (1980) "Minds, Brains, and Programs." Behavioral and Brain
                   Sciences 3: 417-457.

Harnad, S. (1989) "Minds, Machines and Searle." Journal of Theoretical
                   and Experimental Artificial Intelligence 1: 5-25.

Harnad, S. (1990) "The Symbol Grounding Problem." Physica D (in press).

Stevan Harnad
INTERNET: harnad@confidence.princeton.edu   srh@flash.bellcore.com
          harnad@elbereth.rutgers.edu       harnad@princeton.uucp
BITNET:   harnad1@umass.bitnet              harnad@pucc.bitnet
Phone:    (609) 921-7771