[net.periphs] Searle, Turing, Symbols, Categories

harnad@mind.UUCP (Stevan Harnad) (09/27/86)

The following are the Summary and Abstract, respectively, of two papers
I've been giving for the past year on the colloquium circuit. The first
is a joint critique of Searle's argument AND of the symbolic approach
to mind-modelling, and the second is an alternative proposal and a
synthesis of the symbolic and nonsymbolic approaches to the induction
and representation of categories.

I'm about to publish both papers, but on the off chance that
there is still a conceivable objection that I have not yet rebutted,
I am inviting critical responses. The full preprints are available
from me on request (and I'm still giving the talks, in case anyone's
interested).

***********************************************************
Paper #1:
(Preprint available from author)

                 MINDS, MACHINES AND SEARLE

                       Stevan Harnad
                Behavioral & Brain Sciences
                      20 Nassau Street
                     Princeton, NJ 08542

Summary and Conclusions:

Searle's provocative "Chinese Room Argument" attempted to
show that the goals of "Strong AI" are unrealizable.
Proponents of Strong AI are supposed to believe that (i) the
mind is a computer program, (ii) the brain is irrelevant,
and (iii) the Turing Test is decisive. Searle's point is
that since the programmed symbol-manipulating instructions
of a computer capable of passing the Turing Test for
understanding Chinese could always be performed instead by a
person who could not understand Chinese, the computer can
hardly be said to understand Chinese. Such "simulated"
understanding, Searle argues, is not the same as real
understanding, which can only be accomplished by something
that "duplicates" the "causal powers" of the brain. In the
present paper the following points have been made:

1.  Simulation versus Implementation:

Searle fails to distinguish between the simulation of a
mechanism, which is only the formal testing of a theory, and
the implementation of a mechanism, which does duplicate
causal powers. Searle's "simulation" only simulates
simulation rather than implementation. It can no more be
expected to understand than a simulated airplane can be
expected to fly. Nevertheless, a successful simulation must
capture formally all the relevant functional properties of a
successful implementation.

2.  Theory-Testing versus Turing-Testing:

Searle's argument conflates theory-testing and Turing-
Testing. Computer simulations formally encode and test
models for human perceptuomotor and cognitive performance
capacities; they are the medium in which the empirical and
theoretical work is done. The Turing Test is an informal and
open-ended test of whether or not people can discriminate
the performance of the implemented simulation from that of a
real human being. In a sense, we are Turing-Testing one
another all the time, in our everyday solutions to the
"other minds" problem.

3.  The Convergence Argument:

Searle fails to take underdetermination into account. All
scientific theories are underdetermined by their data; i.e.,
the data are compatible with more than one theory. But as
the data domain grows, the degrees of freedom for
alternative (equiparametric) theories shrink. This
"convergence" constraint applies to AI's "toy" linguistic
and robotic models as well, as they approach the capacity to
pass the Total (asymptotic) Turing Test. Toy models are not
modules.

4.  Brain Modeling versus Mind Modeling:

Searle also fails to note that the brain itself can be
understood only through theoretical modeling, and that the
boundary between brain performance and body performance
becomes arbitrary as one converges on an asymptotic model of
total human performance capacity.

5.  The Modularity Assumption:

Searle implicitly adopts a strong, untested "modularity"
assumption to the effect that certain functional parts of
human cognitive performance capacity (such as language) can
be successfully modeled independently of the rest (such
as perceptuomotor or "robotic" capacity). This assumption
may be false for models approaching the power and generality
needed to pass the Total Turing Test.

6.  The Teletype versus the Robot Turing Test:

Foundational issues in cognitive science depend critically
on the truth or falsity of such modularity assumptions. For
example, the "teletype" (linguistic) version of the Turing
Test could in principle (though not necessarily in practice)
be implemented by formal symbol-manipulation alone (symbols
in, symbols out), whereas the robot version necessarily
calls for full causal powers of interaction with the outside
world (seeing, doing AND linguistic understanding).

7.  The Transducer/Effector Argument:

Prior "robot" replies to Searle have not been principled
ones. They have added on robotic requirements as an
arbitrary extra constraint. A principled
"transducer/effector" counterargument, however, can be based
on the logical fact that transduction is necessarily
nonsymbolic, drawing on analog and analog-to-digital
functions that can only be simulated, but not implemented,
symbolically.
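
A toy sketch may make this concrete (the sketch and its simulated_adc
function are purely illustrative inventions of mine, not anything from
the paper): a symbolic "simulation" of an analog-to-digital transducer
maps numbers to numbers, symbols in and symbols out. The causal step of
converting physical energy (light, sound, pressure) into a number is
exactly what such a simulation never performs.

def simulated_adc(voltage: float, levels: int = 256, v_max: float = 5.0) -> int:
    """Quantize an already-numeric 'voltage' into a discrete code word.

    Purely illustrative: the input is itself a symbol (a number), not a
    physical magnitude, so no transduction actually takes place here.
    """
    clamped = max(0.0, min(voltage, v_max))
    return min(levels - 1, int(clamped / v_max * levels))

print(simulated_adc(3.3))  # -> 168: a symbol derived from another symbol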

8.  Robotics and Causality:

Searle's argument hence fails logically for the robot
version of the Turing Test, for in simulating it he would
either have to USE its transducers and effectors (in which
case he would not be simulating all of its functions) or he
would have to BE its transducers and effectors, in which
case he would indeed be duplicating their causal powers (of
seeing and doing).

9.  Symbolic Functionalism versus Robotic Functionalism:

If symbol-manipulation ("symbolic functionalism") cannot in
principle accomplish the functions of the transducer and
effector surfaces, then there is no reason why every
function in between has to be symbolic either.  Nonsymbolic
function may be essential to implementing minds and may be a
crucial constituent of the functional substrate of mental
states ("robotic functionalism"): In order to work as
hypothesized, the functionalist's "brain-in-a-vat" may have
to be more than just an isolated symbolic "understanding"
module -- perhaps even hybrid analog/symbolic all the way
through, as the real brain is.

10.  "Strong" versus "Weak" AI:

Finally, it is not at all clear that Searle's "Strong
AI"/"Weak AI" distinction captures all the possibilities, or
is even representative of the views of most cognitive
scientists.

Hence, most of Searle's argument turns out to rest on
unanswered questions about the modularity of language and
the scope of the symbolic approach to modeling cognition. If
the modularity assumption turns out to be false, then a
top-down symbol-manipulative approach to explaining the mind
may be completely misguided because its symbols (and their
interpretations) remain ungrounded -- not for Searle's
reasons (since Searle's argument shares the cognitive
modularity assumption with "Strong AI"), but because of the
transducer/effector argument (and its ramifications for the
kind of hybrid, bottom-up processing that may then turn out
to be optimal, or even essential, in between transducers and
effectors). What is undeniable is that a successful theory
of cognition will have to be computable (simulable), if not
exclusively computational (symbol-manipulative). Perhaps
this is what Searle means (or ought to mean) by "Weak AI."

*************************************************************

Paper #2:
(To appear in: "Categorical Perception"
S. Harnad, ed., Cambridge University Press 1987
Preprint available from author)

            CATEGORY INDUCTION AND REPRESENTATION

                       Stevan Harnad
                Behavioral & Brain Sciences
                      20 Nassau Street
                     Princeton, NJ 08542

Categorization is a very basic cognitive activity. It is
involved in any task that calls for differential responding,
from operant discrimination to pattern recognition to naming
and describing objects and states-of-affairs.  Explanations
of categorization range from nativist theories denying that
any nontrivial categories are acquired by learning to
inductivist theories claiming that most categories are learned.

"Categorical perception" (CP) is the name given to a
suggestive perceptual phenomenon that may serve as a useful
model for categorization in general: For certain perceptual
categories, within-category differences look much smaller
than between-category differences even when they are of the
same size physically. For example, in color perception,
differences between reds and differences between yellows
look much smaller than equal-sized differences that cross
the red/yellow boundary; the same is true of the phoneme
categories /ba/ and /da/. Indeed, the effect of the category
boundary is not merely quantitative, but qualitative.
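
A toy numerical sketch may help fix ideas (the sigmoid "warping" and
its boundary and gain parameters below are inventions of mine for
illustration, not anything measured or proposed here): two stimulus
pairs with the same physical separation yield very different perceived
separations depending on whether they straddle the category boundary.

import math

BOUNDARY = 0.5   # hypothetical category boundary on a 0..1 continuum
GAIN = 12.0      # hypothetical steepness of the warping near the boundary

def perceived(x: float) -> float:
    """Map a physical stimulus value onto a warped 'perceived' scale."""
    return 1.0 / (1.0 + math.exp(-GAIN * (x - BOUNDARY)))

# Two pairs, each separated by the same physical amount (0.2):
within  = abs(perceived(0.10) - perceived(0.30))  # both on one side
between = abs(perceived(0.40) - perceived(0.60))  # straddling the boundary

print(f"within-category difference:  {within:.3f}")   # about 0.075
print(f"between-category difference: {between:.3f}")  # about 0.537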

There have been two theories to explain CP effects. The
"Whorf Hypothesis" explains color boundary effects by
proposing that language somehow determines our view of
reality. The "motor theory of speech perception" explains
phoneme boundary effects by attributing them to the patterns
of articulation required for pronunciation. Both theories
seem to raise more questions than they answer, for example:
(i) How general and pervasive are CP effects? Do they occur
in other modalities besides speech-sounds and color?  (ii)
Are CP effects inborn or can they be generated by learning
(and if so, how)? (iii) How are categories internally
represented? How does this representation generate
successful categorization and the CP boundary effect?

Some of the answers to these questions will have to come
from ongoing research, but the existing data do suggest a
provisional model for category formation and category
representation. According to this model, CP provides our
basic or elementary categories. In acquiring a category we
learn to label or identify positive and negative instances
from a sample of confusable alternatives. Two kinds of
internal representation are built up in this learning by
"acquaintance": (1) an iconic representation that subserves
our similarity judgments and (2) an analog/digital feature-
filter that picks out the invariant information allowing us
to categorize the instances correctly. This second,
categorical representation is associated with the category
name. Category names then serve as the atomic symbols for a
third representational system, the (3) symbolic
representations that underlie language and that make it
possible for us to learn by "description."
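
For concreteness, here is a skeletal sketch of how these three kinds of
representation might relate (the class names, the threshold "detector,"
and the toy "description" below are inventions of mine, not an
implementation of the model): an iconic trace subserving similarity
judgments, a categorical feature-filter that assigns a category name,
and symbolic strings of such names that support learning by
description.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class IconicRep:
    """(1) Analog copy of the input; subserves similarity judgments."""
    trace: List[float]

    def similarity(self, other: "IconicRep") -> float:
        # Illustrative measure: closer to zero means more similar.
        return -sum(abs(a - b) for a, b in zip(self.trace, other.trace))

@dataclass
class CategoricalRep:
    """(2) A feature filter picking out the invariant information that
    warrants applying (or withholding) the category name."""
    name: str
    detector: Callable[[List[float]], bool]  # toy invariance detector

    def label(self, trace: List[float]) -> str:
        return self.name if self.detector(trace) else "non-" + self.name

# (3) Symbolic representations: strings of category names, combinable
# so that new categories can be learned by "description" alone.
descriptions: Dict[str, str] = {"warm-colored": "red or yellow"}

red = CategoricalRep("red", lambda t: sum(t) / len(t) > 0.5)  # toy rule
sample = [0.9, 0.8, 0.7]
print(IconicRep(sample).similarity(IconicRep([0.9, 0.8, 0.6])))  # ~ -0.1
print(red.label(sample))                  # -> "red"
print(descriptions["warm-colored"])       # -> "red or yellow"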

This model provides no particular or general solution to the
problem of inductive learning, only a conceptual framework;
but it does have some substantive implications, for example,
(a) the "cognitive identity of (current) indiscriminables":
Categories and their representations can only be provisional
and approximate, relative to the alternatives encountered to
date, rather than "exact." There is also (b) no such thing
as an absolute "feature," only those features that are
invariant within a particular context of confusable
alternatives. Contrary to prevailing "prototype" views,
however, (c) such provisionally invariant features MUST
underlie successful categorization, and must be "sufficient"
(at least in the "satisficing" sense) to subserve reliable
performance with all-or-none, bounded categories, as in CP.
Finally, the model brings out some basic limitations of the
"symbol-manipulative" approach to modeling cognition,
showing how (d) symbol meanings must be functionally
anchored in nonsymbolic, "shape-preserving" representations
-- iconic and categorical ones. Otherwise, all symbol
interpretations are ungrounded and indeterminate. This
amounts to a principled call for a psychophysical (rather
than a neural) "bottom-up" approach to cognition.