[mod.ai] Seminar - Minds, Machines, and Searle

chandros@topaz.RUTGERS.EDU.UUCP (02/20/87)

                             USACS


                     is pleased to announce

                   a talk by Stevan Harnad on

                   Minds, Machines, and Searle

                     Tuesday, February 24th

                      Hill Center Room 705

                           at 5:30 PM



For those of you who aren't familiar with Stevan Harnad, he is the
editor of the journal Behavioral and Brain Sciences (where Searle's
Chinese Room argument first appeared), as well as a regular poster 
to mod.ai.  

If you would like to come to dinner with us, please send mail to: 
rutgers!topaz!chandross.  I need to know by Monday (2/23) at the 
latest to make reservations.  For further information, or a transcript 
of the talk, send email.  



SUMMARY AND CONCLUSIONS:

Searle's provocative "Chinese Room Argument" attempted to show that the
goals of "Strong AI" are unrealizable. Proponents of Strong AI are supposed
to believe that (i) the mind is a computer program, (ii) the brain is
irrelevant, and (iii) the Turing Test is decisive. Searle's point is that
since the programmed symbol-manipulating instructions of a computer capable of
passing the Turing Test for understanding Chinese could always be performed
instead by a person who could not understand Chinese, the computer can hardly
be said to understand Chinese. Such "simulated" understanding, Searle argues,
is not the same as real understanding, which can only be accomplished by
something that "duplicates" the "causal powers" of the brain. In the present
paper the following points have been made:

1.  Simulation versus Implementation:
Searle fails to distinguish between the simulation of a mechanism, which is
only the formal testing of a theory, and the implementation of a mechanism,
which does duplicate causal powers. Searle's "simulation" only simulates
simulation rather than implementation. It can no more be expected to understand
than a simulated airplane can be expected to fly. Nevertheless, a successful
simulation must capture formally all the relevant functional properties of a
successful implementation.

2.  Theory-Testing versus Turing-Testing:
Searle's argument conflates theory-testing and Turing-Testing. Computer
simulations formally encode and test models for human perceptuomotor and
cognitive performance capacities; they are the medium in which the empirical
and theoretical work is done. The Turing Test is an informal and open-ended
test of whether or not people can discriminate the performance of the
implemented simulation from that of a real human being. In a sense, we are
Turing-Testing one another all the time, in our everyday solutions to the
"other minds" problem.

3.  The Convergence Argument:
Searle fails to take underdetermination into account. All scientific theories
are underdetermined by their data; i.e., the data are compatible with more
than one theory. But as the data domain grows, the degrees of freedom for
alternative (equiparametric) theories shrink. This "convergence" constraint
applies to AI's "toy" linguistic and robotic models as well, as they approach
the capacity to pass the Total (asymptotic) Turing Test. Toy models are not
modules.
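(A toy numerical illustration of this convergence constraint appears at the
end of this summary.)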

4.  Brain Modeling versus Mind Modeling:
Searle also fails to note that the brain itself can be understood only through
theoretical modeling, and that the boundary between brain performance and body
performance becomes arbitrary as one converges on an asymptotic model of total
human performance capacity.

5.  The Modularity Assumption: 
Searle implicitly adopts a strong, untested "modularity" assumption to the
effect that certain functional parts of human cognitive performance capacity
(such as language) can be successfully modeled independently of the rest
(such as perceptuomotor or "robotic" capacity). This assumption may be false
for models approaching the power and generality needed to pass the Total 
Turing Test.

6.  The Teletype versus the Robot Turing Test: 
Foundational issues in cognitive science depend critically on the truth or
falsity of such modularity assumptions. For example, the "teletype"
(linguistic) version of the Turing Test could in principle (though not
necessarily in practice) be implemented by formal symbol-manipulation alone
(symbols in, symbols out), whereas the robot version necessarily calls for
full causal powers of interaction with the outside world (seeing, doing
AND linguistic understanding).

7.  The Transducer/Effector Argument:
Prior "robot" replies to Searle have not been principled ones. They have added
on robotic requirements as an arbitrary extra constraint. A principled
"transducer/effector" counterargument, however, can be based on the logical
fact that transduction is necessarily nonsymbolic, drawing on analog and
analog-to-digital functions that can only be simulated, but not implemented,
symbolically.

8.  Robotics and Causality:
Searle's argument hence fails logically for the robot version of the Turing
Test, for in simulating it he would either have to USE its transducers and
effectors (in which case he would not be simulating all of its functions) or
he would have to BE its transducers and effectors, in which case he would
indeed be duplicating their causal powers (of seeing and doing).

9.  Symbolic Functionalism versus Robotic Functionalism:
If symbol-manipulation ("symbolic functionalism") cannot in principle
accomplish the functions of the transducer and effector surfaces, then there
is no reason why every function in between has to be symbolic either.
Nonsymbolic function may be essential to implementing minds and may be a
crucial constituent of the functional substrate of mental states ("robotic
functionalism"): In order to work as hypothesized, the functionalist's
"brain-in-a-vat" may have to be more than just an isolated symbolic
"understanding" module -- perhaps even hybrid analog/symbolic all the way
through, as the real brain is.

10.  "Strong" versus "Weak" AI:
Finally, it is not at all clear that Searle's "Strong AI"/"Weak AI"
distinction captures all the possibilities, or is even representative of the
views of most cognitive scientists.

Hence, most of Searle's argument turns out to rest on unanswered questions
about the modularity of language and the scope of the symbolic approach to
modeling cognition. If the modularity assumption turns out to be false, then
a top-down symbol-manipulative approach to explaining the mind may be
completely misguided because its symbols (and their interpretations) remain
ungrounded -- not for Searle's reasons (since Searle's argument shares the
cognitive modularity assumption with "Strong AI"), but because of the
transducer/effector argument (and its ramifications for the kind of hybrid,
bottom-up processing that may then turn out to be optimal, or even essential,
in between transducers and effectors). What is undeniable is that a successful
theory of cognition will have to be computable (simulable), if not exclusively
computational (symbol-manipulative). Perhaps this is what Searle means (or
ought to mean) by "Weak AI."
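
A purely illustrative aside on point 3, not drawn from the paper itself: the
short Python sketch below treats polynomials of degree 0 through 6 as rival
"theories" and counts how many remain compatible with a growing body of data,
where "compatible" means reproducing every observation to within a small
tolerance.  The generating function (sine), the interval, the tolerance, and
the candidate family are all arbitrary choices made only for this illustration.

    # Toy analogy for underdetermination -- NOT from Harnad's paper.
    # Rival "theories" (polynomials of degree 0..6) are tested against data
    # produced by a fixed "true mechanism" (here, arbitrarily, the sine function).
    import warnings
    import numpy as np

    warnings.simplefilter("ignore")   # silence rank-deficiency warnings from polyfit

    def surviving_theories(n_points, tol=0.02):
        """Count candidate polynomials reproducing all observations within tol."""
        x = np.linspace(0.0, 3.0, n_points)
        y = np.sin(x)                 # the observed data
        survivors = 0
        for degree in range(7):       # the rival "theories"
            coeffs = np.polyfit(x, y, degree)
            if np.max(np.abs(np.polyval(coeffs, x) - y)) < tol:
                survivors += 1
        return survivors

    for n in (3, 5, 10, 40):
        print(f"{n:2d} observations -> {surviving_theories(n)} compatible 'theories'")

With only a few observations, many of the candidate polynomials fit the data
exactly; as the observations accumulate, the count of surviving candidates
shrinks -- which is all the analogy is meant to convey: more data, fewer
degrees of freedom for equally compatible alternative theories.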