[comp.ai] Behaviorism/Cognitivism, LTT/TTT: An Aside

harnad@elbereth.rutgers.edu (Stevan Harnad) (03/04/89)

(1) THE LTT VS. THE TTT AND (2) BEHAVIORISM VS. COGNITIVISM: A CLARIFICATION

Aside 1: There are deep differences between the LTT (the Linguistic
version of the Turing Test: symbols-In, symbols-Out) and the TTT (the
Total robotic version of the Turing Test: sensory projections of
real-world objects and states of affairs In, motor operations on
real-world objects and states of affairs Out). The LTT is just a proper
subset of the TTT (because symbols are objects). Searle's Argument is
directed at the LTT only; the TTT is immune to it.
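
To make the subset claim concrete, here is a minimal illustrative sketch in
Python (the class and field names are my own hypothetical choices, not from
the post): an LTT candidate is just a TTT candidate whose inputs and outputs
happen to be restricted to one particular kind of real-world object, namely
symbol strings.

    # Illustrative sketch only: symbolic I/O as a special case of sensorimotor I/O.
    from dataclasses import dataclass

    @dataclass
    class SensoryProjection:        # proximal stimulation from some real-world object
        source_object: str

    @dataclass
    class MotorOperation:           # an action performed on some real-world object
        target_object: str

    class TTTCandidate:
        """Total Turing Test: real-world objects in, operations on real-world objects out."""
        def respond(self, stimulus: SensoryProjection) -> MotorOperation:
            raise NotImplementedError

    class LTTCandidate(TTTCandidate):
        """Linguistic Turing Test: symbols in, symbols out -- a proper subset of the
        TTT, since symbol tokens are themselves just one kind of object."""
        def respond(self, stimulus: SensoryProjection) -> MotorOperation:
            assert stimulus.source_object == "symbol string"
            return MotorOperation(target_object="symbol string")
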

Aside 2: Behaviorists are concerned only with behavior and the
reinforcement histories that "shape" it. Cognitive theorists are
concerned with inferring the internal structures and processes that
generate the behavioral capacities themselves. But both behaviorists
and cognitivists are empiricists, in that they recognize that
observable behavior is all they will ever have by way of objective
DATA.

Aside 3: The status of neural data (neural and molecular "behavior") is
not yet clear: It may turn out to be the subject matter of a distinct,
independent empirical domain (as some functionalists contend), or it
may turn out both (1) to suggest early functional hunches about the
wherewithal to pass the TTT and (2) to prove useful in fine-tuning a
near-complete TTT model (as I suspect). The boundary between bodily
and neural "behavior" is fuzzy in any case.

Aside 4: As to subjective data: I recommend "methodological
epiphenomenalism," except insofar as subjective data suggest objectively
viable functional hunches; otherwise we are likely to be tempted to do
premature hermeneutics (mentalistic overinterpretation) on toy models
with sub-TTT performance capacities instead of pressing on to pass the
TTT. Once we're near passing the TTT, subjective data might also help
in fine-tuning our functional candidate, but they can never be decisive
or binding, since they can never be objectively tested (except by BEING
the candidate -- and that doesn't help the rest of us). A complete
functional theory that can be implemented to pass the TTT (and is
fine-tuned as closely as we like, say) will always be equally true of
creatures with minds, like ourselves, and of insentient robots that only
BEHAVE exactly as if they had minds (if such insentient robots are
possible). That's the other-minds problem and the mind/body problem,
and the empirical buck stops there.

Ref: Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental
and Theoretical Artificial Intelligence 1: 5-25
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771