[net.ai] Searle, Turing, Symbols, Categories

harnad@mind.UUCP (Stevan Harnad) (09/27/86)

The following are the Summary and Abstract, respectively, of two papers
I've been giving for the past year on the colloquium circuit. The first
is a joint critique of Searle's argument AND of the symbolic approach
to mind-modelling, and the second is an alternative proposal and a
synthesis of the symbolic and nonsymbolic approaches to the induction
and representation of categories.

I'm about to publish both papers, but on the off chance that
there is still a conceivable objection that I have not yet rebutted,
I am inviting critical responses. The full preprints are available
from me on request (and I'm still giving the talks, in case anyone's
interested).

***********************************************************
Paper #1:
(Preprint available from author)

                 MINDS, MACHINES AND SEARLE

                       Stevan Harnad
                Behavioral & Brain Sciences
                      20 Nassau Street
                     Princeton, NJ 08542

Summary and Conclusions:

Searle's provocative "Chinese Room Argument" attempted to
show that the goals of "Strong AI" are unrealizable.
Proponents of Strong AI are supposed to believe that (i) the
mind is a computer program, (ii) the brain is irrelevant,
and (iii) the Turing Test is decisive. Searle's point is
that since the programmed symbol-manipulating instructions
of a computer capable of passing the Turing Test for
understanding Chinese could always be performed instead by a
person who could not understand Chinese, the computer can
hardly be said to understand Chinese. Such "simulated"
understanding, Searle argues, is not the same as real
understanding, which can only be accomplished by something
that "duplicates" the "causal powers" of the brain. In the
present paper the following points have been made:

1.  Simulation versus Implementation:

Searle fails to distinguish between the simulation of a
mechanism, which is only the formal testing of a theory, and
the implementation of a mechanism, which does duplicate
causal powers. Searle's "simulation" only simulates
simulation rather than implementation. It can no more be
expected to understand than a simulated airplane can be
expected to fly. Nevertheless, a successful simulation must
capture formally all the relevant functional properties of a
successful implementation.

2.  Theory-Testing versus Turing-Testing:

Searle's argument conflates theory-testing and Turing-
Testing. Computer simulations formally encode and test
models for human perceptuomotor and cognitive performance
capacities; they are the medium in which the empirical and
theoretical work is done. The Turing Test is an informal and
open-ended test of whether or not people can discriminate
the performance of the implemented simulation from that of a
real human being. In a sense, we are Turing-Testing one
another all the time, in our everyday solutions to the
"other minds" problem.

3.  The Convergence Argument:

Searle fails to take underdetermination into account. All
scientific theories are underdetermined by their data; i.e.,
the data are compatible with more than one theory. But as
the data domain grows, the degrees of freedom for
alternative (equiparametric) theories shrink. This
"convergence" constraint applies to AI's "toy" linguistic
and robotic models as well, as they approach the capacity to
pass the Total (asymptotic) Turing Test. Toy models are not
modules.

4.  Brain Modeling versus Mind Modeling:

Searle also fails to note that the brain itself can be
understood only through theoretical modeling, and that the
boundary between brain performance and body performance
becomes arbitrary as one converges on an asymptotic model of
total human performance capacity.

5.  The Modularity Assumption:

Searle implicitly adopts a strong, untested "modularity"
assumption to the effect that certain functional parts of
human cognitive performance capacity (such as language) can
be successfully modeled independently of the rest (such
as perceptuomotor or "robotic" capacity). This assumption
may be false for models approaching the power and generality
needed to pass the Total Turing Test.

6.  The Teletype versus the Robot Turing Test:

Foundational issues in cognitive science depend critically
on the truth or falsity of such modularity assumptions. For
example, the "teletype" (linguistic) version of the Turing
Test could in principle (though not necessarily in practice)
be implemented by formal symbol-manipulation alone (symbols
in, symbols out), whereas the robot version necessarily
calls for full causal powers of interaction with the outside
world (seeing, doing AND linguistic understanding).

7.  The Transducer/Effector Argument:

Prior "robot" replies to Searle have not been principled
ones. They have added on robotic requirements as an
arbitrary extra constraint. A principled
"transducer/effector" counterargument, however, can be based
on the logical fact that transduction is necessarily
nonsymbolic, drawing on analog and analog-to-digital
functions that can only be simulated, but not implemented,
symbolically.
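
As a purely illustrative aside (not part of the paper's argument), the
point can be made concrete with a toy Python sketch: a "simulated" A/D
converter maps numbers that already stand for voltages onto other
numbers. Symbols go in and symbols come out; no physical energy is ever
transduced. The function and values below are hypothetical.

# Illustrative sketch only: a "simulated" analog-to-digital converter.
# Its input is already a number (a symbol standing in for a voltage),
# so nothing physical is transduced: symbols in, symbols out.

def simulated_adc(voltage, v_ref=5.0, bits=8):
    """Quantize a *represented* voltage into an n-bit integer code."""
    levels = 2 ** bits
    clamped = min(max(voltage, 0.0), v_ref)
    return min(int(clamped / v_ref * levels), levels - 1)

# A real transducer (photoreceptor, microphone, ADC chip) would instead
# convert physical energy into such codes -- a causal step that no
# further symbol manipulation can supply.

if __name__ == "__main__":
    for v in (0.0, 1.25, 2.5, 4.99):
        print("%5.2f V -> code %d" % (v, simulated_adc(v)))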

8.  Robotics and Causality:

Searle's argument hence fails logically for the robot
version of the Turing Test, for in simulating it he would
either have to USE its transducers and effectors (in which
case he would not be simulating all of its functions) or he
would have to BE its transducers and effectors, in which
case he would indeed be duplicating their causal powers (of
seeing and doing).

9.  Symbolic Functionalism versus Robotic Functionalism:

If symbol-manipulation ("symbolic functionalism") cannot in
principle accomplish the functions of the transducer and
effector surfaces, then there is no reason why every
function in between has to be symbolic either.  Nonsymbolic
function may be essential to implementing minds and may be a
crucial constituent of the functional substrate of mental
states ("robotic functionalism"): In order to work as
hypothesized, the functionalist's "brain-in-a-vat" may have
to be more than just an isolated symbolic "understanding"
module -- perhaps even hybrid analog/symbolic all the way
through, as the real brain is.

10.  "Strong" versus "Weak" AI:

Finally, it is not at all clear that Searle's "Strong
AI"/"Weak AI" distinction captures all the possibilities, or
is even representative of the views of most cognitive
scientists.

Hence, most of Searle's argument turns out to rest on
unanswered questions about the modularity of language and
the scope of the symbolic approach to modeling cognition. If
the modularity assumption turns out to be false, then a
top-down symbol-manipulative approach to explaining the mind
may be completely misguided because its symbols (and their
interpretations) remain ungrounded -- not for Searle's
reasons (since Searle's argument shares the cognitive
modularity assumption with "Strong AI"), but because of the
transducer/effector argument (and its ramifications for the
kind of hybrid, bottom-up processing that may then turn out
to be optimal, or even essential, in between transducers and
effectors). What is undeniable is that a successful theory
of cognition will have to be computable (simulable), if not
exclusively computational (symbol-manipulative). Perhaps
this is what Searle means (or ought to mean) by "Weak AI."

*************************************************************

Paper #2:
(To appear in: "Categorical Perception"
S. Harnad, ed., Cambridge University Press 1987
Preprint available from author)

            CATEGORY INDUCTION AND REPRESENTATION

                       Stevan Harnad
                Behavioral & Brain Sciences
                      20 Nassau Street
                     Princeton NJ 08542

Categorization is a very basic cognitive activity. It is
involved in any task that calls for differential responding,
from operant discrimination to pattern recognition to naming
and describing objects and states-of-affairs.  Explanations
of categorization range from nativist theories denying that
any nontrivial categories are acquired by learning to
inductivist theories claiming that most categories are learned.

"Categorical perception" (CP) is the name given to a
suggestive perceptual phenomenon that may serve as a useful
model for categorization in general: For certain perceptual
categories, within-category differences look much smaller
than between-category differences even when they are of the
same size physically. For example, in color perception,
differences between reds and differences between yellows
look much smaller than equal-sized differences that cross
the red/yellow boundary; the same is true of the phoneme
categories /ba/ and /da/. Indeed, the effect of the category
boundary is not merely quantitative, but qualitative.
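
(A purely numerical illustration, not part of the paper: in the toy
Python sketch below, equal physical differences are compressed when
they fall on the same side of a hypothetical category boundary and
stretched when they straddle it. The warping function and all values
are invented.)

import math

BOUNDARY = 0.5   # hypothetical boundary on a 0..1 physical continuum

def perceived(x, gain=10.0):
    """Warp a physical value onto a perceptual scale: a logistic
    squash around the boundary compresses within-category distances
    and stretches between-category ones."""
    return 1.0 / (1.0 + math.exp(-gain * (x - BOUNDARY)))

def perceived_difference(a, b):
    return abs(perceived(a) - perceived(b))

if __name__ == "__main__":
    # Two pairs with the SAME physical difference (0.2):
    print("within-category :", round(perceived_difference(0.1, 0.3), 3))
    print("across-boundary :", round(perceived_difference(0.4, 0.6), 3))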

There have been two theories to explain CP effects. The
"Whorf Hypothesis" explains color boundary effects by
proposing that language somehow determines our view of
reality. The "motor theory of speech perception" explains
phoneme boundary effects by attributing them to the patterns
of articulation required for pronunciation. Both theories
seem to raise more questions than they answer, for example:
(i) How general and pervasive are CP effects? Do they occur
in other modalities besides speech-sounds and color?  (ii)
Are CP effects inborn or can they be generated by learning
(and if so, how)? (iii) How are categories internally
represented? How does this representation generate
successful categorization and the CP boundary effect?

Some of the answers to these questions will have to come
from ongoing research, but the existing data do suggest a
provisional model for category formation and category
representation. According to this model, CP provides our
basic or elementary categories. In acquiring a category we
learn to label or identify positive and negative instances
from a sample of confusable alternatives. Two kinds of
internal representation are built up in this learning by
"acquaintance": (1) an iconic representation that subserves
our similarity judgments and (2) an analog/digital feature-
filter that picks out the invariant information allowing us
to categorize the instances correctly. This second,
categorical representation is associated with the category
name. Category names then serve as the atomic symbols for a
third representational system, the (3) symbolic
representations that underlie language and that make it
possible for us to learn by "description."
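
(One way to picture the three levels, purely as an illustration and not
as the model itself: in the toy Python sketch below, the "icon," the
feature filter, and the category labels are all invented stand-ins,
meant only to show how the levels are related.)

# Toy sketch of the three representational levels described above.

def iconic(raw_input):
    """(1) Iconic representation: an analog, shape-preserving copy of
    the input, subserving similarity judgments (here, just the raw
    signal)."""
    return list(raw_input)

def categorical(icon):
    """(2) Categorical representation: a feature filter that picks out
    invariant information sufficient to assign a category label.
    (Hypothetical invariant: mean intensity below 0.5 -> "dark".)"""
    return "dark" if sum(icon) / len(icon) < 0.5 else "light"

def symbolic(label_a, label_b):
    """(3) Symbolic representation: category names serve as atomic
    symbols that combine into descriptions, permitting learning by
    description rather than acquaintance."""
    return "a %s thing next to a %s thing" % (label_a, label_b)

if __name__ == "__main__":
    patch1, patch2 = [0.1, 0.2, 0.3], [0.8, 0.9, 0.7]
    a, b = categorical(iconic(patch1)), categorical(iconic(patch2))
    print(symbolic(a, b))   # -> "a dark thing next to a light thing"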

This model provides no particular or general solution to the
problem of inductive learning, only a conceptual framework;
but it does have some substantive implications, for example,
(a) the "cognitive identity of (current) indiscriminables":
Categories and their representations can only be provisional
and approximate, relative to the alternatives encountered to
date, rather than "exact." There is also (b) no such thing
as an absolute "feature," only those features that are
invariant within a particular context of confusable
alternatives. Contrary to prevailing "prototype" views,
however, (c) such provisionally invariant features MUST
underlie successful categorization, and must be "sufficient"
(at least in the "satisficing" sense) to subserve reliable
performance with all-or-none, bounded categories, as in CP.
Finally, the model brings out some basic limitations of the
"symbol-manipulative" approach to modeling cognition,
showing how (d) symbol meanings must be functionally
anchored in nonsymbolic, "shape-preserving" representations
-- iconic and categorical ones. Otherwise, all symbol
interpretations are ungrounded and indeterminate. This
amounts to a principled call for a psychophysical (rather
than a neural) "bottom-up" approach to cognition.

rush@cwrudg.UUCP (rush) (10/01/86)

In article <158@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>6.  The Teletype versus the Robot Turing Test:
>
>For example, the "teletype" (linguistic) version of the Turing...
> whereas the robot version necessarily
>calls for full causal powers of interaction with the outside
>world (seeing, doing AND linguistic understanding).
>
Uh...I never heard of the "robot version" of the Turing Test,
could someone please fill me in?? I think that understanding
the reasons for such a test would help me (I make
no claims for anyone else) make some sense out of the rest
of this article. In light of my lack of knowledge, please forgive
my presumption in the following comment. 

>7.  The Transducer/Effector Argument:
>
>A principled
>"transducer/effector" counterargument, however, can be based
>on the logical fact that transduction is necessarily
>nonsymbolic, drawing on analog and analog-to-digital
>functions that can only be simulated, but not implemented,
>symbolically.
>
[ I know I claimed no commentary, but it seems that this argument
  depends heavily on the meaning of the term "symbol". This could
  be a problem that only arises when one attempts to implement some
  of the stranger possibilities for symbolic entities. ]

	Richard Rush	- Just another Jesus freak in computer science
	decvax!cwruecmp!cwrudg!rush

harnad@mind.UUCP (Stevan Harnad) (10/02/86)

In his commentary-not-reply to my <158@mind.UUCP>, Richard Rush
<150@cwrudg.UUCP> asks:

(1)
>     I never heard of the "robot version" of the Turing Test,
>     could someone please fill me in?

He also asks (in connection with my "transducer/effector" argument)
about the analog/symbolic distinction:

(2)
>     I know I claimed no commentary, but it seems that this argument
>     depends heavily on the meaning of the term "symbol". This could
>     be a problem that only arises when one attempts to implement some
>     of the stranger possibilities for symbolic entities. 

In reply to (1): The linguistic version of the turing test (turing's
original version) is restricted to linguistic interactions:
Language-in/Language-out. The robotic version requires the candidate
system to operate on objects in the world. In both cases the (turing)
criterion is whether the system can PERFORM indistinguishably from a human
being. (The original version was proposed largely so that your
judgment would not be prejudiced by the system's nonhuman appearance.)

On my argument the distinction between the two versions is critical,
because the linguistic version can (in principle) be accomplished by
nothing but symbols-in/symbols-out (and symbols in between) whereas
the robotic version necessarily calls for non-symbolic processes
(transducer, effector, analog and A/D). This may represent a
substantive functional limitation on the symbol-manipulative approach
to the modeling of mind (what Searle calls "Strong AI").

In reply to (2): I don't know what "some of the stranger possibilities
for symbolic entities" are. I take symbol-manipulation to be
syntactic: Symbols are arbitrary tokens manipulated in accordance with
certain formal rules on the basis of their form rather than their meaning.
That's symbolic computation, whether it's done by computer or by
paper-and-pencil. The interpretations of the symbols (and indeed of
the manipulations and their outcomes) are ours, and are not part of
the computation. Informal and figurative meanings of "symbol" have
little to do with this technical concept.

Symbols as arbitrary syntactic tokens in a formal system can be
contrasted with other kinds of objects. The ones I singled out in my
papers were "icons" or analogs of physical objects, as they occur in
the proximal physical input/output in transduction, as they occur in
the A-side of A/D and D/A transformations, and as they may function in
any part of a hybrid system to the extent that their functional role
is not merely formal and syntactic (i.e., to the extent that their
form is not arbitrary and dependent on convention and interpretation
to link it to the objects they "stand for," but rather, the link is
one of physical resemblance and causality).

The category-representation paper proposes an architecture for such a
hybrid system.

Stevan Harnad
princeton!mind!harnad

me@utai.UUCP (Daniel Simon) (10/06/86)

In article <160@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>In reply to (1): The linguistic version of the turing test (turing's
>original version) is restricted to linguistic interactions:
>Language-in/Language-out. The robotic version requires the candidate
>system to operate on objects in the world. In both cases the (turing)
>criterion is whether the system can PERFORM indistinguishably from a human
>being. (The original version was proposed largely so that your
>judgment would not be prejudiced by the system's nonhuman appearance.)
>
I have no idea if this is a relevant issue or a relevant place to bring it up,
but this whole business of the Turing test makes me profoundly suspicious.  For
example, we all know about Weizenbaum's ELIZA, which, he claimed, convinced
many clever, relatively computer-literate (for their day) people that it was 
intelligent.  This fact leads me to some questions which, in my view, ought to 
be seriously addressed before the phrase "Turing test" is bandied about (and 
probably already have been addressed, but I didn't notice, and will thank 
everybody in advance for telling me where to find a treatment of them and 
asking me to kindly buzz off):

	1)  To what extent is our discernment of intelligent behaviour context-
	    dependent?  ELIZA was able to appear intelligent because of the
	    clever choice of context (in a Rogerian therapy session, the kind
	    of dull, repetitive comments made by ELIZA seem perfectly 
	    appropriate, and hence, intelligent).  Mr. Harnad has brought up 
	    the problem of physical appearance as a prejudicing factor in the 
	    assessment of "human" qualities like intelligence.  Might not the 
	    robot version lead to the opposite problem of testers being 
	    insufficiently skeptical of a machine with human appearance (or 
	    even of a machine so unlike a human being in appearance that mildly
	    human-like behaviour takes on an exaggerated significance in the
	    tester's mind)?  Is it ever possible to trust the results of any 
	    instance of the test as being a true indicator of the properties of 
	    the tested entity itself, rather than those of the environment in 
	    which it was tested?

	2)  Assuming that some "neutral" context can be found which would not
	    "distort" the results of the test (and I'm not at all convinced
	    that such a context exists, or even that the idea of such a context
	    has any meaning), what would be so magic about the level of 
	    perceptiveness of the shrewdest, most perspicacious tester
	    available, that would make his inability to distinguish man from 
	    machine in some instance the official criterion by which to judge
	    intelligence?  In short, what does passing (or failing) the Turing
	    test really mean?

	3)  If the Turing test is in fact an unacceptable standard, and 
	    building a machine that can pass it an inappropriate goal (and, as 
	    questions 1 and 2 have probably already suggested, this is what I 
	    strongly suspect), are there more appropriate means by which we 
	    could evaluate the human-like or intelligent properties of an AI 
	    system?  In effect, is it possible to formulate the qualities that 
	    constitute intelligence in a manner which is more intuitively 
	    satisfying than the standard AI stuff about reasoning, but still 
	    more rigorous than the Turing test?

As I said, I don't know if my questions are legitimate, or if they have already
been satisfactorily resolved, or if they belong elsewhere; I merely bring them
up here because this is the first place I have seen the Turing test brought up
in a long time.  I am eager to see what others have to say on the subject.


>Stevan Harnad
>princeton!mind!harnad


					Daniel R. Simon

"Look at them yo-yo's, that's the way to do it
 Ya go to grad school, get your PhD"

drew@ukma.uky.csnet (Andrew Lawson) (10/09/86)

In article <160@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>On my argument the distinction between the two versions is critical,
>because the linguistic version can (in principle) be accomplished by
>nothing but symbols-in/symbols-out (and symbols in between) whereas
>the robotic version necessarily calls for non-symbolic processes
>(transducer, effector, analog and A/D).

This is not clear.  When I look at my surroundings, you are no
more than a symbol (just as is anything outside of my being).
Remember that "symbol" is not rigidly defined most of the time.
  When I recognize the symbol of a car heading toward me, I respond
by moving out of the way.  This is not essentially different from
a linguistic system recognizing a symbol and responding with another
symbol.

-- 
Drew Lawson                             cbosgd!ukma!drew
"Parts is parts."			drew@uky.csnet
					drew@UKMA.BITNET

harnad@mind.UUCP (Stevan Harnad) (10/10/86)

In response to what I wrote in article <160@mind.UUCP>, namely:

>On my argument the distinction between the two versions
>[of the turing test] is critical,
>because the linguistic version can (in principle) be accomplished by
>nothing but symbols-in/symbols-out (and symbols in between) whereas
>the robotic version necessarily calls for non-symbolic processes
>(transducer, effector, analog and A/D).

Drew Lawson replies:

>	This is not clear.  When I look at my surroundings, you are no
>	more than a symbol (just as is anything outside of my being).
>	Remember that "symbol" is not rigidly defined most of the time.
>	When I recognize the symbol of a car heading toward me, I respond
>	by moving out of the way.  This is not essentially different from
>	a linguistic system recognizing a symbol and responding with another
>	symbol.

It's important, when talking about what is and is not a symbol, to
speak literally and not symbolically. What I mean by a symbol is an
arbitrary formal token, physically instantiated in some way (e.g., as
a mark on a piece of paper or the state of a 0/1 circuit in a
machine) and manipulated according to certain formal rules. The
critical thing is that the rules are syntactic, that is, the symbol is
manipulated on the basis of its shape only -- which is arbitrary,
apart from the role it plays in the formal conventions of the syntax
in question. The symbol is not manipulated in virtue of its "meaning."
Its meaning is simply an interpretation we attach to the formal
goings-on. Nor is it manipulated in virtue of a relation of
resemblance to whatever "objects" it may stand for in the outside
world, or in virtue of any causal connection with them. Those
relations are likewise mediated only by our interpretations.
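
To make that concrete, here is a minimal illustrative sketch in Python
(my own hypothetical example, not drawn from the papers): a rewrite
system that shuffles arbitrary tokens purely on the basis of their
shape. Whatever "meaning" the tokens have is supplied by the reader's
interpretation; the program neither has nor consults one.

# Purely syntactic symbol manipulation: tokens are rewritten by shape
# alone, according to formal rules.  "A", "B" and "C" stand for nothing
# as far as the program is concerned.

RULES = [
    (("A", "B"), ("B", "A")),   # swap an adjacent A,B pair
    (("B", "B"), ("C",)),       # contract B,B to C
]

def rewrite_once(tokens):
    """Apply the first rule whose pattern matches, left to right."""
    for pattern, replacement in RULES:
        n = len(pattern)
        for i in range(len(tokens) - n + 1):
            if tuple(tokens[i:i + n]) == pattern:
                return tokens[:i] + list(replacement) + tokens[i + n:]
    return tokens   # no rule applies

if __name__ == "__main__":
    state = ["A", "B", "B", "A"]
    while True:
        print(state)
        new = rewrite_once(state)
        if new == state:
            break
        state = new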

This is why the distinction between symbolic and nonsymbolic processes
in cognition (and robotics) is so important. It will not do to simply
wax figurative on what counts as a symbol. If I'm allowed to use the
word metaphorically, of course everything's a "symbol." But if I stick
to a specific, physically realizable sense of the word, then it
becomes a profound theoretical problem just exactly how I (or any
device) can recognize you, or a car, or anything else, and how I (or it)
can interact with such external objects robotically. And the burden of
my paper is to show that this capacity depends crucially on nonsymbolic
processes.

Finally, apart from the temptation to lapse into metaphor about
"symbols," there is also the everpresent lure of phenomenology in
contemplating such matters. For, apart from my robotic capacity to
interact with objects in the world -- to recognize them, manipulate
them, name them, describe them -- there is also my consciousness: My
subjective sense, accompanying all these capacities, of what it's
like (qualitatively) to recognize, manipulate, etc. That, as I argue
in another paper (and only hint at in the two under discussion), is a
problem that we'd do best to steer clear of in AI, robotics and
cognitive modeling, at least for the time being. We already have our hands
full coming up with a model that can successfully pass the (robotic
and/or linguistic) turing test -- i.e., perform exactly AS IF it had
subjective experiences, the way we do, while it successfully accomplishes
all those clever things. Until we manage that, let's not worry too much
about whether the outcome will indeed be merely "as if." Overinterpreting
our tools phenomenologically is just as unproductive as overinterpreting them
metaphorically.

Stevan Harnad
princeton!mind!harnad

harnad@mind.UUCP (Stevan Harnad) (10/10/86)

In response to my article <160@mind.UUCP>, Daniel R. Simon asks:

>	1)  To what extent is our discernment of intelligent behaviour
>	    context-dependent?...Might not the robot version [of the
>	    turing test] lead to the...problem of testers being 
>	    insufficiently skeptical of a machine with human appearance?
>	    ...Is it ever possible to trust the results of any 
>	    instance of the test...?

My reply to these questions is quite explicit in the papers in
question: The turing test has two components, (i) a formal, empirical one,
and (ii) an informal, intuitive one. The formal empirical component (i)
is the requirement that the system being tested be able to generate human
performance (be it robotic or linguistic). That's the nontrivial
burden that will occupy theorists for at least decades to come, as we
converge on (what I've called) the "total" turing test -- a model that
exhibits all of our robotic and linguistic capacities. The informal,
intuitive component (ii) is that the system in question must perform in a
way that is indistinguishable from the performance of a person, as
judged by a person.

It is not always clear which of the two components a sceptic is
worrying about. It's usually (ii), because who can quarrel with the
principle that a veridical model should have all of our performance
capacities? Now the only reply I have for the sceptic about (ii) is
that he should remember that he has nothing MORE than that to go on in
the case of any other mind than his own. In other words, there is no
rational reason for being more sceptical about robots' minds (if we
can't tell their performance apart from that of people) than about
(other) peoples' minds. The turing test is ALREADY the informal way we
contend with the "other-minds" problem [i.e., how can you be sure
anyone else but you has a mind, rather than merely acting AS IF it had
a mind?], so why should we demand more in the case of robots? It's
surely not because of any intuitive or a priori knowledge we have
about the FUNCTIONAL basis of our own minds, otherwise we could have put
those intuitive ideas to work in designing successful candidates for the
turing test long ago. 

So, since we have absolutely no intuitive idea about the functional
(symbolic, nonsymbolic, physical, causal) basis of the mind, our only
nonarbitrary basis for discriminating robots from people remains their
performance.

As to "context," as I argue in the paper, the only one that is
ultimately defensible is the "total" turing test, since there is no
evidence at all that either capacities or contexts are modular. The
degrees of freedom of a successful total-turing model are then reduced
to the usual underdetermination of scientific theory by data. (It's always
possible to carp at a physicist that his theoretic model of the
universe "is turing-indistinguishable from the real one, but how can
you be sure it's `really true' of the world?")

>	2)  Assuming that some "neutral" context can be found...
>	    what does passing (or failing) the Turing test really mean?

It means you've successfully modelled the objective observables under
investigation. No empirical science can offer more. And the only
"neutral" context is the total turing test (which, like all inductive
contexts, always has an open end, namely, the everpresent possibility
that things could turn out differently tomorrow -- philosophers call
this "inductive risk," and all empirical inquiry is vulnerable to it).

>	3)  ...are there more appropriate means by which we 
>	    could evaluate the human-like or intelligent properties of an AI 
>	    system?  ...is it possible to formulate the qualities that 
>	    constitute intelligence in a manner which is more intuitively 
>	    satisfying than the standard AI stuff about reasoning, but still 
>	    more rigorous than the Turing test?

I don't think there's anything more rigorous than the total turing
test since, when formulated in the suitably generalized way I
describe, it can be seen to be identical to the empirical criterion for
all of the objective sciences. Residual doubts about it come from
four sources, as far as I can make out, and only one of these is
legitimate. The legitimate one (a) is doubts about autonomous
symbolic processes (that's what my papers are about). The three
illegitimate ones (in my view) are (b) misplaced doubts about
underdetermination and inductive risk, (c) misplaced hold-outs for
the nervous system, and (d) misplaced hold-outs for consciousness.

For (a), read my papers. I've sketched an answer to (b) above.

The quick answer to (c) [brain bias] -- apart from the usual
structure/function and multiple-realizability arguments in engineering,
computer science and biology -- is that as one approaches the
asymptotic Total Turing Test, any objective aspect of brain
"performance" that anyone believes is relevant -- reaction time,
effects of damage, effects of chemicals -- is legitimate performance
data too, including microperformance (like pupillary dilation,
heart-rate and perhaps even synaptic transmission). I believe that
sorting out how much of that is really relevant will only amount to the
fine-tuning -- the final leg of our trek to theoretic Utopia,
with most of the substantive theoretical work already behind us.

Finally, my reply to (d) [mind bias] is that holding out for
consciousness is a red herring. Either our functional attempts to
model performance will indeed "capture" consciousness at some point, or
they won't. If we do capture it, the only ones that will ever know for
sure that we've succeeded are our robots. If we don't capture it,
then we're stuck with a second level of underdetermination -- call it
"subjective" underdetermination -- to add to our familiar objective
underdetermination (b): Objective underdetermination is the usual
underdetermination of objective theories by objective data; i.e., there
may be more than one way to skin a cat; we may not happen to have
converged on nature's way in any of our theories, and we'll never be
able to know for sure. The subjective twist on this is that, apart
from this unresolvable uncertainty about whether or not the objective models
that fit all of our objective (i.e., intersubjective) observations capture
the unobservable basis of everything that is objectively observable,
there may be a further unresolvable uncertainty about whether or not
they capture the unobservable basis of everything (or anything) that is
subjectively observable.

AI, robotics and cognitive modeling would do better to learn to live
with this uncertainty and put it in context, rather than holding out
for the un-do-able, while there's plenty of the do-able to be done.

Stevan Harnad
princeton!mind!harnad

cda@entropy.berkeley.edu (10/13/86)

In article <167@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
	<as one approaches the
	<asymptotic Total Turing Test, any objective aspect of brain
	<"performance" that anyone believes is relevant -- reaction time,
	<effects of damage, effects of chemicals -- is legitimate performance
	<data too, including microperformance (like pupillary dilation,
	<heart-rate and perhaps even synaptic transmission). 

Does this mean that in order to successfully pass the Total Turing Test,
a robot will have to be able to get high on drugs?  Does this imply that the
ability of the brain to respond to drugs is an integral component of
intelligence?  What will Ron, Nancy, and the DOD think of this idea?

Turing said that the way to give a robot free will was to incorporate
sufficient randomness into its actions, which I'm sure the DOD won't like
either.

It seems that intelligence is not exactly the quality our government is
trying to achieve in its AI hard and software.

michaelm@bcsaic.UUCP (michael maxwell) (10/14/86)

In article <167@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>...since there is no
>evidence at all that either capacities or contexts are modular.

Maybe I'm reading this out of context (not having read your books or papers),
but could you explain this statement?  I know of lots of evidence for the
modularity of various aspects of linguistic behavior.  In fact, we have a
parser + grammar of English here that captures a large portion of English
syntax, but has absolutely no semantics (yet).  That is, it could parse
Jabberwocky or your article (well, I can't quite claim that it would parse
*all* of either one!) without having the least idea that your article is
meaningful whereas Jabberwocky isn't (apart from an explanation by Humpty
Dumpty). On the other hand, it wouldn't parse something like "book the table
on see I", despite the fact that we might make sense of the latter (because
of our world knowledge).  Likewise, human aphasics often show similar deficits
in one or another area of their speech or language understanding.  If this
isn't modular, what is?  But as I say, maybe I don't understand what you
mean by modular...
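
(For concreteness, a toy sketch -- purely illustrative, not our actual
parser; the lexicon and grammar are invented -- of a recognizer that
checks only word order and category, so it accepts well-formed nonsense
and rejects scrambled strings regardless of their potential meaning.)

LEXICON = {
    "the": "DET", "a": "DET",
    "slithy": "ADJ", "mimsy": "ADJ", "red": "ADJ",
    "toves": "N", "table": "N", "book": "N", "i": "N",
    "gyre": "V", "see": "V", "gimble": "V",
    "on": "P",
}

# Grammar: S -> NP V (NP);  NP -> (DET) (ADJ) N

def parse_np(tags, i):
    if i < len(tags) and tags[i] == "DET":
        i += 1
    if i < len(tags) and tags[i] == "ADJ":
        i += 1
    if i < len(tags) and tags[i] == "N":
        return i + 1
    return None

def accepts(sentence):
    tags = [LEXICON.get(w.lower()) for w in sentence.split()]
    if None in tags:
        return False
    i = parse_np(tags, 0)
    if i is None or i >= len(tags) or tags[i] != "V":
        return False
    j = parse_np(tags, i + 1)
    return (j if j is not None else i + 1) == len(tags)

if __name__ == "__main__":
    print(accepts("the slithy toves gimble"))   # True: well-formed nonsense
    print(accepts("book the table on see I"))   # False: scrambled order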
-- 
Mike Maxwell
Boeing Advanced Technology Center
	...uw-beaver!uw-june!bcsaic!michaelm

franka@mmintl.UUCP (Frank Adams) (10/15/86)

In article <166@mind.UUCP> harnad@mind.UUCP writes:
>What I mean by a symbol is an
>arbitrary formal token, physically instantiated in some way (e.g., as
>a mark on a piece of paper or the state of a 0/1 circuit in a
>machine) and manipulated according to certain formal rules. The
>critical thing is that the rules are syntactic, that is, the symbol is
>manipulated on the basis of its shape only -- which is arbitrary,
>apart from the role it plays in the formal conventions of the syntax
>in question. The symbol is not manipulated in virtue of its "meaning."
>Its meaning is simply an interpretation we attach to the formal
>goings-on. Nor is it manipulated in virtue of a relation of
>resemblance to whatever "objects" it may stand for in the outside
>world, or in virtue of any causal connection with them. Those
>relations are likewise mediated only by our interpretations.

I see two problems with respect to this viewpoint.  One is that relating
purely symbolic functions to external events is essentially a solved
problem.  Digital audio recording, for example, works quite well.  Robotic
operations generally fail, when they do, not because of any problems with
the digital control of an analog process, but because the purely symbolic
portion of the process is inadequate.  In other words, there is every reason
to expect that a computer program able to pass the Turing test could be
extended to one able to pass the robotic version of the Turing test,
requiring additional development effort which is tiny by comparison (though
likely still measured in man-years).

Secondly, even in a purely formal environment, there turn out to be a lot of
real things to talk about.  Primitive concepts of time (before and after)
are understandable.  One can talk about nouns and verbs, sentences and
conversations, self and other.  I don't see any fundamental difference
between the ability to deal with symbols as real objects, and the ability to
deal with other kinds of real objects.

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Multimate International    52 Oakland Ave North    E. Hartford, CT 06108

me@utai.UUCP (Daniel Simon) (10/16/86)

In article <167@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>In response to my article <160@mind.UUCP>, Daniel R. Simon asks:
>
>>	1)  To what extent is our discernment of intelligent behaviour
>>	    context-dependent?...Might not the robot version [of the
>>	    turing test] lead to the...problem of testers being 
>>	    insufficiently skeptical of a machine with human appearance?
>>	    ...Is it ever possible to trust the results of any 
>>	    instance of the test...?
>
>My reply to these questions is quite explicit in the papers in
>question: 
>The turing test has two components, (i) a formal, empirical one,
>and (ii) an informal, intuitive one. The formal empirical component (i)
>is the requirement that the system being tested be able to generate human
>performance (be it robotic or linguistic). That's the nontrivial
>burden that will occupy theorists for at least decades to come, as we
>converge on (what I've called) the "total" turing test -- a model that
>exhibits all of our robotic and linguistic capacities. 

By "nontrivial burden", do you mean the task of defining objective criteria
by which to characterize "human performance"?  If so, you are after the same
thing as I am, but I fail to see what this has to do with the Turing test as
originally conceived, which involved measuring up AI systems against observers'
impressions, rather than against objective standards.  Apparently, you're not
really defending the Turing test at all, but rather something quite different.

Moreover, you haven't said anything concrete about what this test might look 
like.  On what foundation could such a set of defining characteristics for 
"human performance" be based?  Would it define those attributes common to all 
human beings?  Most human beings? At least one human being?  How would we 
decide by what criteria to include observable attributes in our set of "human" 
ones?  How could such attributes be described?  Is such a set of descriptions 
even feasible?  If not, doesn't it call into question the validity of seeking 
to model what cannot be objectively characterized?  And if such a set of 
describable attributes is feasible, isn't it an indispensable prerequisite for 
the building of a working Turing-test-passing model?

Please forgive my impertinent questions, but I haven't read your articles, and
I'm not exactly clear about what this "total" Turing test entails.

>The informal,
>intuitive component (ii) is that the system in question must perform in a
>way that is indistinguishable from the performance of a person, as
>judged by a person.
>
>Now the only reply I have for the sceptic about (ii) is
>that he should remember that he has nothing MORE than that to go on in
>the case of any other mind than his own. In other words, there is no
>rational reason for being more sceptical about robots' minds (if we
>can't tell their performance apart from that of people) than about
>(other) peoples' minds. The turing test is ALREADY the informal way we
>contend with the "other-minds" problem [i.e., how can you be sure
>anyone else but you has a mind, rather than merely acting AS IF it had
>a mind?], so why should we demand more in the case of robots? ...
>
I'm afraid I must disagree.  I believe that people in general dodge the "other
minds" problem simply by accepting as a convention that human beings are by 
definition intelligent.  For example, we use terms such as "autistic", 
"catatonic", and even "sleeping" to describe people whose behaviour would in 
most cases almost certainly be described as unintelligent if exhibited by a 
robot.  Such people are never described as "unintelligent" in the sense of the 
word that we would use to describe a robot who showed the exact same behaviour 
patterns.  Rather, we imply by using these terms that the people being 
described are human, and therefore *would* be behaving intelligently, but for 
(insert neurophysiological/psychological explanation here).  This implicit 
axiomatic attribution of intelligence to humans helps us to avoid not only 
the "other minds" problem, but also the problem of assessing intelligence 
despite the effect of what I previously referred to loosely as the "context" of 
our observations.  In short, we do not really use the Turing test on each 
other, because we are all well acquainted with how easily we can be fooled by 
contextual traps.  Instead, we automatically associate intelligence with human 
beings, thereby making our intuitive judgment even less useful to the AI 
researcher working with computers or robots.

>As to "context," as I argue in the paper, the only one that is
>ultimately defensible is the "total" turing test, since there is no
>evidence at all that either capacities or contexts are modular. The
>degrees of freedom of a successful total-turing model are then reduced
>to the usual underdetermination of scientific theory by data. (It's always
>possible to carp at a physicist that his theoretic model of the
>universe "is turing-indistinguishable from the real one, but how can
>you be sure it's `really true' of the world?")
>
Wait a minute--You're back to component (i).  What you seem to be saying is
that the informal component (component (ii)) has no validity at all apart from
the "context" of having passed component (i).  The obvious conclusion is that
component (ii) is superfluous; any system that passes the "total Turing test"
exhibits "human behaviour", and hence must by definition be indistinguishable
from a human to another human.

>>	2)  Assuming that some "neutral" context can be found...
>>	    what does passing (or failing) the Turing test really mean?
>
>It means you've successfully modelled the objective observables under
>investigation. No empirical science can offer more. And the only
>"neutral" context is the total turing test (which, like all inductive
>contexts, always has an open end, namely, the everpresent possibility
>that things could turn out differently tomorrow -- philosophers call
>this "inductive risk," and all empirical inquiry is vulnerable to it).
>
Again, you have all but admitted that the "total" Turing test you have 
described has nothing to do with the Turing test at all--it is a set of 
"objective observables" which can be verified through scientific examination.
The thoughtful examiner and "comparison human" have been replaced with
controlled scientific experiments and quantifiable results.  What kinds of 
experiments?  What kinds of results?  WHAT DOES THE "TOTAL TURING TEST"
LOOK LIKE?

>>	3)  ...are there more appropriate means by which we 
>>	    could evaluate the human-like or intelligent properties of an AI 
>>	    system?  ...is it possible to formulate the qualities that 
>>	    constitute intelligence in a manner which is more intuitively 
>>	    satisfying than the standard AI stuff about reasoning, but still 
>>	    more rigorous than the Turing test?
>
>I don't think there's anything more rigorous than the total turing
>test since, when formulated in the suitably generalized way I
>describe, it can be seen to be identical to the empirical criterion for
>all of the objective sciences... 

One question you haven't addressed is the relationship between intelligence and
"human performance".  Are the two synonymous?  If so, why bother to make 
artificial humans when making natural ones is so much easier (not to mention
more fun)?  And if not, how does your "total Turing test" relate to the
discernment of intelligence, as opposed to human-like behaviour?

I know, I know.  I ask a lot of questions.  Call me nosy.
>
>
>Stevan Harnad
>princeton!mind!harnad

					Daniel R. Simon

"We gotta install database systems 
 Custom software delivery
 We gotta move them accounting programs
 We gotta port them all to PC's...."

harnad@mind.UUCP (Stevan Harnad) (10/16/86)

In reply to a prior iteration D. Simon writes:

>	I fail to see what [your "Total Turing Test"] has to do with
>	the Turing test as originally conceived, which involved measuring
>	up AI systems against observers' impressions, rather than against
>	objective standards... Moreover, you haven't said anything concrete
>	about what this test might look like.

How about this for a first approximation: We already know, roughly
speaking, what human beings are able to "do" -- their total cognitive
performance capacity: They can recognize, manipulate, sort, identify and
describe the objects in their environment and they can respond and reply
appropriately to descriptions. Get a robot to do that. When you think
he can do everything you know people can do formally, see whether
people can tell him apart from people informally.
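
As a purely hypothetical rendering of my own (not a definition from the
papers), that first approximation could even be written down as a
capacity checklist -- the formal component asks whether a candidate has
all of these capacities, and the informal component asks whether people
can tell its exercise of them apart from a person's:

# Hypothetical sketch of the Total Turing Test as a capacity checklist.
# Method names and signatures are invented for illustration only.

from abc import ABC, abstractmethod

class TotalTuringCandidate(ABC):
    # --- robotic (nonsymbolic) capacities ---
    @abstractmethod
    def recognize(self, scene): ...        # pick out objects in a scene
    @abstractmethod
    def manipulate(self, obj, action): ... # act on objects in the world
    @abstractmethod
    def sort(self, objects): ...           # group confusable alternatives
    # --- linguistic (symbolic) capacities, grounded in the above ---
    @abstractmethod
    def identify(self, obj): ...           # name the object
    @abstractmethod
    def describe(self, obj): ...           # describe objects and states of affairs
    @abstractmethod
    def reply(self, utterance): ...        # respond appropriately to descriptions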

>	I believe that people in general dodge the "other minds" problem
>	simply by accepting as a convention that human beings are by 
>	definition intelligent.

That's an artful dodge indeed. And do you think animals also accept such
conventions about one another? Philosophers, at least, seem to
have noticed that there's a bit of a problem there. Looking human
certainly gives us the prima facie benefit of the doubt in many cases,
but so far nature has spared us having to contend with any really
artful imposters. Wait till the robots begin giving our lax informal
turing-testing a run for its money.

>	What you seem to be saying is that [what you call] 
>	the informal component [(ii) of the turing test --
>	i.e., indistinguishability from a person, as judged by a
>	person] has no validity at all apart from the "context" of
>	having passed [your] component (i) [i.e., the generation of
>	our total cognitive performance capacity]. The obvious
>	conclusion is that component (ii) is superfluous.

It's no more superfluous than, say, the equivalent component in the
design of an artificial music composer. First you get it to perform in
accordance with what you believe to be the formal rules of (diatonic)
composition. Then, when it successfully performs according to the
rules, see whether people like its stuff. Peoples' judgments, after
all, were not only the source of those rules in the first place, but
without the informal aesthetic sense that guided them, the rules would
amount to just that -- meaningless acoustic syntax.

Perhaps another way of putting it is that I doubt that what guides our
informal judgments (and underlies our capacities) can be completely
formalized in advance. The road to Total-Turing Utopia will probably
be a long series of feedback cycles between the formal and informal
components of the test before we ever achieve our final passing grade.

>	One question you haven't addressed is the relationship between
>	intelligence and "human performance". Are the two synonymous?
>	If so, why bother to make artificial humans... And if not, how
>	does your "total Turing test" relate to the discernment of
>	intelligence, as opposed to human-like behaviour?

Intelligence is what generates human performance. We make artificial
humans to implement and test our theories about the substrate of human
performance capacity. And there's no objective difference between
human and (turing-indistinguishably) human-like.

>	WHAT DOES THE "TOTAL TURING TEST" LOOK LIKE?...  Please
>	forgive my impertinent questions, but I haven't read your
>	articles, and I'm not exactly clear about what this "total"
>	Turing test entails.

Try reading the articles.

			******

I will close with an afterthought on "blind" vs. "nonblind" turing
testing that I had after the last iteration:

In the informal component of the total turing test it may be
arguable that a sceptic would give a robot a better run for its money
if he were pre-alerted to the possibility that it was a robot (i.e., if the
test were conducted "nonblind" rather than "blind"). That way the robot
wouldn't be inheriting so much of the a priori benefit of the doubt that
had accrued from our lifetime of successful turing-testing of biological
persons of similar appearance (in our everyday informal solutions to
the "other-minds" problem). The blind/nonblind issue does not seem critical
though, since obviously the turing test is an open-ended one (and
probably also, like all other empirical conjectures, confirmable only
as a matter of degree); so we probably wouldn't want to make up our minds
too hastily in any case. I would say that several years of having lived
amongst us, as in the sci-fi movies, without arousing any suspicions -- and
eliciting only shocked incredulity from its close friends once the truth about
its roots was revealed -- would count as a pretty good outcome on a "blind"
total turing test.

Stevan Harnad
princeton!mind!harnad

harnad@mind.UUCP (Stevan Harnad) (10/16/86)

In reply to the following by me in <167@mind.UUCP>:

>	there is no evidence at all that
>	either capacities or contexts are modular.

michaelm@bcsaic.UUCP (michael maxwell) writes:

>> Maybe I'm reading this out of context (not having read your books or papers),
>> but could you explain this statement?  I know of lots of evidence for the
>> modularity of various aspects of linguistic behavior.  In fact, we have a
>> parser + grammar of English here that captures a large portion of English
>> syntax, but has absolutely no semantics (yet).

I'm afraid this extract is indeed a bit out of context. The original
context concerned what I've dubbed the "Total Turing Test," one
in which ALL of our performance capacities -- robotic and linguistic --
are "captured." In the papers under discussion I described several
arguments in favor of the Total Turing Test over any partial
turing test, such as "toy" models that only simulate a small
chunk of our cognitive performance capacity, or even the (subtotal)
linguistic ("teletype") version of the Total Turing Test. These
arguments included:

(3) The "Convergence Argument" that `toy' problems are arbitrary,
that they have too many degrees of freedom, that the d.f. shrink as the
capacities of the toy grow to life-size, and that the only version that
reduces the underdetermination to the normal proportions of a
scientific theory is the `Total' one.

(5) The "Nonmodularity Argument" that no subtotal model constitutes a
natural module (insofar as the turing test is concerned); the only
natural autonomous modules are other organisms, with their complete
robotic capacities (more of this below).

(7) The "Robotic Functionalist Argument" that the entire symbolic
functional level is no macromodule either, and needs to be grounded
in robotic function.

I happen to have views on the "autonomy of syntax" (which is of
course the grand-daddy of the current modulo-mania), but they're not
really pertinent to the total vs modular turing-test issue. Perhaps
the only point about an autonomous parser that is relevant here is
that it is in the nature of the informal, intuitive component of the
turing test that lifeless fragments of mimicry (such as Searle's isolated
`thirst' module) are not viable; they simply fail to convince us of
anything. And rightly so, I should think; otherwise the turing test
would be a pretty flimsy one.

Let me add, though, that even "convincing" autonomous parsing performance
(in the non-turing sense of convincing) seems to me to be rather weak
evidence for the psychological reality of a syntactic module -- let
alone that it has a mind. (On my theory, semantic performance has to be
grounded in robotic performance and syntactic performance must in turn
be grounded in semantic performance.)

Stevan Harnad
(princeton!mind!harnad)

greid@adobe.UUCP (10/17/86)

It seems to me that the idea of concocting a universal Turing test is sort
of useless.

Consider, for a moment, monsters.  There have been countless monsters on TV
and film that have had varying degrees of human-ness, and as we watch the
plot progress, we are sort of administering the Turing test.  Some of the
better aliens, like in "Blade Runner", are very difficult to detect as being
non-human.  However, given enough time, we will eventually notice that they
don't sleep, or that they drink motor oil, or that they don't bleed when
they are cut (think of "Terminator" and surgery for a minute), and we start
to think of alternative explanations for the aberrances we have noticed.  If
we are watching TV, we figure it is a monster.  If we are walking down the
street and we see somebody get their arm cut off and they don't bleed, we
think *we* are crazy (or we suspect "special effects" and start looking for
the movie camera), because there is no other plausible explanation.

There are even human beings whom we question when one of our subconscious
"tests" fails--like language barriers, brain damage, etc.  If you think
about it, there are lots of human beings who would not pass the Turing test.

Let's forget about it.

Glenn Reid
Adobe Systems

Adobe claims no knowledge of anything in this message.

harnad@mind.UUCP (Stevan Harnad) (10/18/86)

In response to some of the arguments in favor of the robotic over the
symbolic version of the turing test in (the summaries of) my articles
"Minds, Machines and Searle" and "Category Induction and Representation"
franka@mmintl.UUCP (Frank Adams) replies:

>	[R]elating purely symbolic functions to external events is
>	essentially a solved problem. Digital audio recording, for
>	example, works quite well. Robotic operations generally fail,
>	when they do, not because of any problems with the digital
>	control of an analog process, but because the purely symbolic
>	portion of the process is inadequate. In other words, there is
>	every reason to expect that a computer program able to pass the
>	[linguistic version of the] Turing test could be extended to one
>	able to pass the robotic version...requiring additional development
>	effort which is tiny by comparison (though likely still measured
>	in man-years).

This argument has become quite familiar to me from delivering the oral
version of the papers under discussion. It is the "Triviality of
Transduction [A/D conversion, D/A conversion, Effectors] Argument" (TT
for short).

Among my replies to TT the central one is the principled
Antimodularity Argument: There are reasons to believe that the neat
partitioning of function into autonomous symbolic and nonsymbolic modules
may break down in the special case of mind modeling. These reasons
include my "Groundedness" Argument: that unless cognitive symbols are
grounded (psychophysically, bottom-up) in nonsymbolic processes they remain
meaningless. (This amounts to saying that we must be intrinsically
"dedicated" devices and that our A/D and our "decryption/encryptions"
are nontrivial; in passing, this is also a reply to Searle's worries
about "intrinsic" versus "derived" intentionality. It may also be the
real reason why "the purely symbolic portion of the process is inadequate"!)
This problem of grounding symbolic processes in nonsymbolic ones in the
special case of cognition is also the motivation for the material on category
representation.

Apart from nonmodularity and groundedness, other reasons include:

(1) Searle's argument itself, and the fact that only the transduction
argument can block it; that's some prima facie ground for believing
that the TT may be false in the special case of mind-modeling.

(2) The triviality of ordinary (nonbiological) transduction and its
capabilities, compared to what organisms with senses (and minds) can
do. (Compare the I/O capacities of "audio" devices with those of
"auditory" ones; the nonmodular road to the capacity to pass the total
turing test suggests that we are talking here about qualitative
differences, not quantitative ones.)

(3) Induction (both ontogenetic and phylogenetic) and inductive capacity
play an intrinsic and nontrivial role in bio-transduction that they do
not play in ordinary engineering peripherals, or the kinds of I/O
problems these have been designed for.

(4) Related to the Simulation/Implementation Argument: There are always
more real-world contingencies than can be anticipated in a symbolic
description or simulation. That's why category representations are
approximate and the turing test is open-ended.

For all these reasons, I believe that Object/Symbol conversion in
cognition is a considerably more profound problem than ordinary A/D;
orders of magnitude more profound, in fact, and hence that TT is
false.

>	[E]ven in a purely formal environment, there turn out to be a
>	lot of real things to talk about. Primitive concepts of time
>	(before and after) are understandable. One can talk about nouns
>	and verbs, sentences and conversations, self and other. I don't
>	see any fundamental difference between the ability to deal with
>	symbols as real objects, and the ability to deal with other kinds
>	of real objects.

I don't completely understand the assumptions being made here. (What
is a "purely formal environment"? Does anyone you know live in one?)
Filling in with some educated guesses here, I would say that again the
Object/Symbol conversion problem in the special case of organisms'
mental capacities is being vastly underestimated. Object-manipulation
(including discrimination, categorization, identification and
description) is not a mere special case of symbol-manipulation or
vice-versa. One must be grounded in the other in a principled way, and
the principles are not yet known.

On another interpretation, perhaps you are talking about "deixis" --
the necessity, even in the linguistic (symbolic) version of the turing
test, to be able to refer to real objects in the here-and-now. I agree that
this is a deep problem, and conjecture that its solution in the
symbolic version will have to draw on anterior nonsymbolic (i.e.,
robotic) capacities.

Stevan Harnad
princeton!mind!harnad

harnad@mind.UUCP (Stevan Harnad) (10/19/86)

greid@adobe.UUCP (Glenn Reid) writes:

>	[C]oncocting a universal Turing test is sort of useless... There
>	have been countless monsters on TV...[with] varying degrees of
>	human-ness...Some...very difficult to detect as being non-human.
>	However, given enough time, we will eventually notice that they
>	don't sleep, or that they drink motor oil...

The objective of the turing test is to judge whether the candidate
has a mind, not whether it is human or drinks motor oil. We must
accordingly consult our intuitions as to what differences are and are
not relevant to such a judgment. [Higher animals, for example, have no
trouble at all passing the animal version of the turing test as far
as I'm concerned. Why should aliens, monsters or robots have any more trouble, if they have what
it takes in the relevant respects? As I have argued before, turing-testing
for relevant likeness is really our only way of contending with the
"other-minds" problem.]

>	[T]here are lots of human beings who would not pass the Turing
>	test [because of brain damage, etc.].

And some of them may not have minds. But we give them the benefit of
the doubt for humanitarian reasons anyway.

Stevan Harnad
(princeton!mind!harnad)

rggoebel@watdragon.UUCP (Randy Goebel LPAIG) (10/19/86)

Stevan Harnad writes:
> ...The objective of the turing test is to judge whether the candidate
> has a mind, not whether it is human or drinks motor oil.

This stuff is getting silly.  I doubt that it is possible to test whether
something has a mind, unless you provide a definition of what you believe
a mind is.  Turing's test wasn't a test for whether or not some artificial
or natural entity had a mind. It was his prescription for an evaluation of
intelligence.

harnad@mind.UUCP (Stevan Harnad) (10/20/86)

rggoebel@watdragon.UUCP (Randy Goebel LPAIG) replies:

>	I doubt that it is possible to test whether something has a mind,
>	unless you provide a definition of what you believe a mind is.
>	Turing's test wasn't a test for whether or not some artificial
>	or natural entity had a mind. It was his prescription for an
>	evaluation of intelligence.

And what do you think "having intelligence" is? Turing's criterion
effectively made it: having performance capacity that is indistinguishable
from human performance capacity. And that's all "having a mind"
amounts to (by this objective criterion). There's no "definition" in
any of this, by the way. We'll have definitions AFTER we have the
functional answers about what sorts of devices can and cannot do what
sorts of things, and how and why. For the time being all you have is a
positive phenomenon -- having a mind, having intelligence -- and
an objective and intuitive criterion for inferring its presence in any
other case than one's own. (In your own case you presumably know what
it's like to have-a-mind/have-intelligence on subjective grounds.)

Stevan Harnad
princeton!mind!harnad

michaelm@bcsaic.UUCP (10/20/86)

In article <1862@adobe.UUCP> greid@adobe.UUCP (Glenn Reid) writes:
>...If you think
>about it, there are lots of human beings who would not pass the Turing test.

He must mean me, on a Monday morning :-)
-- 
Mike Maxwell
Boeing Advanced Technology Center
	...uw-beaver!uw-june!bcsaic!michaelm

michaelm@bcsaic.UUCP (10/21/86)

>Stevan Harnad writes:
> ...The objective of the turing test is to judge whether the candidate
> has a mind, not whether it is human or drinks motor oil.

In a related vein, if I recall my history correctly, the Turing test has been
applied several times in history.  One occasion was the encounter between the
New World and the Old.  I believe there was considerable speculation on the
part of certain European groups (fueled, one imagines, by economic motives) as
to whether the American Indians had souls.  The (Catholic) church ruled that 
they did, effectively putting an end to the controversy.  The question of
whether they had souls was the historical equivalent to the question of
whether they had mind and/or intelligence, I suppose.

I believe the Turing test was also applied to orangutans, although I don't
recall the details (except that the orangutans flunked).

As an interesting thought experiment, suppose a Turing test were done with a
robot made to look like a human, and a human being who didn't speak English--
both over a CCTV, say, so you couldn't touch them to see which one was soft,
etc.  What would the robot have to do in order to pass itself off as human?
-- 
Mike Maxwell
Boeing Advanced Technology Center
	...uw-beaver!uw-june!bcsaic!michaelm

lishka@uwslh.UUCP (a) (10/21/86)

In article <5@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>rggoebel@watdragon.UUCP (Randy Goebel LPAIG) replies:
>
>>	I doubt that it is possible to test whether something has a mind,
>>	unless you provide a definition of what you believe a mind is.
>>	Turing's test wasn't a test for whether or not some artificial
>>	or natural entity had a mind. It was his prescription for an
>>	evaluation of intelligence.
>
>And what do you think "having intelligence" is? Turing's criterion
>effectively made it: having performance capacity that is indistinguishable
>from human performance capacity. And that's all "having a mind"
>amounts to (by this objective criterion). There's no "definition" in
>any of this, by the way. We'll have definitions AFTER we have the
>functional answers about what sorts of devices can and cannot do what
>sorts of things, and how and why. For the time being all you have is a
>positive phenomenon -- having a mind, having intelligence -- and
>an objective and intuitive criterion for inferring its presence in any
>other case than one's own. (In your own case you presumably know what
>it's like to have-a-mind/have-intelligence on subjective grounds.)
>
>Stevan Harnad

	How does one go about testing for something when one does not know
what that something is?  My basic problem with all this is the two
keywords 'mind' and 'intelligence'.  I don't think that what S. Harnad
is talking about when referring to 'mind' and 'intelligence' is what
I believe 'mind' and 'intelligence' are, and I presume others are having
this problem (see first article above).
	I think a fair example is trying to 'test' for UFO's.  How does one
do this if (a) we don't know what they are and (b) we don't really know if
they exist (is it the same thing with magnetic monopoles?).  What are we really
testing for in the case of UFO's?  I think this answer is a little more 
clear than for 'mind', because people generally seem to have an idea of
what a UFO is (an Unidentified Flying Object).  Therefore, the minute we
come across something really strange that falls from the sky and can in
no way be identified we label it a UFO (and then try to explain it somehow).
However, until this happens (and whether this has already happened depends
on what you believe) we can't test specifically for UFO's [at least from
how I look at it].
	How then does one test for 'mind' or 'intelligence'?  These
definitions are even less clear.  Ask a particular scientist what he thinks
is 'mind' and 'intelligence', and then ask another.  Chances are that their
definitions will be different.  Now ask a Christian and a Buddhist.  These
answers will be even more different.  However, I don't think any one will
be more valid than the other.  Now, if one is to define 'mind' before
testing for it, then everyone will have a pretty good idea of what he is
testing for.  But if one refuses to define it, there are going to be a 
h*ll of a lot of arguments (as it seems there already have been in this
discussion).  The same works for intelligence.
	I honestly don't see how one can apply the Total Turing Test,
because the minute one finds a fault, the test has failed.  In fact, even
if the person who created the 'robot' realizes somehow that his creation
is different, then for me the test fails.  But this has all been discussed
before.  However, trying to use 'intelligence' or having a 'mind' as one
of the criteria for this test when one expects to arrive at a useful 
definition "along the way" seems to be sort of silly (from my point of
view).  
	I speak only for myself.  I do think, though, that the above reasons
have contributed to what has become more a fight of basic beliefs than 
anything else.  I will also add my vote that this discussion move away from
'the Total Turing Test' and continue on to something a little less "talked
into the dirt".
					Chris Lishka
					Wisconsin State Lab of Hygiene

	[qualifier: nothing above reflects the views of my employers,
		    although my pets may be in agreement with these views]

harnad@mind.UUCP (Stevan Harnad) (10/22/86)

lishka@uwslh.UUCP (Chris Lishka) asks:


>	How does one go about testing for something when one does not know
>	what that something is?  My basic problem with all this 
>	[discussion about the Total Turing Test] is the two
>	keywords 'mind' and 'intelligence'. I don't think that what S. Harnad
>	is talking about when referring to 'mind' and 'intelligence' is what
>	I believe 'mind' and 'intelligence' are, and I presume others are
>	having this problem...

You bet others are having this problem. It's called the "other minds"
problem: How can you know whether anyone/anything else but you has a mind?

>	Now, if one is to define 'mind' before testing for it, then
>	everyone will have a pretty good idea of what he is testing for.

What makes people think that the other-minds problem will be solved or
simplified by definitions? Do you need a definition to know whether
YOU have a mind or intelligence? Well then take the (undefined)
phenomenon that you know is true of you to be what you're trying to
ascertain about robots (and other people). What's at issue here is not the
"definition" of what that phenomenon is, but whether the Total Turing
Test is the appropriate criterion for inferring its presence in entities
other than yourself.

[I don't believe, by the way, that empirical science or even
mathematics proceeds "definition-first." First you test for the
presence and boundary conditions of a phenomenon (or, in mathematics,
you test whether a conjecture is true), then you construct and test
a causal explanation (or, in mathematics, you do a formal proof), THEN
you provide a definition, which usually depends heavily on the nature
of the explanatory theory (or proof) you've come up with.]

Stevan Harnad
princeton!mind!harnad

harnad@mind.UUCP (Stevan Harnad) (10/23/86)

michaelm@bcsaic.UUCP (michael maxwell) writes:

>	I believe the Turing test was also applied to orangutans, although
>	I don't recall the details (except that the orangutans flunked)...
>	As an interesting thought experiment, suppose a Turing test were done
>	with a robot made to look like a human, and a human being who didn't
>	speak English-- both over a CCTV, say, so you couldn't touch them to
>	see which one was soft, etc. What would the robot have to do in order
>	to pass itself off as human?

They should all three in principle have a chance of passing. For the orang,
we would need to administer the ecologically valid version of the
test. (I think we have reasonably reliable cross-species intuitions
about mental states, although they're obviously not as sensitive as
our intraspecific ones, and they tend to be anthropocentric and
anthropomorphic -- perhaps necessarily so; experienced naturalists are
better at this, just as cross-cultural ethnographic judgments depend on
exposure and experience.) We certainly have no problem in principle with
foreign speakers (the remarkable linguist, polyglot and bible-translator
Kenneth Pike has a "magic show" in which, after less than an hour of "turing"
interactions with a speaker of any of the [shrinking] number of languages he
doesn't yet know, they are babbling mutually intelligibly before your very
eyes), although most of us may have some problems in practice with such a
feat, at least, without practice.

Severe aphasics and mental retardates may be tougher cases, but there
perhaps the orang version would stand us in good stead (and I don't
mean that disrespectfully; I have an extremely high regard for the mental
states of our fellow creatures, whether human or nonhuman).

As to the robot: Well that's the issue here, isn't it? Can it or can it not
pass the appropriate total test that its appropriate non-robot counterpart
(be it human or ape) can pass? If so, it has a mind, by this criterion (the
Total Turing Test). I certainly wouldn't dream of flunking either a human or
a robot just because he/it didn't feel soft, if his/its total performance
was otherwise turing indistinguishable.

Stevan Harnad
princeton!mind!harnad
harnad%mind@princeton.csnet

freeman@spar.SPAR.SLB.COM (Jay Freeman) (10/24/86)

Possibly a more interesting test would be to give the computer
direct control of the video bit map and let it synthesize an
image of a human being.

harnad@mind.UUCP (Stevan Harnad) (10/26/86)

freeman@spar.UUCP (Jay Freeman) replies:

>	Possibly a more interesting test [than the robotic version of
>	the Total Turing Test] would be to give the computer
>	direct control of the video bit map and let it synthesize an
>	image of a human being.

Manipulating digital "images" is still only symbol-manipulation. It is
(1) the causal connection of the transducers with the objects of the
outside world, including (2) any physical "resemblance" the energy
pattern on the transducers may have to the objects from which they
originate, that distinguishes robotic functionalism from symbolic
functionalism and that suggests a solution to the problem of grounding
the otherwise ungrounded symbols (i.e., the problem of "intrinsic vs.
derived intentionality"), as argued in the papers under discussion.

A third reason why internally manipulated bit-maps are not a new way
out of the problems with the symbolic version of the turing test is
that (3) a model that tries to explain the functional basis of our
total performance capacity already has its hands full with anticipating
and generating all of our response capacities in the face of any
potential input contingency (i.e., passing the Total Turing Test)
without having to anticipate and generate all the input contingencies
themselves. In other words, it's enough of a problem to model the mind
and how it interacts successfully with the world without having to 
model the world too.

Stevan Harnad
{seismo, packard, allegra} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

freeman@spar.UUCP (10/27/86)

In article <12@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>freeman@spar.UUCP (Jay Freeman) replies:
>
>>	Possibly a more interesting test [than the robotic version of
>>	the Total Turing Test] would be to give the computer
>>	direct control of the video bit map and let it synthesize an
>>	image of a human being.
>
> Manipulating digital "images" is still only symbol-manipulation. [...]

Very well, let's equip the robot with an active RF emitter so
it can jam the camera's electronics and impose whatever bit map it
wishes, whether the camera likes it or not.  Too silly?  Very well,
let's design a robot in the shape of a back projector, and let it
create internally whatever representation of a human being it wishes
the camera to see, and project it on its screen for the camera to
pick up.  Such a robot might do a tolerable job of interacting with
other parts of the "objective" world, using robot arms and whatnot
of more conventional design, so long as it kept them out of the
way of the camera.  Still too silly?  Very well, let's create a
vaguely anthropomorphic robot and equip its external surfaces with
a complete covering of smaller video displays, so that it can
achieve the minor details of human appearance by projection rather
than by mechanical motion.  (We can use a crude electronic jammer to
limit the amount of detail that the camera can see, if necessary.)
Well, maybe our model shop is good enough to do most of the details
of the robot convincingly, so we'll only have to project subtle
details of facial expression.  Maybe just the eyes.

Slightly more seriously, if you are going to admit the presence of
electronic or mechanical devices between the subject under test and
the human to be fooled, you must accept the possibility that the test
subject will be smart enough to detect their presence and exploit their
weaknesses.  Returning to a more facetious tone, consider a robot that
looks no more anthropomorphic than your vacuum cleaner, but that is
possessed of moderate manipulative abilities and a good visual perceptive
apparatus, and furthermore, has a Swiss Army knife.

Before the test commences, the robot sneakily rolls up to the
camera and removes the cover.  It locates the connections for the
external video output, and splices in a substitute connection to 
an external video source which it generates.  Then it replaces the
camera cover, so that everything looks normal.  And at test time,
the robot provides whatever image it wants the testers to see.

A dumb robot might have no choice but to look like a human being
in order to pass the test.  Why should a smart one be so constrained?


					-- Jay Freeman

michaelm@bcsaic.UUCP (michael maxwell) (10/28/86)

In article <10@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>michaelm@bcsaic.UUCP (me) wrote:
>
>>	As an interesting thought experiment, suppose a Turing test were done
>>	with a robot made to look like a human, and a human being who didn't
>>	speak English-- both over a CCTV, say, so you couldn't touch them to
>>	see which one was soft, etc. What would the robot have to do in order
>>	to pass itself off as human?
>
>...We certainly have no problem in principle with
>foreign speakers (the remarkable linguist, polyglot and bible-translator
>Kenneth Pike has a "magic show" in which, after less than an hour of "turing"
>interactions with a speaker of any of the [shrinking] number of languages he
>doesn't yet know, they are babbling mutually intelligibly before your very
>eyes), although most of us may have some problems in practice with such a
>feat, at least, without practice.

Yes, you can do (I have done) such "magic shows" in which you begin to learn a
language using just gestures + what you pick up of the language as you go
along.  It helps to have some training in linguistics, particularly field
methods.  The Summer Institute of Linguistics (of which Pike is President
Emeritus) gives such classes.  After one semester you too can give a magic
show!

I guess what I had in mind for the revised Turing test was not using language
at all--maybe I should have eliminated the sound link (and writing).  What
in the way people behave (facial expressions, body language etc.) would cue
us to the idea that one is a human and the other a robot?  What if you showed
pictures to the examinees--perhaps beautiful scenes, and revolting ones?  This
is more a test for emotions than for mind (Mr. Spock would probably fail).
But I think that a lot of what we think of as human is tied up in this
nonverbal/emotional level.

BTW, I doubt whether the number of languages Pike doesn't yet know is shrinking
because of these monolingual demonstrations (aka "magic shows") he's doing.  After the
tenth language, you tend to forget what the second or third language was--
much less what you learned!
-- 
Mike Maxwell
Boeing Advanced Technology Center
	...uw-beaver!uw-june!bcsaic!michaelm

me@utai.UUCP (10/30/86)

In article <1@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>In reply to a prior iteration D. Simon writes:
>
>>	I fail to see what [your "Total Turing Test"] has to do with
>>	the Turing test as originally conceived, which involved measuring
>>	up AI systems against observers' impressions, rather than against
>>	objective standards... Moreover, you haven't said anything concrete
>>	about what this test might look like.
>
>How about this for a first approximation: We already know, roughly
>speaking, what human beings are able to "do" -- their total cognitive
>performance capacity: They can recognize, manipulate, sort, identify and
>describe the objects in their environment and they can respond and reply
>appropriately to descriptions. Get a robot to do that. When you think
>he can do everything you know people can do formally, see whether
>people can tell him apart from people informally.
>
"respond and reply appropriately to descriptions".  Very nice.  Should be a
piece of cake to formalize--especially once you've formalized recognition,
manipulation, identification, and description (and, let's face it, any dumb
old computer can sort).  This is precisely what I was wondering when I asked
you what this total Turing test looks like.  Apparently, you haven't the
foggiest idea, except that it would test roughly the same things that the
old-fashioned, informal, does-it-look-smart-or-doesn't-it Turing test checks.

In fact, none of the criteria you have described above seems definable in any
sense other than by reference to standard Turing test results ("gee, it sure 
classified THAT element the way I would've!").  And if you WERE to define the
entire spectrum of human behaviour in an objective fashion ("rule 1:  
answering, 'splunge!' to any question is hereby defined as an 'appropriate 
reply'"), how would you determine whether the objective definition is useful?  
Why, build a robot embodying it, and see if people consider it intelligent, of
course!  The illusion of a "total" Turing test, distinct from the 
old-fashioned, subjective variety, thus vanishes in a puff of empiricism.

And forget the well-that's-the-way-Science-does-it argument.  It won't wash
--see below.

>>	I believe that people in general dodge the "other minds" problem
>>	simply by accepting as a convention that human beings are by 
>>	definition intelligent.
>
>That's an artful dodge indeed. And do you think animals also accept such
>conventions about one another? Philosophers, at least, seem to
>have noticed that there's a bit of a problem there. Looking human
>certainly gives us the prima facie benefit of the doubt in many cases,
>but so far nature has spared us having to contend with any really
>artful imposters. Wait till the robots begin giving our lax informal
>turing-testing a run for its money.
>
I haven't a clue whether animals think, or whether you think, for that matter.  
This is precisely my point.  I don't believe we humans have EVER solved the 
"other minds" problem, or have EVER used the Turing test, even to try to 
resolve the question of whether there exist "other minds".  The fact that you 
would like us to have done so, thus giving you a justification for the use of 
the (informal part of) the Turing test (and the subsequent implicit basing of 
the formal part on the informal part--see above), doesn't make it so.

This is where your scientific-empirical model for developing the "total"
Turing test out of the original falls down.  Let's examine the development of
a typical scientific concept:  You have some rough, intuitive observations of
phenomena (gravity, stars, skin).  You take some objects whose properties
you believe you understand (rocks, telescopes, microscopes), let them interact
with your vaguely observed phenomenon, and draw more rigorous conclusions based 
on the recorded results of these experimental interactions.

Now, let's examine the Turing test in that light:  we take possibly-intelligent 
robot R, whose properties are fairly well understood, and sit it in front of 
person P, whose properties are something of a cipher to us.  We then have them 
interact, and get a reading off person P (such as, "yup, shore is smart", or, 
"nope, dumb as a tree").  Now, what properties are being scientifically 
investigated here?  They can't have anything to do with robot R--we assume that
R's designer, Dr. Rstein, already has a fairly good idea what R is about.  
Rather, it appears as though you are discerning those attributes of people 
which relate to their judgment of intelligence in other objects.  Of course, it 
might well turn out that something productive comes out of this, but it's also 
quite possible (and I conjecture that it's actually quite likely) that what you 
get out of this is some scientific law such as, "anything which is physically 
indistinguishable from a human being and can mutter something that sounds like 
person P's language is intelligent; anything else is generally dumb, but 
possibly intelligent, depending on the decoration of the room and the drug 
content of P's bloodstream at the time of the test".  In short, my worries 
about the context-dependence and subjective quality of the results have not 
disappeared in a puff of empiricism; they loom as large as ever.

>
>>	WHAT DOES THE "TOTAL TURING TEST" LOOK LIKE?...  Please
>>	forgive my impertinent questions, but I haven't read your
>>	articles, and I'm not exactly clear about what this "total"
>>	Turing test entails.
>
>Try reading the articles.
>
Well, not only did I consider this pretty snide, but when I sent you mail 
privately, asking politely where I can find the articles in question, I didn't 
even get an answer, snide or otherwise.  So starting with this posting, I 
refuse to apologize for being impertinent.  Nyah, nyah, nyah.

>
>
>Stevan Harnad
>princeton!mind!harnad

						Daniel R. Simon

"sorry, no more quotations"
		-D. Simon

harnad@mind.UUCP (Stevan Harnad) (11/01/86)

Jay Freeman (freeman@spar.UUCP) had, I thought, joined the 
ongoing discussion about the robotic version of the Total Turing Test
to address the questions that were raised in the papers under
discussion, namely: (1) Do we have any basis for contending with the
"other minds problem" -- whether in other people, animals or machines
-- other than turing-indistinguishable performance capacity? (2) Is
the teletype version of the turing test -- which allows only
linguistic (i.e., symbolic) interactions -- a strong enough test? (3)
Could even the linguistic version alone be successfully passed by
any device whose symbolic functions were not "grounded" in
nonsymbolic (i.e., robotic) function? (4) Are transduction, analog
representations, A/D conversion, and effectors really trivial in this
context, or is there a nontrivial hybrid function, grounding symbolic
representation in nonsymbolic representation, that no one has yet
worked out?

When Freeman made his original suggestion that the symbolic processor
could have access to the robotic transducer's bit-map, I thought he
was making the sophisticated (but familiar) point that once the
transducer representation is digitized, it's symbolic all the way.
(This is a variant of the "transduction-is-trivial" argument.) My
prior reply to Freeman (about simulated models of the world, modularity,
etc.) was addressed to this construal of his point. But now I see that
he was not making this point at all, for he replies:

>	... let's equip the robot with an active RF emitter so
>	it can jam the camera's electronics and impose whatever bit map it
>	wishes...  design a robot in the shape of a back projector, and let it
>	create internally whatever representation of a human being it wishes
>	the camera to see, and project it on its screen for the camera to
>	pick up.  Such a robot might do a tolerable job of interacting with
>	other parts of the "objective" world, using robot arms and whatnot
>	of more conventional design, so long as it kept them out of the
>	way of the camera... let's create a vaguely anthropomorphic robot and
>	equip its external surfaces with a complete covering of smaller video
>	displays, so that it can achieve the minor details of human appearance
>	by projection rather than by mechanical motion. Well, maybe our model
>	shop is good enough to do most of the details of the robot convincingly,
>	so we'll only have to project subtle details of facial expression.
>	Maybe just the eyes.

>	... if you are going to admit the presence of electronic or mechanical
>	devices between the subject under test and the human to be fooled,
>	you must accept the possibility that the test subject will be smart
>	enough to detect their presence and exploit their weaknesses...
>	consider a robot that looks no more anthropomorphic than your vacuum
>	cleaner, but that is possessed of moderate manipulative abilities and
>	a good visual perceptive apparatus.

>	Before the test commences, the robot sneakily rolls up to the
>	camera and removes the cover.  It locates the connections for the
>	external video output, and splices in a substitute connection to 
>	an external video source which it generates.  Then it replaces the
>	camera cover, so that everything looks normal.  And at test time,
>	the robot provides whatever image it wants the testers to see.
>	A dumb robot might have no choice but to look like a human being
>	in order to pass the test.  Why should a smart one be so constrained?


From this reply I infer that Freeman is largely concerned with the
question of appearance: Can a robot that doesn't really look like a
person SIMULATE looking like a person by essentially symbolic means,
plus add-on modular peripherals? In the papers under discussion (and in some
other iterations of this discussion on the net) I explicitly rejected appearance
as a criterion. (The reasons are given elsewhere.) What is important in
the robotic version is that it should be a human DO-alike, not a human
LOOK-alike. I am claiming that the (Total) object-manipulative (etc.)
performance of humans cannot be generated by a basically symbolic
module that is merely connected with peripheral modules. I am
hypothesizing (a) that symbolic representations must be NONMODULARLY
(i.e., not independently) grounded in nonsymbolic representations, (b)
that the Total Turing Test requires the candidate to display all of
our robotic capacities as well as our linguistic ones, and (c) that
even the linguistic ones could not be accomplished unless grounded in
the robotic ones. In none of this do the particulars of what the robot
(or its grey matter!) LOOK like matter.

Two last observations. First, what the "proximal stimulus" -- i.e.,
the physical energy pattern on the transducer surface -- PRESERVES
whereas the next (A/D) step -- the digital representation -- LOSES, is
everything about the full PHYSICAL configuration of the energy pattern
that cannot be recovered by inversion (D/A). (That's what the ongoing
concurrent discussion about the A/D distinction is in part concerned
with.) Second, I think there is a tendency to overcomplicate the
issues involved in the turing test by adding various arbitrary
elaborations to it. The basic questions are fairly simply stated
(though not so simple to answer). Focusing instead on ornamented
variants often seems to lead to begging the question or changing the
subject.
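
To make the first point above concrete, here is a toy sketch (mine, in
Python, and purely illustrative -- the signal, sampling rate and bit depth
are invented for the example): an A/D step discards physical detail that no
D/A inversion can restore.

# Toy illustration: A/D conversion loses what D/A cannot recover.
# Assumes a 1 kHz "proximal stimulus" sampled at 8 kHz, quantized to 3 bits.
import numpy as np

fs = 8000                                # sampling rate (Hz)
t = np.arange(0, 0.01, 1 / fs)           # 10 ms of signal
analog = np.sin(2 * np.pi * 1000 * t)    # stand-in for the energy pattern

bits = 3
levels = 2 ** bits
# A/D: map the range [-1, 1] onto 8 discrete codes
codes = np.round((analog + 1) / 2 * (levels - 1))
# D/A: invert the mapping as faithfully as the codes allow
reconstructed = codes / (levels - 1) * 2 - 1

rms_error = np.sqrt(np.mean((analog - reconstructed) ** 2))
print(f"RMS difference between stimulus and its A/D->D/A inversion: {rms_error:.3f}")
# The difference is nonzero: whatever fell between quantization levels is
# gone for good.

Nothing hangs on the particular numbers; the nonzero residue is just the
part of the proximal energy pattern that the digital representation no
longer carries.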

Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

harnad@mind.UUCP (Stevan Harnad) (11/01/86)

michaelm@bcsaic.UUCP (michael maxwell) writes:

>	I guess what I had in mind for the revised Turing test was not using
>	language at all--maybe I should have eliminated the sound link (and
>	writing). What in the way people behave (facial expressions, body
>	language etc.) would cue us to the idea that one is a human and the other
>	a robot?  What if you showed pictures to the examinees--perhaps
>	beautiful scenes, and revolting ones? This is more a test for emotions
>	than for mind (Mr. Spock would probably fail). But I think that a lot of
>	what we think of as human is tied up in this nonverbal/emotional level.

The modularity issue looms large again. I don't believe there's an
independent module for affective expression in human beings. It's all
-- to use a trendy though inadequate expression -- "cognitively
penetrable." There's also the issue of the TOTALITY of the Total
Turing Test, which was intended to remedy the underdetermination of
toy models/modules: It's not enough just to get a model to mimic our
facial expressions. That could all be LITERALLY done with mirrors
(and, say, some delayed feedback and some scrambling and
recombining), and I'm sure it could fool people, at least for a while.
I simply conjecture that this could not be done for the TOTALITY of
our performance capacity using only more of the same kinds of tricks
(analog OR symbolic).

The capacity to manipulate objects in the world in all the ways
we can and do do it (which happens to include naming and describing
them, i.e., linguistic acts) is a lot taller order than mimicking exclusively
our nonverbal expressive behavior. There may be (in an unfortunate mixed
metaphor) many more ways to skin (toy) parts of the theoretical cat than
all of it.

Three final points: (1) Your proposal seems to equivocate between the (more
important) formal functional component of the Total Turing Test (i.e., how do
we get a model to exhibit all of our performance capacities, be they
verbal or nonverbal?) and the informal, intuitive component (i.e., will it
be indistinguishable in all relevant respects from a person, TO a
person?). The motto would be: If you use something short of the Total
Turing Test, you may be able to fool some people some of the time, but not
all of the time. (2) There's nothing wrong in principle with a
nonverbal, even a nonhuman turing test; I think (higher) animals pass this
easily all the time, with virtually the same validity as humans, as
far as I'm concerned. But this version can't rely exclusively on
affective expression modules either. (3) Finally, as I've argued earlier,
all attempts to "capture" qualitative experience -- not just emotion,
but any conscious experience, such as what it's LIKE to see red or
to believe X -- amount to an unprofitable red herring in this
enterprise. The whole point of the Total Turing Test is that
performance-indistinguishability IS your only basis for inferring that
anyone but you has a mind (i.e., has emotions, etc.). In the paper I
dubbed this "methodological epiphenomenalism as a research strategy in
cognitive science."

By the way, you prejudged the question in the way you put it. A perfectly
noncommittal but monistic way of putting it would be: "What in the way
ROBOTS behave would cue us to the idea that one robot had a mind and
another did not?" This leaves it appropriately open for continuing
research just exactly which causal physical devices (= "robots"), whether
natural or artificial, do or do not have minds.


Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

harnad@mind.UUCP (Stevan Harnad) (11/01/86)

In his second net.ai comment on the abstracts of the two articles under
discussion, me@utai.UUCP (Daniel Simon) wrote:

>>	WHAT DOES THE "TOTAL TURING TEST" LOOK LIKE?...  Please
>>	forgive my impertinent questions, but I haven't read your
>>	articles, and I'm not exactly clear about what this "total"
>>	Turing test entails.

I replied (after longish attempts to explain in two separate iterations):

>"Try reading the articles."

Daniel Simon rejoined:

>	Well, not only did I consider this pretty snide, but when I sent you
>	mail privately, asking politely where I can find the articles in
>	question, I didn't even get an answer, snide or otherwise. So starting
>	with this posting, I refuse to apologize for being impertinent.
>	Nyah, nyah, nyah.

The same day, the following email came from Daniel Simon:

>	Subject:  Hoo, boy, did I put my foot in it:
>	Ooops....Thank you very much for sending me the articles, and I'm sorry
>	I called you snide in my last posting. If you see a bright scarlet glow 
>	in the distance, looking west from Princeton, it's my face. Serves me
>	right for being impertinent in the first place... As soon as I finish
>	reading the papers, I'll respond in full--assuming you still care what
>	I have to say... Thanks again. Yours shamefacedly, Daniel R. Simon.

This is a very new form of communication for all of us. We're just going to
have to work out a new code of Nettiquette. With time, it'll come. I
continue to care what anyone says with courtesy and restraint, and
intend to respond to everything of which I succeed in making sense.


Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

harnad@mind.UUCP (Stevan Harnad) (11/02/86)

Here are backups of 5 prior replies that never made it to mod.ai. They
are responses to Cugini, Kalish, Krulwich, Mozes and Paul. If you've
read them elsewhere, please skip this file...      Stevan Harnad

-----
(1)
In Message-ID: <8610190504.AA08059@ucbvax.Berkeley.EDU> on mod.ai
CUGINI, JOHN <cugini@nbs-vms.ARPA> replies to my claim that

>> there is no rational reason for being more sceptical about robots'
>> minds (if we can't tell their performance apart from that of people)
>> than about (other) peoples' minds.

with the following:

>       One (rationally) believes other people are conscious BOTH because
>       of their performance and because their internal stuff is a lot like
>       one's own.

This is a very important point and a subtle one, so I want to make
sure that my position is explicit and clear: I am not denying that
there exist some objective data that correlate with having a mind
(consciousness) over and above performance data. In particular,
there's (1) the way we look and (2) the fact that we have brains. What
I am denying is that this is relevant to our intuitions about who has a
mind and why. I claim that our intuitive sense of who has a mind is
COMPLETELY based on performance, and our reason can do no better. These
other correlates are only inessential afterthoughts, and it's irrational
to take them as criteria.

My supporting argument is very simple: We have absolutely no intuitive
FUNCTIONAL ideas about how our brains work. (If we did, we'd have long
since spun an implementable brain theory from our introspective
armchairs.) Consequently, our belief that brains are evidence of minds and
that the absence of a brain is evidence of the absence of a mind is based
on a superficial black-box correlation. It is no more rational than
being biased by any other aspect of appearance, such as the color of
the skin, the shape of the eyes or even the presence or absence of a tail.

To put it in the starkest terms possible: We wouldn't know what device
was and was not relevantly brain-like if it was staring us in the face
-- EXCEPT IF IT HAD OUR PERFORMANCE CAPACITIES (i.e., it could pass
the Total Turing Test). That's the only thing our intuitions have to
go on, and our reason has nothing more to offer either.

To take one last pass at setting the relevant intuitions: We know what
it's like to DO (and be able to do) certain things. Similar
performance capacity is our basis for inferring that what it's like
for me is what it's like for you (or it). We do not know anything
about HOW we do any of those things, or about what would count as the
right way and the wrong way (functionally speaking). Inferring that
another entity has a mind is an intuitive judgment based on performance.
It's called the (total) turing test. Inferring HOW other entities
accomplish their performance is ordinary scientific inference. We're in
no rational position to prejudge this profound and substantive issue on
the basis of the appearance of a lump of grey jelly to our untutored but
superstitious minds.

>       [W]e DO have some idea about the functional basis for mind, namely
>       that it depends on the brain (at least more than on the pancreas, say).
>       This is not to contend that there might not be other bases, but for
>       now ALL the minds we know of are brain-based, and it's just not
>       dazzlingly clear whether this is an incidental fact or somewhat
>       more deeply entrenched.

The question isn't whether the fact is incidental, but what its
relevant functional basis is. In other words, what is it about the
brain that's relevant and what incidental? We need the causal basis
for the correlation, and that calls for a hefty piece of creative
scientific inference (probably in theoretical bio-engineering). The
pancreas is no problem, because it can't generate the brain's
performance capacities. But it is simply begging the question to say
that brain-likeness is an EXTRA relevant source of information in
turing-testing robots, when we have no idea what's relevantly brain-like.

People were sure (as sure as they'll ever be) that other people had
minds long before they ever discovered they had brains. I myself believed
the brain was just a figure of speech for the first dozen or so years of
my life. Perhaps there are people who don't learn or believe the news
throughout their entire lifetimes. Do you think these people KNOW any
less than we do about what does or doesn't have a mind? Besides, how
many people do you think could really pick out a brain from a pancreas
anyway? And even those who can have absolutely no idea what it is
about the brain that makes it conscious; and whether a cow's brain or
a horse-shoe crab's has it; or whether any other device, artificial or
natural, has it or lacks it, or why. In the end everyone must revert to
the fact that a brain is as a brain does.

>       Why is consciousness a red herring just because it adds a level
>       of uncertainty?

Perhaps I should have said indeterminacy. If my arguments for
performance-indiscernibility (the turing test) as our only objective
basis for inferring mind are correct, then there is a level of
underdetermination here that is in no way comparable to that of, say,
the unobservable theoretical entities of physics (say, quarks, or, to
be more trendy, perhaps strings). Ordinary underdetermination goes
like this: How do I know that your theory's right about the existence
and presence of strings? Because WITH them the theory succeeds in
accounting for all the objective data (let's pretend), and without
them it does not. Strings are not "forced" by the data, and other
rival theories may be possible that work without them. But until these
rivals are put forward, normal science says strings are "real" (modulo
ordinary underdetermination).

Now try to run that through for consciousness: How do I know that your
theory's right about the existence and presence of consciousness (i.e.,
that your model has a mind)? "Because its performance is
turing-indistinguishable from that of creatures that have minds." Is
your theory dualistic? Does it give consciousness an independent,
nonphysical, causal role? "Goodness, no!" Well then, wouldn't it fit
the objective data just as well (indeed, turing-indistinguishably)
without consciousness? "Well..."

That's indeterminacy, or radical underdetermination, or what have you.
And that's why consciousness is a methodological red herring.

>       Even though any correlations will ultimately be grounded on one side
>       by introspection reports, it does not follow that we will never know,
>       with reasonable assurance, which aspects of the brain are necessary for
>       consciousness and which are incidental...Now at some level of difficulty
>       and abstraction, you can always engineer anything with anything... But
>       the "multi-realizability" argument has force only if its obvious
>       (which it ain't) that the structure of the brain at a fairly high
>       level (eg neuron networks, rather than molecules), high enough to be
>       duplicated by electronics, is what's important for consciousness.

We'll certainly learn more about the correlation between brain
function and consciousness, and even about the causal (functional)
basis of the correlation. But the correlation will really be between
function and performance capacity, and the rest will remain the intuitive
inference or leap of faith it always was. And since ascertaining what
is relevant about brain function and what is incidental cannot depend
simply on its BEING brain function, but must instead depend, as usual, on
the performance criterion, we're back where we started. (What do you
think is the basis for our confidence in introspective reports? And
what are you going to say about robots' introspective reports...?)

I don't know what you mean, by the way, about always being able to
"engineer anything with anything at some level of abstraction." Can
anyone engineer something to pass the robotic version of the Total
Turing Test right now? And what's that "level of abstraction" stuff?
Robots have to do their thing in the real world. And if my
groundedness arguments are valid, that ain't all done with symbols
(plus add-on peripheral modules).

Stevan Harnad

-----
(2)
In mod.ai, Message-ID: <861016-071607-4573@Xerox>,
 "charles_kalish.EdServices"@XEROX.COM writes:

>       About Stevan Harnad's two kinds of Turing tests [linguistic
>       vs. robotic]: I can't really see what difference the I/O methods
>       of your system makes. It seems that the relevant issue is what
>       kind of representation of the world it has.

I agree that what's at issue is what kind of representation of the
world the system has. But you are prejudging "representation" to mean
only symbolic representation, whereas the burden of the papers in
question is to show that symbolic representations are "ungrounded" and
must be grounded in nonsymbolic processes (nonmodularly -- i.e., NOT
by merely tacking on autonomous peripherals).

>       While I agree that, to really understand, the system would need some
>       non-purely conventional representation (not semantic if "semantic"
>       means "not operable on in a formal way" as I believe [given the brain
>       is a physical system] all mental processes are formal  then "semantic"
>       just means governed by a process we don't understand yet), giving and
>       getting through certain kinds of I/O doesn't make much difference.

"Non-purely conventional representation"? Sounds mysterious. I've
tried to make a concrete proposal as to just what that hybrid
representation should be like.

"All mental processes are formal"? Sounds like prejudging the issue again.
It may help to be explicit about what one means by formal/symbolic:
Symbolic processing is the manipulation of (arbitrary) physical tokens
in virtue of their shape on the basis of formal rules. This is also
called syntactic processing. The formal goings-on are also
"semantically interpretable" -- they have meanings; they are connected
to objects in the outside world that they are about. The Searle
problem is that so far the only devices that do semantic
interpretations intrinsically are ourselves. My proposal is that
grounding the representations nonmodularly in the I/O connection may provide
the requisite intrinsic semantics. This may be the "process we don't
understand yet." But it means giving up the idea that "all mental
processes are formal" (which in any case does not follow, at least on
the present definition of "formal," from the fact that "the brain is a
physical system").

>       Two for instances: SHRDLU operated on a simulated blocks world. The
>       modifications to make it operate on real blocks would have been
>       peripheral and not have affected the understanding of the system.

This is a variant of the "Triviality of Transduction (& A/D, & D/A,
and Effectors)" Argument (TT) that I've responded to in another
iteration. In brief, it's toy problems like SHRDLU that are trivial.
The complete translatability of internal symbolic descriptions into
the objects they stand for (and the consequent partitioning of
the substantive symbolic module and the trivial nonsymbolic
peripherals) may simply break down, as I predict, for life-size
problems approaching the power to pass the Total Turing Test.

To put it another way: There is a conjecture implicit in the solutions
to current toy/microworld problems, namely, that something along
essentially the same lines will suitably generalize to the
grown-up/macroworld problem. What I'm saying amounts to a denial of
that conjecture, with reasons. It is not a reply to me to simply
restate the conjecture.

>       Also, all systems take analog input and give analog output. Most receive
>       finger pressure on keys and return directed streams of ink or electrons.
>       It may be that a robot would need  more "immediate" (as opposed to
>       conventional) representations, but it's neither necessary nor sufficient
>       to be a robot to have those representations.

The problem isn't marrying symbolic systems to any old I/O. I claim
that minds are "dedicated" systems of a particular kind: The kind
capable of passing the Total Turing Test. That's the only necessity and
sufficiency in question.

And again, the mysterious word "immediate" doesn't help. I've tried to
make a specific proposal, and I've accepted the consequences, namely, that it's
just not going to be a "conventional" marriage at all, between a (substantive)
symbolic module and a (trivial) nonsymbolic module, but rather a case of
miscegenation (or a sex-change operation, or some other suitably mixed
metaphor). The resulting representational system will be grounded "bottom-up"
in nonsymbolic function (and will, I hope, display the characteristic
"hybrid vigor" that our current pure-bred symbolic and nonsymbolic processes
lack), as I've proposed (nonmetaphorically) in the papers under discussion.

Stevan Harnad

-----
(3)
KRULWICH@C.CS.CMU.EDU (Bruce Krulwich) writes:

>	i disagree...that symbols, and in general any entity that a computer
>	will process, can only be dealt with in terms of syntax. for example,
>	when i add two integers, the bits that the integers are encoded in are
>	interpreted semantically to combine to form an integer. the same
>	could be said about a symbol that i pass to a routine in an
>	object-oriented system such as CLU, where what is done with
>	the symbol depends on its type (which i claim is its semantics)

Syntax is ordinarily defined as formal rules for manipulating physical
symbol tokens in virtue of their (arbitrary) SHAPES. The syntactic goings-on
are semantically interpretable, that is, the symbols are also
manipulable in virtue of their MEANINGS, not just their shapes.
Meaning is a complex and ill-understood phenomenon, but it includes
(1) the relation of the symbols to the real objects they "stand for" and
(2) a subjective sense of understanding that relation (i.e., what
Searle has for English and lacks for Chinese, despite correctly
manipulating its symbols). So far the only ones who seem to
do (1) and (2) are ourselves. Redefining semantics as manipulating symbols
in virtue of their "type" doesn't seem to solve the problem...
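
To make the shape/meaning distinction concrete, here is a minimal sketch
(mine, in Python, and purely illustrative): a "machine" that rewrites token
strings by shape alone, which we may then interpret as unary addition -- or
not interpret at all.

# Purely syntactic symbol manipulation: the rule mentions only token
# identity and position, never what the tokens are "about".
def rewrite(expr: str) -> str:
    """Apply one formal rule: delete the first '+' between tally strings."""
    return expr.replace("+", "", 1)

print(rewrite("|||+||"))   # "|||||" -- interpretable by US as 3 + 2 = 5
print(rewrite("xx+x"))     # "xxx"   -- same syntax, no arithmetic reading needed

Adding two integers in hardware is the same situation: the adder manipulates
bit patterns in virtue of their shapes; that the patterns "combine to form an
integer" is an interpretation we place on them.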

>	i think that the reason that computers are so far behind the
>	human brain in semantic interpretation and in general "thinking"
>	is that the brain contains a hell of a lot more information
>	than most computer systems, and also the brain makes associations
>	much faster, so an object (ie, a thought) is associated with
>	its semantics almost instantly.

I'd say you're pinning a lot of hopes on "more" and "faster." The
problem just might be somewhat deeper than that...

Stevan Harnad

-----
(4)
On mod.ai, in Message-ID: <8610160605.AA09268@ucbvax.Berkeley.EDU>
on 16 Oct 86 06:05:38 GMT, eyal@wisdom.BITNET (Eyal mozes) writes:

>       I don't see your point at all about "categorical
>       perception". You say that "differences between reds and differences
>       between yellows look much smaller than equal-sized differences that
>       cross the red/yellow boundary". But if they look much smaller, this
>       means they're NOT "equal-sized"; the differences in wave-length may be
>       the same, but the differences in COLOR are much smaller.

There seems to be a problem here, and I'm afraid it might be the
mind/body problem. I'm not completely sure what you mean. If all
you mean is that sometimes equal-sized differences in inputs can be
made unequal by internal differences in how they are encoded, embodied
or represented -- i.e., that internal physical differences of some
sort may mediate the perceived inequalities -- then I of course agree.
There are indeed innate color-detecting structures. Moreover, it is
the hypothesis of the paper under discussion that such internal
categorical representations can also arise as a consequence of
learning.
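
On this first reading, here is a toy numerical illustration (mine; the
logistic warping, the boundary at 590 nm and the 10 nm spacings are invented
for the example) of how a learned internal categorical representation can
make physically equal-sized differences come out perceptually unequal.

# Toy categorical-perception illustration: equal 10 nm steps are unequal
# on an internal axis warped by a learned red/yellow boundary near 590 nm.
import math

def perceived(wavelength_nm: float, boundary: float = 590.0, sharpness: float = 0.5) -> float:
    """Map wavelength onto a 0-1 'category' axis via a learned boundary."""
    return 1.0 / (1.0 + math.exp(-sharpness * (wavelength_nm - boundary)))

within_red    = abs(perceived(600) - perceived(610))   # both on the "red" side
within_yellow = abs(perceived(570) - perceived(580))   # both on the "yellow" side
across        = abs(perceived(585) - perceived(595))   # straddles the boundary

print(f"within red:      {within_red:.3f}")
print(f"within yellow:   {within_yellow:.3f}")
print(f"across boundary: {across:.3f}")
# All three physical differences are 10 nm, yet the across-boundary pair is
# separated far more on the internal axis.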

If what you mean, however, is that there exist qualitative differences among
equal-sized input differences with no internal physical counterpart, and
that these are in fact mediated by the intrinsic nature of phenomenological
COLOR -- that discontinuous qualitative inequalities can occur when
everything physical involved, external and internal, is continuous and
equal -- then I am afraid I cannot follow you.

My own position on color quality -- i.e., "what it's like" to
experience red, etc. -- is that it is best ignored, methodologically.
Psychophysical modeling is better off restricting itself to what we CAN
hope to handle, namely, relative and absolute judgments: What differences
can we tell apart in pairwise comparison (relative discrimination) and
what stimuli or objects can we label or identify (absolute
discrimination)? We have our hands full modeling this. Further
concerns about trying to capture the qualitative nature of perception,
over and above its performance consequences [the Total Turing Test]
are, I believe, futile.

This position can be dubbed "methodological epiphenomenalism." It amounts
to saying that the best empirical theory of mind that we can hope to come
up with will always be JUST AS TRUE of devices that actually have qualitative
experiences (i.e., are conscious) as of devices that behave EXACTLY AS IF
they had qualitative experiences (i.e., turing-indistinguishably), but do
not (if such insentient look-alikes are possible). The position is argued
in detail in the papers under discussion.

>       Your whole theory is based on the assumption that perceptual qualities
>       are something physical in the outside world (e.g., that colors ARE
>       wave-lengths). But this is wrong. Perceptual qualities represent the
>       form in which we perceive external objects, and they're determined both
>       by external physical conditions and by the physical structure of our
>       sensory apparatus; thus, colors are determined both by wave-lengths and
>       by the physical structure of our visual system. So there's no apriori
>       reason to expect that equal-sized differences in wave-length will lead
>       to equal-sized differences in color, or to assume that deviations from
>       this rule must be caused by internal representations of categories. And
>       this seems to completely cut the grounds from under your theory.

Again, there is nothing for me to disagree with if you're saying that
perceived discontinuities are mediated by either external or internal
physical discontinuities. In modeling the induction and representation
of categories, I am modeling the physical sources of such
discontinuities. But there's still an ambiguity in what you seem to be
saying, and I don't think I'm mistaken if I think I detect a note of
dualism in it. It all hinges on what you mean by "outside world." If
you only mean what's physically outside the device in question, then of
course perceptual qualities cannot be equated with that. It's internal
physical differences that matter.

But that doesn't seem to be all you mean by "outside world." You seem
to mean that the whole of the physical world is somehow "outside" conscious
perception. What else can you mean by the statement that "perceptual
qualities represent the form [?] in which we perceive external
objects" or that "there's no...reason to expect that...[perceptual]
deviations from [physical equality]...must be caused by internal
representations of categories."

Perhaps I have misunderstood, but either this is just a reminder that
there are internal physical differences one must take into account too
in modeling the induction and representation of categories (but then
they are indeed taken into account in the papers under discussion, and
I can't imagine why you would think they would "completely cut the
grounds from under" my theory) or else you are saying something metaphysical
with which I cannot agree.

One last possibility may have to do with what you mean by
"representation." I use the word eclectically, especially because the
papers are arguing for a hybrid representation, with the symbolic
component grounded in the nonsymbolic. So I can even agree with you
in doubting that mere symbolic differences are likely to be the sole
cause of psychophysical discontinuities, although, being physically
embodied, they are in principle sufficient. I hypothesize, though,
that nonsymbolic differences are also involved in psychophysical
discontinuities.

>       My second criticism is that, even if "categorical perception" really
>       provided a base for a theory of categorization, it would be very
>       limited; it would apply only to categories of perceptual qualities. I
>       can't see how you'd apply your approach to a category such as "table",
>       let alone "justice".

How abstract categories can be grounded "bottom-up" in concrete psychophysical
categories is the central theme of the papers under discussion. Your remarks
were based only on the summaries and abstracts of those papers. By now I
hope the preprints have reached you, as you requested, and that your
question has been satisfactorily answered. To summarize "grounding"
briefly: According to the model, (learned) concrete psychophysical
categories are formed by sampling positive and negative instances of a
category and then encoding the invariant information that will
reliably identify further instances. This might be how one learned the
concrete categories "horse" and "striped," for example. The (concrete)
category "zebra" could then be learned without any direct perceptual
ACQUAINTANCE with positive and negative instances, simply by being
told that a zebra is a striped horse. That is, the category can be
learned from a symbolic DESCRIPTION alone, merely by recombining the
labels of the already-grounded perceptual categories.

All categorization involves some abstraction and generalization (even
"horse," and certainly "striped" did), so abstract categories such as
"goodness," "truth" and "justice" could be learned and represented by
recursion on already grounded categories, their labels and their
underlying representations. (I have no idea why you think I'd have a
problem with "table.")
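
For concreteness, here is a toy sketch of the grounding scheme (an
illustration only, not the model in the papers; the feature sets and
the crude invariance rule are made up): two concrete categories are
induced from positive and negative instances, and a third is then
acquired by symbolic DESCRIPTION alone, by recombining their labels.

    def induce_detector(positives, negatives):
        """Keep just the features shared by every positive instance and by no
        negative instance -- a crude stand-in for 'encoding the invariants'."""
        invariant = set.intersection(*positives)
        for n in negatives:
            invariant -= n
        return lambda instance: invariant <= instance

    # Concrete categories, grounded in direct acquaintance with instances
    # (an instance here is just a set of observed features).
    horse = induce_detector(
        positives=[{"four-legged", "maned", "hoofed"},
                   {"four-legged", "maned", "hoofed", "brown"}],
        negatives=[{"four-legged", "hoofed", "horned"}])
    striped = induce_detector(
        positives=[{"striped", "small", "winged"},
                   {"striped", "four-legged", "maned", "hoofed"}],
        negatives=[{"plain", "four-legged"}])

    # "A zebra is a striped horse": acquired from the description alone,
    # with no new instances sampled at all.
    def zebra(instance):
        return horse(instance) and striped(instance)

    print(zebra({"four-legged", "maned", "hoofed", "striped"}))  # True
    print(zebra({"four-legged", "maned", "hoofed", "brown"}))    # False

The point of the sketch is only that the symbolic recombination does
no work unless the labels it recombines are already grounded in
detectors induced from instances.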

>       Actually, there already exists a theory of categorization that is along
>       similar lines to your approach, but integrated with a detailed theory
>       of perception and not subject to the two criticisms above; that is the
>       Objectivist theory of concepts. It was presented by Ayn Rand... and by
>       David Kelley...

Thanks for the reference, but I'd be amazed to see an implementable,
testable model of categorization performance issue from that source...


Stevan Harnad  

-----
(5)
Machines: Natural and Man-Made

To Daniel Simon's reply in AI digest (V4 #226):

>One question you haven't addressed is the relationship between intelligence and
>"human performance".  Are the two synonymous?  If so, why bother to make
>artificial humans when making natural ones is so much easier (not to mention
>more fun)? 

Daniel Paul adds:

>	This is a question that has been bothering me for a while. When it
>	is so much cheaper (and possible now, while true machine intelligence
>	may be just a dream) why are we wasting time training machines when we
>	could be training humans instead? The only reasons that I can see are
>	that intelligent systems can be made small enough and light enough to
>	sit on bombs. Are there any other reasons?

Apart from the two obvious ones -- (1) so machines can free people to do
things machines cannot yet do, if people prefer, and (2) so machines can do
things that people can only do less quickly and efficiently, if people
prefer -- there is the less obvious reply already made to Daniel
Simon: (3) because trying to get machines to display all our performance
capacity (the Total Turing Test) is our only way of arriving at a functional
understanding of what kinds of machines we are, and how we work. 

[Before the cards and letters pour in to inform me that I've used
"machine" incoherently: A "machine," (writ large, Deus Ex Machina) is
just a physical, causal system. Present-generation artificial machines
are simply very primitive examples.]

Stevan Harnad

harnad@mind.UUCP (Stevan Harnad) (11/03/86)

The following is a response on net.ai to a comment on mod.ai.
Because of problems with posting to mod.ai, I am temporarily replying to net.ai.
On mod.ai cugini@NBS-VMS.ARPA ("CUGINI, JOHN") writes:

>	You seem to want to pretend that we know absolutely nothing about the
>	basis of thought in humans, and to "suppress" all evidence based on
>	such knowledge. But that's just wrong. Brains *are* evidence for mind,
>	in light of our present knowledge.

What I said was that we knew absolutely nothing about the FUNCTIONAL
basis of thought in humans, i.e., about how brains or relevantly
similar devices WORK. Hence we wouldn't have the vaguest idea if a
given lump of grey matter was in fact the right stuff, or just a
gelatinous look-alike -- except by examining its performance (i.e., turing)
capacity. [The same is true, by the way, mutatis mutandis, for a
better structural look-alike -- with cells, synapses, etc. We have no
functional idea of what differentiates a mind-supporting look-alike
from a comatose one, or one from a nonviable fetus. Without the
performance criterion the brain cue could lead us astray as often as
not regarding whether there was indeed a mind there. And that's not to
mention that we knew perfectly well (perhaps better, even) how to judge
whether somebody had a mind before e'er we ope'd a skull or knew what we
had chanced upon there.]

If you want a trivial concession, though, I'll make one: If you saw an
inert body totally incapable of behavior, then or in the future, and
you entertained some prior subjective probability p that it had a
mind, then opening its skull and finding something anatomically and
physiologically brain-like in there would correspondingly raise the
probability that it had, or had had, a mind. Ditto for an inert
alien species. And I agree that that would be rational. However, I don't
think that any of that has much to do with the problem of modeling the mind, or
with the relative strengths or weaknesses of the Total Turing Test.
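
For what the trivial concession amounts to arithmetically, here is the
standard Bayesian reading in toy form (the numbers are arbitrary
values of mine, nothing more):

    def update(prior, likelihood_if_minded, likelihood_if_mindless):
        """Posterior probability of mindedness after observing brain-like anatomy."""
        joint_minded = prior * likelihood_if_minded
        joint_mindless = (1.0 - prior) * likelihood_if_mindless
        return joint_minded / (joint_minded + joint_mindless)

    # If brain-like anatomy were, say, ten times likelier inside minded than
    # inside mindless inert bodies, a prior of 0.2 would rise to about 0.71.
    print(update(0.2, likelihood_if_minded=0.9, likelihood_if_mindless=0.09))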

>	People in, say, 1500 AD were perfectly rational in predicting
>	tides based on the position of the moon (and vice-versa)
>	even though they hadn't a clue as to the mechanism of interaction.
>	If you keep asking "why" long enough, *all* science is grounded on
>	such brute-fact correlation (why do like charges repel, etc.) - as
>	Hume pointed out a while back.

Yes, but people then and even earlier were just as good at "predicting" the
presence of mind WITHOUT any reference to the brain. And in ambiguous
cases, behavior was and is the only rational arbiter. Consider, for
example, which way you'd go if (1) an alien body persisted in behaving like a
clock-like automaton in every respect -- no affect, no social interaction,
just rote repetition -- but it DID have something that was indistinguishable
(on the minute and superficial information we have) from a biological-like
nervous system, versus (2) if a life-long close friend of yours had
to undergo his first operation, and when they opened him up, he turned
out to be all transistors on the inside. I don't set much store by
this hypothetical sci-fi stuff, especially because it's not clear
whether the "possibilities" we are contemplating are indeed possible. But
the exercise does remind us that, after all, performance capacity is
our primary criterion, both logically and intuitively, and its
black-box correlates have whatever predictive power they may have
only as a secondary, derivative matter. They depend for their
validation on the behavioral criterion, and in cases of conflict,
behavior continues to be the final arbiter.

I agree that scientific inference is grounded in observed correlations. But
the primary correlation in this special case is, I am arguing, between
mental states and performance. That's what both our inferences and our
intuitions are grounded in. The brain correlate is an additional cue, but only
inasmuch as it agrees with performance. As to CAUSATION -- well, I'm
sceptical that anyone will ever provide a completely satisfying account
of the objective causes of subjective effects. Remember that, except for
the special case of the mind, all other scientific inferences have
only had to account for objective/objective correlations (and [or,
more aptly, via] their subjective/subjective experiential counterparts).
The case under discussion is the first (and I think only) case of
objective/subjective correlation and causation. Hence all prior bets,
generalizations or analogies are off or moot.

>	other brains... are, by definition, relevantly brain-like

I'd be interested in knowing what current definition will distinguish
a mind-supporting brain from a non-mind-supporting brain, or even a
pseudobrain. (That IS the issue, after all, in claiming that the brain
is an INDEPENDENT predictor of mindedness.)

>	Let me re-cast Harnad's argument (perhaps in a form unacceptable to
>	him): We can never know any mind directly, other than our own, if we
>	take the concept of mind to be something like "conscious intelligence" -
>	ie the intuitive (and correct, I believe) concept, rather than
>	some operational definition, which has been deliberately formulated
>	to circumvent the epistemological problems.  (Harnad, to his credit,
>	does not stoop to such positivist ploys.)  But the only external
>	evidence we are ever likely to get for "conscious intelligence"
>	is some kind of performance.  Moreover, the physical basis for
>	such performance will be known only contingently, ie we do not
>	know, a priori, that it is brains, rather than automatic dishwashers,
>	which generate mind, but rather only as an a posteriori correlation.
>	Therefore, in the search for mind, we should rely on the primary
>	criterion (performance), rather than on such derivative criteria
>	as brains. I pretty much agree with the above account except for the
>	last sentence which prohibits us from making use of derivative
>	criteria.  Why should we limit ourselves so?  Since when is that part
>	of rationality?

I accept the form in which you've recast my argument. The reasons that
brainedness is not a good criterion are the following (I suppose I
should stop saying it is not a "rational" criterion, having made the
minor concession I did above): Let's call being able to pass the Total
Turing Test the "T" correlate of having a mind, and let's call having a brain
the "B" correlate. (1) The validity of B depends completely on T. We
have intuitions about the way we and others behave, and what it feels
like; we have none about having brains. (2) In case of conflict
between T and B, our intuitions (rationally, I suggest) go with T rather
than B. (3) The subjective/objective issue (i.e., the mind/body
problem) mentioned above puts these "correlations" in a rather
different category from other empirical correlations, which are
uniformly objective/objective. (4) Looked at sufficiently minutely and
functionally, we don't know what the functionally relevant as opposed to the
superficial properties of a brain are, insofar as mind-supportingness
is concerned; in other words, we don't even know what's a B and what's
just naively indistinguishable from a B (this is like a caricature of
the turing test). Only T will allow us to pick them out.

I think those are good enough reasons for saying that B is not a good
independent criterion. That having been said, let me concede that for a
radical sceptic, neither is T, for pretty much the same
reasons! This is why I am a methodological epiphenomenalist.

>	No, the fact is we do have more reason to suppose mind of other
>	humans than of robots, in virtue of an admittedly derivative (but
>	massively confirmed) criterion.  And we are, in this regard, in an
>	epistemological position *superior* to those who don't/didn't know
>	about such things as the role of the brain, ie we have *more* reason
>	to believe in the mindedness of others than they do.  That's why
>	primitive tribes (I guess) make the *mistake* of attributing
>	mind to trees, weather, etc.  Since raw performance is all they
>	have to go on, seemingly meaningful activity on the part of any
>	old thing can be taken as evidence of consciousness.  But we
>	sophisticates have indeed learned a thing or two, in particular, that
>	brains support consciousness, and therefore we (rationally) give the
>	benefit of the doubt to any brained entity, and the anti-benefit to
>	un-brained entities.  Again, not to say that we might not learn about
>	other bases for mind - but that hardly disparages brainedness as a
>	rational criterion for mindedness.

A trivially superior position, as I've suggested. Besides, the
primitive's mistake (like the toy AI-modelers') is in settling for
anything less than the Total Turing Test; the mistake is decidedly NOT
the failure to hold out for the possession of a brain. I agree that it's
rational to take brainedness as an additional corroborative cue, if you
ever need one, but since it's completely useless when it fails to corroborate
or conflicts with the Total Turing criterion, of what independent use is it?

Perhaps I should repeat that I take the context for this discussion to
be science rather than science fiction, exobiology or futurology. The problem
we are presumably concerned with is that of providing an explanatory
model of the mind along the lines of, say, physics's explanatory model
of the universe. Where we will need "cues" and "correlates" is in
determining whether the devices we build have succeeded in capturing
the relevant functional properties of minds. Here the (ill-understood)
properties of brains will, I suggest, be useless "correlates." (In
fact, I conjecture that theoretical neuroscience will be led by, rather
than itself leading, theoretical "mind-science" [= cognitive
science?].) In sci-fi contexts, where we are guessing about aliens'
minds or those of comatose creatures, having a blob of grey matter in
the right place may indeed be predictive, but in the cog-sci lab it is
not.

>	there's really not much difference between relying on one contingent
>	correlate (performance) rather than another (brains) as evidence for
>	the presence of mind.

To a radical sceptic, as I've agreed above. But there is to a working
cognitive scientist (whose best methodological stance, I suggest, is
epiphenomenalism).


>	I know consciousness (my own, at least) exists, not as
>	some derived theoretical construct which explains low-level data
>	(like magnetism explains pointer readings), but as the absolutely
>	lowest rock-bottom datum there is.  Consciousness is the data,
>	not the theory - it is the explicandum, not the explicans (hope
>	I got that right).  It's true that I can't directly observe the
>	consciousness of others, but so what?  That's an epistemological
>	inconvenience, but it doesn't make consciousness a red herring.

I agree with most of this, and it's why I'm not, for example, an
"eliminative materialist." But agreeing that consciousness is data
rather than theory does not entail that it's the USUAL kind of data of
empirical science. I KNOW I have a mind. Every other instance is
radically different from this unique one: I can only guess, infer. Do
you know of any similar case in normal scientific inference? This is
not just an "epistemological inconvenience," it's a whole 'nother ball
game. If we stick to the standard rules of objective science (which I
recommend), then turing-indistinguishable performance modeling is indeed
the best we can aspire to. And that does make consciousness a red
herring.

>	...being-composed-of-protein might not be as practically incidental
>	as many assume.  Frinstance, at some level of difficulty, one can
>	get energy from sunlight "as plants do."  But the issues are:
>	do we get energy from sunlight in the same way?  How similar do
>	we demand that the processes are?...if we're interested in simulation at
>	a lower level of abstraction, eg, photosynthesis, then, maybe, a
>	non-biological approach will be impractical.  The point is we know we
>	can simulate human chess-playing abilities with non-biological
>	technology.  Should we just therefore declare the battle for mind won,
>	and go home?  Or ask the harder question: what would it take to get a
>	machine to play a game of chess like a person does, ie, consciously.

This sort of objection to a toy problem like chess (an objection I take to
be valid) cannot be successfully redirected at the Total Turing Test, and
that was one of the central points of the paper under discussion. Nor
are the biological minutiae of modeling plant photosynthesis analogous to the
biological minutiae of modeling the mind: The OBJECTIVE data in the
mind case are what you can observe the organism to DO. Photosynthesis
is something a plant does. In both cases one might reasonably demand
that a veridical model should mimic the data as closely as possible.
Hence the TOTAL Turing Test.

But now what happens when you start bringing in physiological data, in the
mind case, to be included with the performance data? There's no
duality in the case of photosynthesis, nor is there any dichotomy of
levels. Aspiring to model TOTAL photosynthesis is aspiring to get
every chemical and temporal detail right. But what about the mind
case? On the one hand, we both agree with the radical sceptic that
NEITHER mimicking the behavior NOR mimicking the brain can furnish
"direct" evidence that you've captured mind. So whereas getting every
(observable) photosynthetic detail right "guarantees" that you've
captured photosynthesis, there's no such guarantee with consciousness.

So there's half of the disanalogy. Now return to the hypothetical
possibilities we were considering earlier: What if brain data and
behavioral data compete? Which way should a nonsceptic vote? I'd go
with behavior. Besides, it's an empirical question, as I said in the
papers under discussion, whether or not brain constraints turn out to
be relevant on the way to Total Turing Utopia. Way down the road,
after all, the difference between mind-performance and
brain-performance may well become blurred. Or it may not. I think the
Total Turing Test is the right provisional methodology for getting you
there, or at least getting you close enough. The rest may very well
amount to only the "fine tuning."

>	BTW, I quite agree with your more general thesis on the likely
>	inadequacy of symbols (alone) to capture mind.

I'm glad of that. But I have to point out that a lot of what you
appear to disagree about went into the reasons supporting that very
thesis, and vice versa.

-----

May I append here a reply to andrews@ubc-cs.UUCP (Jamie Andrews) who
wrote:

>	This endless discussion about the Turing Test makes the
>	"eliminative materialist" viewpoint very appealing:  by the
>	time we have achieved something that most people today would
>	call intelligent, we will have done it through disposing of
>	concepts such as "intelligence", "consciousness", etc.
>	Perhaps the reason we're having so much trouble defining
>	a workable Turing Test is that we're essentially trying to
>	fit a square peg into a round hole, belabouring some point
>	which has less relevance than we realize.  I wonder what old
>	Alan himself would say about the whole mess.

On the contrary, rather than disposing of them, we will finally have
some empirical and theoretical idea of what their functional basis
might be, rather than simply knowing what it's like to have them. And
if we don't first sort out our methodological constraints, we're not
headed anywhere but in hermeneutic circles.

Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

kgd@rlvd.UUCP (Keith Dancey) (11/03/86)

In article <5@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>
>What do you think "having intelligence" is? Turing's criterion
>effectively made it: having performance capacity that is indistinguishable
>from human performance capacity. And that's all "having a mind"
>amounts to (by this objective criterion). ...
 
At the risk of sidetracking this discussion, I don't think it wise to try
and equate 'mind' and 'intelligence'.  A 'mind' is an absolute thing, but
'intelligence' is relative.
 
For instance, most people would, I believe, accept that a monkey has a
'mind'.  However, they would not necessarily so easily accept that a
monkey has 'performance capacity that is indistinguishable from human
performance capacity'.
 
On the other hand, many people would accept that certain robotic
processes had 'intelligence', but would be very reluctant to attribute
them with 'minds'.
 
I think there is something organic about 'minds', but 'intelligence' can
be codified, within limits, of course.
 
I apologise if this appears as a red-herring in the argument.
 
-- 
Keith Dancey,                                UUCP:   ..!mcvax!ukc!rlvd!kgd
Rutherford Appleton Laboratory,
Chilton, Didcot, Oxon  OX11 0QX             
                                            JANET:       K.DANCEY@uk.ac.rl
Tel: (0235) 21900   ext 5716