[comp.ai] Late thoughts on T Test, rooms, and functions

cjoslyn@bingvaxu.cc.binghamton.edu (Cliff Joslyn) (01/07/90)

Referring to the Sci.  Am.  articles, am I correct in identifying a
serious problem with the computationalist assumption that brains
calculate functions? Functions are many-one and implemented by
deterministic algorithms.  No animal behavior, let alone human behavior
resulting from thought, is strictly deterministic.  How can we avoid
error if we assume minds calculate functions?

Do we know that humans (some, most, all) pass the Turing Test?

Why is the T Test characterized as being wholly sufficient for
ascertaining intelligence (i.e.  passing = intelligence, failing =
non-intelligence)? Like statistical tests, why isn't passing a T Test
merely *evidence for* intelligence? In problems of induction, no finite
amount of evidence is sufficient to force the inference; rather,
refusing to make the inference becomes sillier and sillier.  Similarly,
as we give harder and harder T Tests (and the system passes), it becomes
more difficult to deny its intelligence, although still impossible to
affirm it.  The Chinese room is intended as a limiting example, but like
all good philosophical (ideal) examples, is impossible to construct. 

What is the significance of something really not being intelligent when,
for all the observations we can possibly make on it, it appears to be
intelligent (e.g.  Searle's room)? Isn't the query "is it *really*
intelligent" vacuous under such conditions, or at least undecidable?
-- 
O------------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large, cjoslyn@bingvaxu.cc.binghamton.edu
| Systems Science, SUNY Binghamton, Binghamton NY 13901, USA
V All the world is biscuit shaped. . .

dhw@itivax.iti.org (David H. West) (01/08/90)

In article <2762@bingvaxu.cc.binghamton.edu> cjoslyn@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
>What is the significance of something really not being intelligent when,
>for all the observations we can possibly make on it, it appears to be
>intelligent (e.g.  Searle's room)? 

If one refrains from reifying intelligence (yes, I know it's too
late), one doesn't even ask that question.

Searle's position seems to me to be the result of adopting an
indefensible view of understanding;  our everyday model of
understanding asserts that it is 
1) boolean-valued (you either have it or you don't on a given topic); 
2) atomic (it is magically available to introspection, but has no
   components); and
3) veridical (you're infallible about what you claim to understand).    

Additionally, Searle implicitly assumes understanding to be 

4) static (he wouldn't learn Chinese by following the rules).

None of these assertions withstands much examination.

-David West    dhw@iti.org

gilham@csl.sri.com (Fred Gilham) (01/09/90)

dhw@itivax.iti.org (David H. West) writes:
----------
Searle's position seems to me to be the result of adopting an
indefensible view of understanding;  our everyday model of
understanding asserts that it is 
1) boolean-valued (you either have it or you don't on a given topic); 
2) atomic (it is magically available to introspection, but has no
   components); and
3) veridical (you're infallible about what you claim to understand).    

Additionally, Searle implicitly assumes understanding to be 

4) static (he wouldn't learn Chinese by following the rules).

None of these assertions withstands much examination.
----------

I am not a cognitive psychologist, so I am on somewhat uncertain
ground here, but I don't feel the above points are necessary to the
argument about understanding.

Boolean valued:  We often say ``I partially understand what you are
saying.''  Nobody finds anything strange about this.  I know the
experience I have when I say this, and I know how it differs from when
I say, ``Yes, I understand what you mean.''

Atomic:  Why must Searle's argument depend on understanding not having
any components?  Why can't memory, or logic, or whimsy, play some
role in what we mean when we say ``I understand''?

Veridical:  Searle's argument does not depend on the statement ``I
understand'' being veridical, merely on the statement ``I don't
understand'' being veridical.  I don't see any problem with this.

Static:  The Chinese language is not even relevant to what Searle is
doing by following the rules.  Take the following example:

RULE:  If you see the number 7211111932971141013212111111763 for the
first time, subtract 7211111859580047803001600101617 from it and
return the result.  If you see it again, add
7332115963839891340419952196998399898370 to it and return the result.

I am not telling you what this is all about (yet).  In principle, any
rule must take some form like this, though more complicated, applying
only computable operations.  Why should you associate it with language?
You are doing arithmetic.  From the outside, the system looks like it is
doing language understanding.  But Searle can follow the rules without
being told why he is doing them.  He could, for all he knows, be
computing rocket trajectories or playing chess.

The example rule takes the decimal ASCII representation of ``How are
you?'' and, the first time it appears, transforms it into ``I'm
fine.''  The second time, it gets transformed to ``I said I'm fine!''
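The arithmetic in the rule can be checked mechanically.  Here is a short
sketch of the construction (in Python, an editorial addition rather than
part of the 1990 post; the names `encode' and `decode' are mine).  The
constants are derived from the phrases rather than copied:

```python
def encode(phrase):
    """Concatenate each character's decimal ASCII code into one integer."""
    return int("".join(str(ord(c)) for c in phrase))

def decode(number):
    """Invert encode() by peeling off ASCII codes greedily.

    Printable ASCII codes run from 32 to 126, so a three-digit code is
    always in 100..126 and a two-digit code in 32..99; trying width 3
    first is therefore unambiguous.
    """
    digits, out, i = str(number), [], 0
    while i < len(digits):
        for width in (3, 2):
            code = int(digits[i:i + width])
            if 32 <= code <= 126:
                out.append(chr(code))
                i += width
                break
        else:
            raise ValueError("not a decimal-ASCII number")
    return "".join(out)

question = encode("How are you?")
subtrahend = question - encode("I'm fine.")
addend = encode("I said I'm fine!") - question

# The rule, with its constants derived:
#   first sighting:  reply = question - subtrahend
#   second sighting: reply = question + addend
```

Following such a rule is pure arithmetic on large integers; nothing in
the procedure itself reveals that a conversation is taking place.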

kpfleger@phoenix.Princeton.EDU (Karl Robert Pfleger) writes:
----------
The strong AI position is actually a little more general (as I
understand it) because 2 can do 3, but might also be able to do things
which 3 can't.

----------

This is impossible.  In principle, any algorithmic procedure can be
done by hand, and as such is susceptible to (a possibly proper subset
of) human understanding.
----------
He continues:

"he would know that he is still not doing 3." WHAT?!?!
----------

Taking my example above, I am saying that someone could be doing the
calculations I gave, yet if you asked him, ``Did you understand what
the person said?'' he could honestly answer something like ``No, I
didn't; I didn't even know I was having a conversation.''

-Fred Gilham    gilham@csl.sri.com

ele@cbnewsm.ATT.COM (eugene.l.edmon) (01/10/90)

In article <2762@bingvaxu.cc.binghamton.edu> cjoslyn@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
>
>.  No animal behavior, let alone human behavior resulting from
>thought, is strictly deterministic.  


What makes you so sure about this?  The latest argument on this
score seems to be Penrose's in The Emperor's New Mind.  I think
he failed to make the case.


-- 
gene edmon    ele@cbnewsm.ATT.COM

cjoslyn@bingvaxu.cc.binghamton.edu (Cliff Joslyn) (01/10/90)

In article <8405@cbnewsm.ATT.COM> ele@cbnewsm.ATT.COM (eugene.l.edmon) writes:
>In article <2762@bingvaxu.cc.binghamton.edu> cjoslyn@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
>>.  No animal behavior, let alone human behavior resulting from
>>thought, is strictly deterministic.  
>What makes you so sure about this?  The latest argument on this
>score seems to be Penrose's in The Emperor's New Mind.  I think
>he failed to make the case.

I haven't read Penrose yet, but I believe my point is obvious, almost
trivial: the behavior of animals is unpredictable, and not just at the
quantum level.  They move of their own accord; each cell in their bodies
moves of its own accord.  Only in a Skinner box is complete
predictability even reasonably approximated.
-- 
O------------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large, cjoslyn@bingvaxu.cc.binghamton.edu
| Systems Science, SUNY Binghamton, Binghamton NY 13901, USA
V All the world is biscuit shaped. . .