[talk.philosophy.misc] Why the rigged chess match doesn't convince

kp@uts.amdahl.com (Ken Presting) (04/07/90)

(Watch the Followup-To: line)

In article <1990Apr6.234803.17071@caen.engin.umich.edu> zarnuk@caen.engin.umich.edu (Paul Steven Mccarthy) writes:
>>(Ken Presting) writes:
>>I have a (very small) problem with viewing novice+Kasparov as a "system",
>> [ ... humans _choose_ to play (and can walk away at will)
>> ... the computer _must_ play (laws of physics, no choice) ...](pm)
>
>You are, of course, assuming that the humans have "free-will".  A topic
>that I would like to immediately drop now that the assumption has been
>identified.  

OK - you hold it down, and I'll kick it - I'm a major determinist, all
the way down to the little hidden variables (bless their hidden little
hearts, and the hidden values in 'em).  I'll take my non-local lumps
back on sci.philosophy.tech, thank you.

I mentioned this "very small problem" because there are some who hold
a position called "anomalous monism" (I do not) which claims that
while all events may be determined by easily statable laws when the
events are identified in *physical* terms, if descriptions in
*psychological* terms are supplied to identify the events, there can be
no law-like generalization to allow inference from cause to effect.

I think anyone who's not up to a few lumps ought to have a small problem
with the systems reply.  Local random lumps would do, but you'll break
the little hearts of the hidden variables - they die if nobody believes
in them.   :-)

***---  Back to reality ...

>As far as whether or not the computer can choose do something else,
>consider playing a game of chess on a time-sharing system, . . .

I knew I should have mentioned this obvious counterexample, which you
are correct to bring up.  The chess-playing thread could easily get
killed by a kernel short of descriptors, or another user, or swapped out
to never-never land ...
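The point is easy to demonstrate on any Unix box: whether the chess
process keeps running is settled by the kernel's resource limits, not
by anything the program "decides".  A minimal sketch (the rlimit value
of 16 and the use of /dev/null are just illustrative choices):

```python
import os
import resource

# Lower this process's own file-descriptor limit, so the kernel's
# refusal is cheap to provoke.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (16, hard))

fds = []
outcome = None
try:
    while True:
        # The program "wants" to keep going; the kernel has other ideas.
        fds.append(os.open("/dev/null", os.O_RDONLY))
except OSError as e:
    outcome = e.errno   # EMFILE: the kernel, not the program, ended the loop
finally:
    for fd in fds:
        os.close(fd)
```

The loop terminates with EMFILE no matter what the code's own goals
are -- which is all the counterexample needs.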

>dollars to the appropriate level of hardware -- and let the computer
>decide what _IT_ thinks is important.  (I believe Doug Lenat -- "the
>computer that 'discovered' primes" -- is doing just that.)  Prioritizing,
>choosing goals to pursue / discard is an essential element of
>"intelligence", but it seems well within the reach of the symbol
>hypothesis.

Agreed completely.  I'd say that the issue reduces to whether a
deterministic account can be given of the conditions under which a
process will assume a given trajectory (in state space, phase space, or
performance space, whichever is appropriate).

>
>>Ken Presting  ("Support the Symbolese Liberation Army - Out With Symbols")
>
>Symbol-processing is an insufficient model in my mind as well, but
>let's take special care when trying to define its limits.  

I meant "out" in the sense of "output".  I almost said "In & Out With
Symbols", but for some reason thought better of it ...  Serves me right,
not funny enough, again.

I see no limitation to symbol processing (except speed) as long as "what
TM's do" defines "symbol processing".  I just think that "being a symbol"
is an *intentional* property, and cannot be applied to any activities
of existing AI demo projects.  Lenat is close, but MVS is closer.
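For what it's worth, "what TM's do" is a mechanically trivial notion
-- which is part of why I think the interesting question is the
intentional one, not the computational one.  A minimal sketch of a
one-tape machine (the state names, blank symbol, and the toy
bit-flipping rule table are all made up for illustration):

```python
def run_tm(tape, rules, state="start", pos=0, accept="halt", steps=1000):
    """Run a one-tape Turing machine.  `rules` maps (state, symbol) to
    (symbol_to_write, head_move, new_state); head_move is -1 or +1."""
    cells = dict(enumerate(tape))       # sparse tape, blank = "_"
    for _ in range(steps):
        if state == accept:
            break
        sym = cells.get(pos, "_")
        write, move, state = rules[(state, sym)]
        cells[pos] = write
        pos += move
    return "".join(cells[i] for i in sorted(cells))

# A toy machine: flip 0s and 1s left to right, halt on the first blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}

print(run_tm("0110", flip))   # -> 1001_
```

Nothing in that table is a *symbol of* anything; it's marks and
rewrite rules, which is exactly the gap I'm pointing at.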

>
>---Paul...  (I'll take the symbols you throw out -- until I get something
>             better. "Spare symbols, anyone?" :-)


Ken Presting  ("Yes, we have no symbols's, we have no symbols's today")