[talk.philosophy.misc] Why the Chess game still doesn't convince

kp@uts.amdahl.com (Ken Presting) (04/10/90)

In article <1990Apr8.113052.14079@caen.engin.umich.edu> zarnuk@caen.engin.umich.edu (Paul Steven Mccarthy) writes:
>>(Ken Presting) writes:
>>The novice's explanation could be quite informative, and even rational,
>>but (by hypothesis) it could not mention much about *chess*, its rules,
>>possible strategies, etc.
>
>It seems to me that ...mumble... code ...mumble... list ...mumble...
>(coffee) ...mumble... text ...mumble... if-then ... eureka!  Not only
>does my "brute-force" algorithm continue playing chess at the same mediocre
>level, but now it explains _why_ it makes those mediocre moves!  It even
>cites famous matches and noted authors!  

This is a very good ... mumble ... (:-).

A related problem arises in the philosophy of action.  A typical
definition of "intentional action" is "bodily motion caused by beliefs
and desires", but this definition runs into trouble when a process which
is not clearly cognitive connects a belief to an action.

One of Davidson's examples involves two mountain climbers.  Suppose one
fellow is in a precarious position, holding a safety rope for his
buddy, who is in a still more precarious one.  If the low man on the rope
should slip and cry out, then the rope-holder might come to believe that
unless he lets go of the rope, he might lose his own footing.  This
belief (and the desire to stay alive) might unnerve him so much that he
*does* let go of the rope.  But did he intentionally abandon his buddy?
The definition says yes, but this seems to be a mistake.

The corresponding problem with explanations and understanding is that
we normally suppose that someone who provides an explanation is not merely
exhibiting a skill *additional* to the expertise they display in practice.
We expect that when Kasparov explains a chess game, his ability to
explain is *part* of his ability to play.

Paul's example of the program with the bolt-on explanation feature shows
that this expectation is simplistic.  The conclusion to be drawn, I think,
is not that the capacity to explain one's choices is irrelevant, but
that the capacity to explain is not a sufficient condition for what we
call "understanding".

As far as I know, there have been no elegant solutions to the problem
of stating such a sufficient condition, either in the case of intentional
action or of understanding.  However, Fred Dretske has a recent book
called _Explaining Behavior: Reasons in a World of Causes_ which is an
attempt to do so.  I have looked at a few pages in this book, but have not
yet read it through.  Dretske's _Knowledge and the Flow of Information_
is excellent, and rather influential in its field, so I would expect his
new book to be worthwhile also.


>>I think it's possible to give a more precise account of understanding,
>
>I would appreciate seeing it.  I sincerely dislike "fuzzy" terms.
>
>---Paul... ("Just call me 'rocky'.")

OK - I feel like I've taken a solid left hook!  Just trimming the fuzz
on the concept of "understanding" is very tough.  The "precise account"
I have in mind for "understanding" does *not* preserve the intuitive
connection between understanding an activity and being able to perform
it.

Fortunately, the Chinese room argument can be re-stated without any
appeal to the concept of "understanding".  This is what I've tried to
do with the compiler example.  I suppose we'd all agree that the question
of whether a compiler understands its source language is irrelevant.
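
To make that concrete, here is a toy "compiler" in Python, my own
illustration rather than any real system: it translates infix
arithmetic into postfix stack code by mechanical rule, and there is
plainly no place in it where "understanding" could get a grip.

def compile_to_postfix(tokens):
    """Shunting-yard, restricted to + and *; the operator-precedence
    table is the whole of the translator's 'semantics'."""
    prec = {"+": 1, "*": 2}
    out, ops = [], []
    for tok in tokens:
        if tok in prec:
            # Pop any pending operators that bind at least as tightly.
            while ops and prec[ops[-1]] >= prec[tok]:
                out.append(ops.pop())
            ops.append(tok)
        else:
            out.append(tok)   # a numeral goes straight to the output
    out.extend(reversed(ops))
    return out

print(compile_to_postfix("2 + 3 * 4".split()))
# -> ['2', '3', '4', '*', '+']

The translation is perfectly correct, and correctness is all we ask of
it; the question of whether the routine understands arithmetic simply
never arises.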

Ken Presting