[talk.philosophy.misc] Understanding is not a function of behaviour

ftoomey@maths.tcd.ie (Fergal Toomey) (04/10/90)

I've been doing some thinking about the "understanding" problem since
I posted the first article about the chess game. I think the chess game
argument is a fairly good one, but evidently there are plenty of people
who find it less than convincing. Therefore, I'm going to give a new,
stronger argument for my position, put my hard hat on, and dig in for
a long, long summer.	:-)

The discussion is about whether or not understanding can be inferred
from behaviour, i.e. if I win a lot of chess games, do I necessarily
understand chess? I instinctively feel that behaviour does not imply
understanding, mainly because of the many unfortunate conclusions that
position leads you to. For example, a plane can fly, and I cannot. Does
this mean that a plane understands aerodynamics better than I do?
I would say that, certainly, the plane obeys the laws of aerodynamics
in a way in which I do not. Similarly, a simple chess program obeys certain
rules of thumb when formulating a strategy, but this does not mean that
it understands those rules any more than an aeroplane understands
aerodynamics.

Now, as human beings, we understand things; I think we can all agree
on that. Therefore, part of our behaviour, which may or may not be
apparent to an observer (my opponents would say that it is), is to
understand. Whether or not this particular part of our behaviour is
observable by an outsider, it is certainly observable to us ourselves.
Hence there is no doubt that understanding is a part of human
behaviour.

In fact, on the whole, we succeed quite well in understanding many
different things. Therefore, if you believe that understanding is
implied by behaviour, then you must believe that we humans understand
how to understand, in the same way that a chess-playing computer
understands how to play chess. But this isn't true. We do not
understand how to understand. If we did, the problem of the Chinese
Room would never have arisen, and AI would have been finished off years
ago. Chess would be no problem: Garry Kasparov could have written
a book explaining how to understand chess, and we'd all be grandmasters.

So if we assume from our behaviour that we can understand how to
understand, then we are faced with the contradiction that, from our
behaviour, it is clear that we do not understand how to understand.
So we must reject our hypothesis, and state that behaviour does not
necessarily imply understanding.

Your opinions on this argument are welcome.

Fergal Toomey.

Imagine something you can't imagine. Now explain how you imagined it.

erich@eecs.cs.pdx.edu (Erich Boleyn) (04/11/90)

In article <1990Apr10.130006.6780@maths.tcd.ie> ftoomey@maths.tcd.ie (Fergal Toomey) writes:
...
>So if we assume from our behaviour that we can understand how to
>understand, then we are faced with the contradiction that, from our
>behaviour, it is clear that we do not understand how to understand.
>So we must reject our hypothesis, and state that behaviour does not
>necessarily imply understanding.

   Nicely put, but I feel that there is something missing here.  For most
models of AI, we use various aspects of human intelligence (and/or
behavior, as the case may be) as our goals, to see if our models achieve
the same thing.  Have you ever wondered why we ask if someone REALLY
understands a problem?  In an intuitive sense, we know that understanding
involves being able to do more than paraphrase: to actually transform
your knowledge (through analogy, or other such means) into other forms, or to
create relationships that do not exist explicitly in the knowledge (but
may exist implicitly).  With human experts we don't tend to question this,
because we assume that someone else has done the checking to see if they
understand (otherwise they would not be considered experts), and even then,
if someone does not seem able to perform adequately under varying conditions,
we take a closer look at their competence.

   (Posted a version of this just before, but am adding on...)

   I feel that understanding is not the process, and as such I am agreeing
with Fergal; but I also feel that to have understanding (in any way that we
can recognize) there must be an assimilation with the ability to transform
the knowledge and to relate the transformed knowledge to other knowledge.
This is the crucial ability that allows chess masters to do so well: they
can transform strategies and amend their strategies in-line (this
would seem to mean that the rules must be dynamic, but that would be
required to have what we call intelligence in the first place).  The
tasks that humans accomplish can in some cases be reduced to a nice
set of rules (and are, in some cases, as experts find processes that make
their work easier), but the "understanding" part is the ability to
re-formulate the rules when an inefficiency or incorrect result develops.
This could be accomplished by rules that change the rules, etc., but this
is pretty much just (in the end) trying to make an intelligent system for
"understanding" your problem and solving it.

   Erich

   ___--Erich S. Boleyn--___  CSNET/INTERNET:  erich@cs.pdx.edu
  {Portland State University}     ARPANET:     erich%cs.pdx.edu@relay.cs.net
       "A year spent in           BITNET:      a0eb@psuorvm.bitnet
      artificial intelligence is enough to make one believe in God"

gerry@zds-ux.UUCP (Gerry Gleason) (04/12/90)

In article <1990Apr10.130006.6780@maths.tcd.ie> ftoomey@maths.tcd.ie (Fergal Toomey) writes:
>position leads you to. For example, a plane can fly, I cannot. Does
>this mean that a plane understands aerodynamics better than I do?

No, it means it flies better.  Understanding aerodynamics is like the
"understanding understanding" you speak of below.

>Now, as human beings, we understand things, I think we can all agree
 . . .
>Hence there is no doubt that understanding is a part of human
>behaviour.

Ok, I can accept that.

> . . . . But this isn't true. We do not understand how to understand.

But neither does this imply that we know nothing about how to understand.
Obviously we know something, since education is at least partially successful.

>So if we assume from our behaviour that we can understand how to
>understand, then we are faced with the contradiction that, from our
>behaviour, it is clear that we do not understand how to understand.
>So we must reject our hypothesis, and state that behaviour does not
>neccessarily imply understanding.

Just one question, though: is understanding necessary for intelligence?
When you are talking about understanding, you are talking about a particular
capability requiring intelligence.  Intelligence is not a generic capacity
that can be applied in any domain once you have it; it is rather a continuous
field of attributes without any clear dimensions or limits.  With this in
mind, now consider various animals: clearly at least some of them display
capacities we cannot match with present AI, but understanding?

Also, I claim that domains such as understanding in general, or understanding
the possible relationships on a chess board in particular, are "open" domains
in the sense that there is no unique, complete description of the domain.  In
that case we can only talk of degrees of understanding, since complete
understanding does not exist.

Gerry Gleason