[talk.philosophy.misc] related to

erich@eecs.cs.pdx.edu (Erich Boleyn) (04/11/90)

In article <1990Apr3.162019.27598@maths.tcd.ie> ftoomey@maths.tcd.ie (Fergal Toomey) writes:
>It seems to me that "understanding" is a property of the algorithm
>(or non-algorithmic procedure, if you believe humans are non-algorithmic)
>you use. Exactly what it is seems impossible to pin down. It seems to
>incorporate some kind of meta-analysis of the problem you're solving,
>which allows one to find solutions relatively quickly. Thoughts, anyone?

   I would say that "understanding" something is shown not by exact
demonstration, but by assimilation of these concepts into your ability
to accomplish other tasks (or just relating knowledge X to knowledge Y).
Would we say someone understood what we said if they could repeat it
verbatim, or even paraphrase it?  No, we would ask them for the implications
or related methods, or especially an analogy (which is a transformation of
the knowledge into another system!).  For instance, one understands a
theorem in mathematics when one can prove it (at least intuitively,
i.e. know how it works!), not just use it!

   We say the grand master knows the game because he can make extensions
in the middle of a game if he encounters something that his general method
does not cover.  His ability to extend the game is in a way the understanding,
because (in the case of chess) he "understands" the rules well enough to know
what to add to them when necessary.

   A computer with a coded algorithm for a task (or even a human with
instructions) does not qualify under this idea of understanding.  I would
say that the ability to assimilate, transform, and relate knowledge is
necessary for understanding (good understanding, at least ;-)


   ___--Erich S. Boleyn--___  CSNET/INTERNET:  erich@cs.pdx.edu
  {Portland State University}     ARPANET:     erich%cs.pdx.edu@relay.cs.net
       "A year spent in           BITNET:      a0eb@psuorvm.bitnet
      artificial intelligence is enough to make one believe in God"

cs4g6at@maccs.dcss.mcmaster.ca (Shelley CP) (04/12/90)

In article <2643@psueea.UUCP> erich@cs.pdx.edu (Erich Boleyn) writes:
>In article <1990Apr3.162019.27598@maths.tcd.ie> ftoomey@maths.tcd.ie (Fergal Toomey) writes:
>>It seems to me that "understanding" is a property of the algorithm
>>(or non-algorithmic procedure, if you believe humans are non-algorithmic)
>>you use. Exactly what it is seems impossible to pin down. It seems to
>>incorporate some kind of meta-analysis of the problem you're solving,
>>which allows one to find solutions relatively quickly. Thoughts, anyone?
>
>   I would say that "understanding" something is shown not by exact
>demonstration, but by assimilation of these concepts into your ability
>to accomplish other tasks (or just relating knowledge X to knowledge Y).
[...]
>   We say the grand master knows the game because he can make extensions
>in the middle of a game if he encounters something that his general method
>does not cover.  His ability to extend the game is in a way the understanding,
>because (in the case of chess) he "understands" the rules well enough to know
>what to add to them when necessary.
>
>   A computer with a coded algorithm for a task (or even a human with
>instructions), does not qualify under this idea of understanding.  I would
>say that the ability to assimilate, transform, and relate knowledge is
>necessary for understanding (good understanding, at least ;-)

  I believe that there is a kernel of truth in both of the
above statements, which could be illuminated if a finer distinction is
made within the concept of "understanding".  The distinction I propose is
between "understanding" and "understanding in depth".

  Mr. Toomey states that understanding is a property of 'the' algorithm
being applied to solve a problem.  As I have pointed out earlier, what
people generally mean here is the 'elegance' of the algorithm, as 
opposed to a mere brute-force approach.  A 'meta' level of analysis
is required for an 'understanding' algorithm.  I would argue that *any*
algorithm which can successfully solve a problem must be said to
*understand* it.  However, *one* algorithm can only understand a
problem in *one* way - the way it has been 'written' to work at any
given time.  Let me give an example.

  During a visit to my brother's farm, I helped him shoo a stray calf
back into a pen where its mother was calling for it. Now, cows are 
very unintelligent creatures.  Frightened by us, the calf ran in the 
direction of its mother but was confronted by a fencepost - one 
belonging to the pen.  As we looked on, it remained there for about
half a minute, with its forehead literally butted against the post
which it couldn't dislodge, mooing pathetically!  It could not
come up with the idea of going around the post (despite wanting to
very badly)!!  Presently, I moved up beside it and shooed it again
in the direction of the gate which it finally managed to go
through.  My point is that the calf had only one 'algorithm' for
getting from A to B - go in a straight line.  This method was 
inadequate for the circumstances and the calf was paralysed.

  You and I know of many ways of getting from A to B, too many for
us to consciously list, I would think.  Our redundancy (richness) of
problem-solving methods gives us "understanding in depth" for very
many problems.  Certainly a much greater depth than a calf could have!
Therefore, our intelligence (as a function of problem solving) is
proportional in some way to the number of methods we can apply to
get the job done - our "depth of understanding".

  The ability (mentioned by Mr. Boleyn) to extend concepts and 
solutions would be provided by algorithms which revamp old 
solutions to fit new data.  How this is done, I don't know.  But
it does provide the 'meta' level brought up earlier.

  Since we are possessed of so many, and possibly changing,
problem-solving methods, our consciousness of the process would be
garbled.  Each solution apparently can work concurrently with several
others to produce a chaotic state - which is very flexible.  The
chessmaster, through years of learning, has acquired innumerable
algorithms for playing chess in all possible situations, seeing many
options where the novice sees few (as in the calf story above).

  This leads up to the idea of basing an "understanding" scale on the
quality of the explanation given for a solution.  If we asked a 
brute force algorithm for an explanation of a particular move, we'd
likely get a pruned game tree.  The algorithm has *understood* and
solved the problem, but at a shallow *depth of understanding*.  If
we asked Kasparov for an explanation we would probably get an account
of the various solutions that he thought of (each separate algorithm's
solution) and how he decided among them (his 'meta' algorithm).  In
other words, the information about the functioning of the 'lower-level'
algorithms is hidden (although it can be inferred); who would want
to consciously try to keep track of so much detail?  We don't want
to have to remember how to breathe all the time, do we?  The fact
that Kasparov's explanation deals with 'meta' solutions to 
'intuitions' indicates a much greater *depth of understanding*!
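  The shape of such an explanation can be sketched too (again a toy of
my own making - the method names, moves, and scores are invented, not
real chess knowledge): lower-level methods each propose a solution, a
'meta' rule decides among them, and the explanation reports only the
meta-level choice:

```python
# Toy illustration of a 'meta' algorithm: lower-level methods each
# propose a move with a score and a reason; a higher-level rule picks
# one.  The explanation names only the meta-level choice, hiding the
# detail of how each proposal was computed.

def fork_finder(position):
    # Hypothetical lower-level method.
    return {"move": "Nd5", "score": 3, "reason": "it creates a fork"}

def pawn_grabber(position):
    # Another hypothetical lower-level method.
    return {"move": "Bxb7", "score": 1, "reason": "it wins a pawn"}

def meta_choose(position, methods):
    """Collect every method's proposal, keep the best, explain the choice."""
    proposals = [method(position) for method in methods]
    best = max(proposals, key=lambda p: p["score"])
    explanation = ("Considered %d candidate moves; chose %s because %s."
                   % (len(proposals), best["move"], best["reason"]))
    return best["move"], explanation

move, why = meta_choose("some position", [fork_finder, pawn_grabber])
```

Note that the explanation talks about the deciding, not about how
fork_finder did its work - which is just what I claim Kasparov's
account would look like.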

  To conclude, the concept of understanding cannot be measured in a
yes/no fashion, but by gradations based on the number of possible
solutions (flexibility) that can be computed, and how many
levels of scrutiny the solutions undergo.  This is what I mean by
"depth of understanding".

  Sorry to be so long-winded, but I thought this might lead to some
good discussion.  Any takers?  Please?

PS.  This line of thought suggests to me a distinction between
"understanding" and "learning". Further, do we learn how to 
understand, or understand how to learn?  Both?  I lean towards 
the second option.
-- 
******************************************************************************
* Cameron Shelley   *    Return Path: cs4g6at@maccs.dcss.mcmaster.ca         *
******************************************************************************
*  /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\ *