[comp.ai] Polanyi in the Chinese Room.

caasnsr@nmtsun.nmt.edu (Clifford Adams) (04/27/89)

[I would have posted this earlier, but our site was off the net for
about a month.]

	About a month ago I finished the following paper for a
Philosophy of Technology class.  The purpose of the paper was to
summarize some works on the philosophy of science and technology, and
to relate them to personal experiences.  I closely followed the recent
comp.ai debate on the Chinese room argument, and tied the argument to
my thesis on "real" understanding and learning.  My major reference
for the ideas in this paper is _The Study of Man_ by Polanyi, where I
first learned of the concepts of personal knowledge.


	I left the "texification" in the paper, but it should still be
readable.  I would enjoy any replies, especially replies from the
people who originally participated in the USENET discussion.  The
paper does get better after its initial paragraphs, which are simply
an introduction. (at least I hope it does :-)


	This paper Copyright 1989 by Clifford A. Adams.
	Permission is granted to duplicate it provided that this
	copyright notice is included.

---------- Cut Here ----------
\font\twelverm=cmr12
\twelverm
\font\seventeenrm=cmr17
\baselineskip=24pt
\parskip=24pt plus 2pt minus 2pt

\centerline{\bf \seventeenrm Artificial Intelligence and Learning}
\centerline{\bf \seventeenrm A Changing Perspective}
\medskip
\centerline{\rm by Clifford Adams}
\bigskip
	I have had an interest in Artificial Intelligence for many
years.  My interest began when I played with Eliza, the program which
simulates a Rogerian analyst.  It seemed to have a certain kind of
``life'' to it that other programs didn't have.  I was puzzled because
I knew exactly what instructions were in the program, yet the
aptness of most of Eliza's responses was amazing.  I soon started
adding features and information to Eliza in hopes of having real
conversations with it.  My additions contributed very little to the sense of
really conversing, and I soon abandoned the project.  At the time, I
saw Artificial Intelligence as a field that simply needed to have
problems solved.  When the problems were solved, intelligence would
arise.
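
	(For readers who have never looked at Eliza's insides: the
heart of the technique is keyword matching plus pronoun
``reflection''.  A minimal sketch in Python follows; the patterns and
responses are invented for illustration and are not Weizenbaum's
actual script.)

    import re

    # A few illustrative Eliza-style rules: each pairs a keyword
    # pattern with a response template that echoes the match back.
    RULES = [
        (re.compile(r"\bI feel (.*)", re.IGNORECASE),
         "Why do you feel {0}?"),
        (re.compile(r"\bI am (.*)", re.IGNORECASE),
         "How long have you been {0}?"),
        (re.compile(r"\bmy (.*)", re.IGNORECASE),
         "Tell me more about your {0}."),
    ]

    # First-person words are "reflected" into second person before
    # being echoed; this produces much of the illusion of conversing.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(word.lower(), word)
                        for word in fragment.split())

    def respond(sentence):
        for pattern, template in RULES:
            match = pattern.search(sentence)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # stock reply when nothing matches

    print(respond("I feel trapped by my own program"))
    # -> Why do you feel trapped by your own program?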

	Soon after I arrived at Tech I learned about many spectacular
programs which solved many of the interesting problems.  I thought
that the AI community was most of the way toward solving the needed
problems, and that intelligent computers would soon become a reality.
Then I noticed that no real or measurable progress had been made
toward the final goal of AI.  I learned about the Turing test, and
believed that any computer able to pass that test must surely be
intelligent.  ``No problem,'' I thought.  ``All that's needed is a bit
of integration of the solutions.''  The integration then became a hard
problem.

	It was at this point that I decided not to make AI the core of
my Computer Science major, although I still have a strong interest in
the field.  The first part of this paper will explain how my views on
AI have been affected by the works of Heisenberg, Kuhn, and most
importantly Polanyi.  The remainder of the paper explains how
Polanyi's writings on understanding have changed my views of learning
and comprehension.

	Artificial Intelligence has been a field deeply divided by
competing paradigms.  There are no unifying theories or experimental
procedures in the field.  One of the greatest problems is that
researchers often do not agree on what AI even means!  This is unlike
fields such as chemistry and physics, where there is broad agreement
about which problems belong to the field.  Anything from problem-solving to
creativity to Barry Kort's ``ethical systems'' is a research topic for
someone in the field.

	The most obvious split in the field is between those who
attempt to recreate natural intelligence and those who attempt new
approaches.  Natural intelligence arises from neurons, so those in the
``natural'' camp often attempt to discover how these neurons are
organized to create intelligence.  They then try to organize ``neural
nets'' of simulated neurons in order to recreate the processes.  The
results have been amazing for small problems such as recognition or
``learning'' of a specific type of action, but no large-scale
reasoning processes have been simulated successfully.
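
	(For concreteness, here is a sketch of the scale of problem
involved: a single simulated neuron learning the logical AND
function.  The Python below is illustrative only; the learning rate
and epoch count are arbitrary choices.)

    # One simulated neuron (a perceptron) learning logical AND -- the
    # small scale of "learning" at which neural nets have succeeded.
    samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    w = [0.0, 0.0]   # connection weights
    b = 0.0          # bias (threshold)
    rate = 0.1       # learning rate, an arbitrary choice

    def output(x):
        # Fire (1) when the weighted input sum exceeds the threshold.
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    for _ in range(25):                    # a few training epochs
        for x, target in samples:
            error = target - output(x)     # perceptron learning rule:
            w[0] += rate * error * x[0]    # nudge each weight so as
            w[1] += rate * error * x[1]    # to reduce the error
            b += rate * error

    print([output(x) for x, _ in samples])  # -> [0, 0, 0, 1]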

	The second camp, which interests me more, attempts the
creation of intelligence by means that are more ``artificial'' than
neural nets.  This is also called the symbolic approach to Artificial
Intelligence.  The idea behind this method is that intelligence can be
modeled or created by formal means.  This kind of view is often held
by physical scientists such as Heisenberg, who believe that the
universe can be explained by Platonic formal symbols.

	Polanyi argues in {\it The Study of Man} that the
processes of understanding and intelligence are not monolithic.  He
shows that two kinds of understanding are important: explicit,
rule-based understanding and tacit, experience-based understanding.
Polanyi then states several times that tacit understanding is a
personal act which cannot be reproduced by strictly formal
operations.

	This conflicts with the view of many AI researchers who
believe in ``strong AI''.  They believe that it will some day be
possible for a machine to have ``true'' understanding in the same
sense that people do.  Polanyi says that such a goal is not reachable,
because formal operations are inadequate for tacit understanding.  One
reason for the impossibility is that a machine makes no distinction
between Polanyi's ``focal'' and ``subsidiary'' awareness.  The machine cannot
focus on the whole without focusing on the parts.  A computer either
knows something or it does not--there is no ability to focus on a
comprehensive whole while having a subsidiary awareness of the
particulars.  More basically, because computers lack the ability to
experience (as opposed to simply recording), they lack the foundation
of tacit understanding.  Therefore the computer cannot understand as
people understand.

	Various thought experiments have been proposed as tests for
the existence of intelligence or understanding.  One of the tests, the
Turing test (named after Alan Turing), is often cited by AI researchers
as being a good test for understanding.  The idea is that if a person
cannot tell the difference between conversations with a human and
conversations with a machine, the machine has demonstrated
intelligence equal to a human being.  The test appears sound, since
holding a conversation with a person seems to require human
intelligence.

	The main argument against the Turing test is the Chinese Room
problem posed by Searle.  He said that a person could be placed in a
room, given explicit rules of action which do not require
understanding (such as ``write squiggle when you see loop''), and
given Chinese text to process.  If the rules are correct, the person
in the room will write good Chinese answers, but will not understand
the questions or Chinese in general.
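
	(The rulebook amounts to pure symbol lookup.  A minimal sketch
in Python, with made-up tokens standing in for the Chinese
characters; the rules are invented here, not taken from Searle's
presentation.)

    # The rulebook as pure symbol manipulation: each rule maps input
    # shapes to an output shape with no reference to meaning.
    RULEBOOK = {
        ("squiggle", "loop"): "squoggle",
        ("loop", "squiggle"): "swirl",
        ("squoggle",): "loop",
    }

    def person_in_room(symbols):
        # The person follows the rules mechanically; nothing here
        # depends on understanding what any symbol means.
        return RULEBOOK.get(tuple(symbols), "squiggle")

    # The room produces a well-formed "answer" without understanding.
    print(person_in_room(["squiggle", "loop"]))   # -> squoggle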

	The Chinese Room argument is one of the most misunderstood
[sic] arguments in Artificial Intelligence.  The room behaves as if it
understands, yet it does not understand.  Some proponents of the
Turing test say that if the machine acts as if it understood, then the
machine actually did understand.  This is due to the confusion between
explicit and tacit understanding.  It may be fair to say that the
Chinese room {\it explicitly} understands, but it definitely does not
understand tacitly.  The Chinese room argument shows that it is
impossible to know by symbolic means alone whether a machine is doing
anything (such as tacit understanding) beyond manipulating symbols.

	One enhancement to the Turing test which defuses the Chinese
room is Stevan Harnad's Total Turing Test.  Harnad
proposed a test in which a computer is placed inside a robot which
looks and acts exactly like a human being.  If a person cannot tell
the difference between the robot and a human, even after extensive
interaction, then one must conclude that the robot is truly
intelligent.  The reasoning behind this is simply the Other Minds
problem--if you can't tell the difference between the robot and a
human, and the human is assumed to have a mind, then the robot should
be considered to have a mind.

	Learning is one of the most important facets of understanding.
Learning is the process by which a (person {\it plus} knowledge)
becomes a (person {\it with} knowledge).  The ``plus'' indicates that
the person has the knowledge in some form (such as the rulebook in the
Chinese Room), but does not tacitly understand it.  When learning
occurs, the knowledge becomes an integral part of the learner.
Explicit knowledge is easy to forget, but tacit knowledge changes the
owner of the knowledge.

	Tacit knowledge is often hard to develop.  Learning to ride a
bicycle is a common example of tacit learning.  None of the advice or
suggestions really make sense until one ``gets the idea'' (learns) how
to balance.  Another example I am experiencing is in folk dancing.
The moves often happen too quickly for much thought to occur, so the
patterns must be learned tacitly, by ``teaching the feet, not the
head''.  Polanyi also recognizes that conscious thought may not be
appropriate for some learning situations.

	Public education has exposed me to many new experiences and
requirements.  Most of what I remember has been that which I have
learned both explicitly and tacitly.  I obtained little value from
some courses that I didn't like.  For those courses I only ``learned
the tests'', which was exclusively a process of explicit
understanding.  In some courses (especially math) I had a ``feel''
for the material, but was unable to solve problems because of a lack
of explicit understanding.  I experienced both problems in a Spanish
class.  I
learned the language explicitly for the first year, but I didn't
really understand the language.  My thoughts were closer to those of
the man in the Chinese room than to those of a native speaker.  Then I
suddenly
understood the language.  I could think in the language and speak
Spanish without mental effort.  Unfortunately, I knew very little of
the language at that point, and the novelty of thinking in two
languages soon decayed, to the point where I can only remember what it
was like to think in another language.

	I have also noticed the differences in understanding at Tech.  
Many people who choose Computer Science as a major don't know that a
large amount of tacit understanding is required.  Programming a
computer is like speaking in a language.  The transfer of ideas
between languages must occur smoothly and quickly in order to program
efficiently.  Some people seem to believe that they can simply program
``by the rules''.  They are then surprised when they need to design
and write a program by themselves, a nearly impossible task without a
tacit understanding of computers and programming languages.  Those who
understand can simply write code without specific thought, with a
focal awareness of the code segment, and only a subsidiary awareness
of particular lines or statements.  Pieces of code begin to look
``good'' or ``bad'' to a programmer depending on whether they fit her
mental models.  Errors in the program quickly become obvious, instead
of needing an exhaustive search.  In general, the tacit understanding
is what makes computer programming an enjoyable activity, rather than
a tiresome chore.

	In conclusion, Polanyi's work has made a major change in the
way I think about understanding.  I have had a tacit understanding of
many of his views, but his papers allowed me to explicitly discover
the difference between tacit and explicit understanding.  I also now
realize that this paper is mostly a test of whether I have tacitly
understood the texts used in class or if I am just parroting explicit
knowledge.
\bye

-- 
 Clifford A. Adams    ---    "I understand only inasmuch as I become."
 caasnsr@nmt.edu            ...cmcl2!lanl!unm-la!unmvax!nmtsun!caasnsr
 (505) 835-6104 | US Mail: Box 2439 Campus Station / Socorro, NM 87801

bwk@mbunix.mitre.org (Barry W. Kort) (05/01/89)

I enjoyed reading Clifford Adams' essay, "Artificial Intelligence
and Learning--A Changing Perspective."

I find myself increasingly amused by the debate over "True Understanding."
After reading Feynman's anecdotes about Brazilian physics students
(in _Surely You're Joking, Mr. Feynman_), I now differentiate between
shallow understanding and deep understanding.  I believe there is no
theoretical limit to the depth of understanding one can achieve.

I find the word "understanding" a bit problematical, because I don't
fully understand what the word means.  I prefer the word "comprehension"
because its etymology is clearer.  "Comprehend" means "to capture with".
I capture knowledge by constructing a mental model that resembles
(in both structure and behavior) the object of my contemplation.
Often, I need to construct a physical model to play with, or a computer
model to interact with before I can get a good mental picture.

Using Lisp as a metaphor, I measure the depth of my understanding
by the number of levels of "chunking" or decomposition between my
overall model and its atomic constituents.  I also measure my
depth of understanding by the number and richness of concrete
instances of an abstract model.  Here is where computers and AI
are a bit behind humans.  When it comes to models, analogies,
metaphors, and parables, computers are just getting started.
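
(A toy illustration of that measure, sketched in Python rather than
Lisp: the levels of chunking are just the nesting depth of a
structure.  The "bicycle" model below is invented for the example.)

    # Depth of "chunking": how many levels of decomposition separate
    # an overall model from its atomic constituents.
    def chunk_depth(model):
        if not isinstance(model, (list, tuple)):  # an atomic part
            return 0
        return 1 + max(chunk_depth(part) for part in model)

    bicycle = ("bicycle",
               ("frame", "tube", "weld"),
               ("wheel", ("hub", "bearing"), "spoke", "rim"))

    print(chunk_depth(bicycle))   # -> 3 levels from whole to atoms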

Symbolic representation is in its infancy.  For me to understand
something well, I need multiple interchangeable models.  I need
a verbal or mathematical representation, which I can manipulate
formally, and I also need a visual or geometric representation
which I can manipulate like a cartoon in my head (or on my computer
graphics display).  To my mind, the next great advance in computer
understanding comes with the computer's ability to project an
animated color image that graphically represents (re-presents)
the structure and behavior of a system by transforming a symbolic
(i.e. ASCII) representation into a visual (or audible) form.

Bi-directional information-preserving transformations are the
central tool of modeling.  We see examples in Fourier Transforms,
Analytical Geometry, Digital Signal Processing, Holograms, Analog
Computers, and various Duality Theories.
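
(A small demonstration, sketched in Python with NumPy: a signal is
carried into the frequency domain and recovered intact, so the
transformation loses no information in either direction.  The signal
values are arbitrary.)

    import numpy as np

    # A bi-directional, information-preserving transformation: the
    # discrete Fourier transform and its inverse.
    signal = np.array([1.0, 3.0, -2.0, 0.5, 4.0, -1.0, 2.0, 0.0])

    spectrum = np.fft.fft(signal)       # forward: time -> frequency
    recovered = np.fft.ifft(spectrum)   # inverse: frequency -> time

    # Both representations describe the same object; the round trip
    # recovers the original to floating-point precision.
    print(np.allclose(signal, recovered.real))   # -> True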

--Barry Kort