[talk.philosophy.misc] Why the Chinese Room doesn't convince

ray@bcsaic.UUCP (Ray Allis) (01/13/90)

John Searle has been telling this story for ten years and I have yet to hear
of a convert.  But, perhaps ironically, through analysis of its failure, it
will yet illuminate us. 

What *is* the difficulty with the Chinese Room story?  

Well, consider; the story is perforce posed by use of symbols; e.g. words on
a page (or a screen); words such as "understand" and "semantics".  Symbols
are after all an important means of communication among humans.

When I read this arrangement of symbols I build up an internal "picture" of
the situation: the room, the baskets of symbols, the man who "does not
understand Chinese".  I can picture "native Chinese speakers" coming up to a
window and passing in slips of paper with, perhaps, questions written on them
in Chinese.  Questions such as "How is the U.S. invasion of Panama different
from the U.S.S.R.'s invasion of Afghanistan?" or "Our daughter just informed
us she's pregnant by her uncle.  What should we do?"  But when I read that
even though nothing in the Room understands Chinese, the Room's "answers" to
the "questions" are "indistinquishable from those of a native Chinese
speaker", my imagination chokes; I can't "fit" that with the mental image I
have built up from the other premises.

To me, the situation described is internally inconsistent.  I can't bring
myself to believe that the Chinese Room will convince native speakers that it
understands Chinese *unless it really does understand Chinese*.  I am forced
to something like the "systems reply", i.e.  either the Room understands, or
it does not perform as well as a native Chinese speaker.  If the Room's
utterances are indistinguishable from those of a native Chinese speaker, then
it must understand about as much as that speaker does about the way the world
works, and the human condition.  This is basically a behaviorist view of
"understanding".

Could Searle *mean* something else by the word "understand"?  Maybe he means
the subjective experience of understanding?  For myself, I prefer a
subjective, experiential definition of the "Aha!" feeling when "things fall
into place", or the comfortable, secure feeling when there are few or no
unknowns in my environment, and for everyone else a behavioristic definition.
After all, I have no direct access to *their* experiences.  

Well, at bottom, my problem is that I cannot be certain that I *understand*
precisely what John *means* by the symbol "understand".  We cannot *know*
that we concur on the MEANING (semantics) of "understand".  Or it may be
that, as in my case, we hold more than one definition, and neither of us can
be certain which definition is intended in any given situation.  

There is nothing intrinsic in the words to determine their meaning.  This is
true in general for symbols.  We (John and I) must hope that we have
developed similar meanings, each through our individual experience, for the
symbols we use.  We depend on this assumption for communication with each
other.  We can send symbols back and forth, but not meanings.  

My inability to fit this set of premises into a coherent picture is not a
formal, logical problem.  It has nothing to do with the "shape" or
arrangement of the symbols, it is a perceived contradiction in the "mental
image" evoked (from my memories of experiences) by the premises.  

And this is the normal situation: we speak and write, we listen and read, and
nowhere is there logical certainty that we attach identical meaning to the
symbols.  In fact, we suspect that identity is quite unlikely, but with
practice, *from experience* we infer (induce) that we are communicating, and
assisted by feedback from our fellow conversants, we refine and sharpen until
we judge that we all "understand" (or give up in frustration).  That's the
situation each of us is in vis-a-vis the others.  (Wait a minute, are you
really people?)

Inside the Chinese Room there is only a "symbol system"; only marked pieces
of paper are manipulated; there are no meanings present.  There is no way for
the Chinese Room to "build up a picture" from an incoming "message" or
"question".  Actually these are "questions" only when they are outside the
Room, because their meanings exist only in minds.  Inside the Room there is
no possibility for contradiction and the notion of consistency does not
apply.  There is no "reality check".  (This is why I don't believe a Chinese
Room could fool anybody.)  Inside *any* symbol system it is the same; there
is no meaning present.  "Understanding" in the sense of "apprehending
meaning" clearly cannot exist inside a symbol system.  

So the Chinese room argument fails from the very error that it is intended to
illustrate.  We are handed a bag of symbols which are really not symbolic
*of* anything, and we are to "understand" that we have an illustration of
a hypothesis concerning whether certain kinds of systems can "understand",
and decide whether we agree and why.  As my son says, "Give me a *break*!"

I think the evidence is *overwhelmingly against* the hypothesis that symbol
systems are sufficient for the duplication of human intelligent behavior.  I
am always surprised that there are people still trying to build minds from
symbol systems!  I had come to this belief before the PSSH was stated as such
in the Turing Award Lecture,[1] and long before I encountered Searle's
Chinese Room story.

[1] Newell, A., and Simon, H. 1975. Computer Science as Empirical Inquiry: 
Symbols and Search. 10th ACM Turing Award Lecture. 

Four notes:

(1) The image is built up from MY EXPERIENCES.  My recall of some room and
some man, or some fuzzy, general image of each.

(2) Similarity (or difference, which is the same thing) is perceived among
the EXPERIENCES, not the symbols.

(3) That's what neuromorphic systems are good for, not "parallel computing"
but the acquisition, recording and recall of EXPERIENCE! 

(4) Isn't it obvious that diagnosis, model-based reasoning, natural language
understanding et al. MUST be futile with nothing but tokens; no MEANINGS?

rwojcik@bcsaic.UUCP (Rick Wojcik) (01/13/90)

In article <18883@bcsaic.UUCP> ray@bcsaic.UUCP (Ray Allis) writes:

>Well, at bottom, my problem is that I cannot be certain that I *understand*
>precisely what John *means* by the symbol "understand".  We cannot *know*
>that we concur on the MEANING (semantics) of "understand".  Or it may be
>that, as in my case, we hold more than one definition, and neither of us can
>be certain which definition is intended in any given situation.  

Ray's posting was excellent, but I would add a few more comments to this
point, which seemed to be his central point.  First of all, we do not know
what it is about human "wetware" that gives rise to subjective experiences
(e.g. Ray's mental pictures).  So it is difficult to claim that computing
machines can't somehow be made to have subjective experiences.  (I don't claim
that they can.  I claim that we don't know how to prove that they can't.)
Secondly, 'understanding' is not a binary predicate.  There is no true
threshold where one can say a human has 'understood' an expression or not
'understood' it.  There are many degrees and levels of understanding.  You can
even 'understand' foreign expressions if you infer their meanings correctly
from other cues (e.g. body language), not using linguistic cues at all.  For
example, if you don't know that Japanese "Hai" means agreement, but you see a
nod of the head, you might infer what it means.  Is this 'understanding' in
Searle's sense or not?  There is no way to tell, since the term is inherently
vague.
-- 
Rick Wojcik   csnet:  rwojcik@atc.boeing.com	   
              uucp:   uw-beaver!bcsaic!rwojcik 

weyand@csli.Stanford.EDU (Chris Weyand) (01/14/90)

ray@bcsaic.UUCP (Ray Allis) writes:

...

>Inside the Chinese Room there is only a "symbol system"; only marked pieces
>of paper are manipulated; there are no meanings present.  There is no way for
>the Chinese Room to "build up a picture" from an incoming "message" or
>"question".  Actually these are "questions" only when they are outside the
>Room, because their meanings exist only in minds.  Inside the Room there is
>no possibility for contradiction and the notion of consistency does not
>apply.  There is no "reality check".  (This is why I don't believe a Chinese
>Room could fool anybody.)  Inside *any* symbol system it is the same; there
>is no meaning present.  "Understanding" in the sense of "apprehending
>meaning" clearly cannot exist inside a symbol system.  

Why do you say this?  How is the human brain any different?  There are
simple processing units, neurons, being manipulated.  What do you mean when
you say there are no meanings present?  Also I'm confused by what you mean by,
"Inside the room there is no possibility for contradiction ..."

That a symbol system couldn't understand is not so clear to me.  Where is
your support for this claim?  Intuition?  If the mind is not a symbol system
what is it?  
Why couldn't the "Room" (Book + Searle) "build up a picture"?
Clearly there would have to be more going on than simply Searle reading 
from the book and returning answers.  The book would be constantly changing
based on its inputs/perceptions.  I take it for granted that Searle would 
be changing the book.  It couldn't possibly be a static system if we are
to believe that it can understand Chinese.  It seems to me that to respond
to a language such that others believed you understood the language takes
a general intelligence.  You have to have a great deal of information about
the world.

What if we make the book a complete mapping of the brain of an agent that
understands Chinese, i.e. a huge neural net.  And also give Searle the rules
that govern the behavior of each neuron.  Now we let Searle take the input,
present the input to the book (assume rules for manipulating the proper neurons
based on auditory signals), and follow the rules to propagate the effects.  
This would involve calculating the inputs to the neurons, the output firings,
and the weight updates and other such things that we believe occur in the
brain.  We'd also have to have some way for Searle to get output from the
Book, so we could assume he has the rules for the neurons that stimulate the
vocal cords and some means of simulating their effects, e.g. a mechanical
device that simulates the human vocal apparatus based on lip and tongue
movement etc., all of which is dictated by Searle's "simulation" of the
neural network.
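
To make that rule-following concrete, here is a minimal sketch (in C, since
any notation will do) of the kind of bookkeeping the Book might dictate: one
propagation step through a stored network.  This is not a claim about how the
brain actually works; the tiny network size, the squashing function, and every
name in it are assumptions for illustration only.

/* A minimal sketch (not a brain model) of the bookkeeping the "Book" would
 * dictate: one propagation step through a stored network.  Sizes, the
 * squashing function, and all names are illustrative assumptions. */
#include <math.h>
#include <stdio.h>

#define N_NEURONS 4          /* tiny toy network; a brain map would be vast */

double weight[N_NEURONS][N_NEURONS];   /* "rules" relating neuron to neuron */
double activation[N_NEURONS];          /* current state written on paper    */

/* One step of the rule-following Searle performs: for each neuron, sum the
 * weighted inputs from every other neuron and squash the result. */
void propagate(void)
{
    double next[N_NEURONS];
    for (int i = 0; i < N_NEURONS; i++) {
        double sum = 0.0;
        for (int j = 0; j < N_NEURONS; j++)
            sum += weight[i][j] * activation[j];
        next[i] = 1.0 / (1.0 + exp(-sum));   /* squashing function */
    }
    for (int i = 0; i < N_NEURONS; i++)
        activation[i] = next[i];
}

int main(void)
{
    activation[0] = 1.0;                 /* pretend "auditory" input arrives */
    weight[1][0] = 0.8; weight[2][1] = 0.5; weight[3][2] = 0.9;
    for (int step = 0; step < 3; step++)
        propagate();
    for (int i = 0; i < N_NEURONS; i++)
        printf("neuron %d: %.3f\n", i, activation[i]);
    return 0;
}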

Now it seems to me that we have a symbol system here, the *symbols* are
probably hidden from us though.  The symbols correspond to patterns of 
activation.  At the lowest level (neuron level) there would seem to be 
nothing but deterministic rule-governed behavior.  The twist comes in the
strange loop whereby the lowest level affects higher *symbolic* levels 
and the higher levels in turn affect the behavior of the lower levels.

Does this scenario seem more plausible for attributing understanding to the
room?  

>So the Chinese room argument fails from the very error that it is intended to
>illustrate.  We are handed a bag of symbols which are really not symbolic
>*of* anything, and we are to "understand" that we have an illustration of
>a hypothesis concerning whether certain kinds of systems can "understand",
>and decide whether we agree and why.  As my son says, "Give me a *break*!"

>I think the evidence is *overwhelmingly against* the hypothesis that symbol
>systems are sufficient for the duplication of human intelligent behavior.  I
>am always surprised that there are people still trying to build minds from
>symbol systems! 

Where is your "evidence".  Did I miss something.  You seem to me to be 
assuming that a symbol system  "clearly" cannot understand.  You can't assume
this, you  are trying to show this.

>(4) Isn't it obvious that diagnosis, model-based reasoning, natural language
>understanding et al. MUST be futile with nothing but tokens; no MEANINGS?

Of course there is semantics but you haven't shown that syntax cannot give
rise to semantics.  Like Searle you have assumed this as obvious.  


Chris Weyand -=- weyand@csli.Stanford.Edu

jgk@osc.COM (Joe Keane) (01/15/90)

In article <18883@bcsaic.UUCP> ray@bcsaic.UUCP (Ray Allis) writes:
>To me, the situation described is internally inconsistent.  I can't bring
>myself to believe that the Chinese Room will convince native speakers that it
>understands Chinese *unless it really does understand Chinese*.  I am forced
>to something like the "systems reply", i.e.  either the Room understands, or
>it does not perform as well as a native Chinese speaker.  If the Room's
>utterances are indistinguishable from those of a native Chinese speaker, then
>it must understand about as much as that speaker does about the way the world
>works, and the human condition.  This is basically a behaviorist view of
>"understanding".

So according to the behaviorist view, the Chinese Room must understand Chinese
in order to perform as well as a native Chinese speaker.  I agree this is
true, although somewhat irrelevant since it's just talking about the meaning
of `understand'.  But anyway, you apparently do not believe that the Chinese
Room can understand.  We'll see why...

>There is nothing intrinsic in the words to determine their meaning.  This is
>true in general for symbols.  We (John and I) must hope that we have
>developed similar meanings, each through our individual experience, for the
>symbols we use.  We depend on this assumption for communication with each
>other.  We can send symbols back and forth, but not meanings.  
>[...]
>And this is the normal situation: we speak and write, we listen and read, and
>nowhere is there logical certainty that we attach identical meaning to the
>symbols.  In fact, we suspect that identity is quite unlikely, but with
>practice, *from experience* we infer (induce) that we are communicating, and
>assisted by feedback from our fellow conversants, we refine and sharpen until
>we judge that we all "understand" (or give up in frustration).  That's the
>situation each of us is in vis-a-vis the others.  (Wait a minute, are you
>really people?)

This is a good point; we don't get `meanings' or `understanding' by some sort
of meaning telepathy.  They are communicated through those dreaded symbols.

>Inside the Chinese Room there is only a "symbol system"; only marked pieces
>of paper are manipulated; there are no meanings present.

Here's where you lose me, with `only', `only', and an invalid conclusion.  If
you ask the Chinese Room for the meaning of a Chinese word, it will tell you.
This is certainly easy to do, you just have to include a Chinese dictionary.
Of course you probably want usage examples, connotations, and similar things,
just like any native speaker would have.

>There is no way for the Chinese Room to "build up a picture" from an incoming
>"message" or "question".

I'd say there is.

>Actually these are "questions" only when they are outside the Room, because
>their meanings exist only in minds.

Now i see what your assumptions are.  If you assume there's such a thing as a
`mind', that meanings can only exist in a mind, and that minds only exist in
human brains, then why don't you just say that?  Then this whole post is
begging the question (NPI).

>Inside the Room there is no possibility for contradiction and the notion of
>consistency does not apply.  There is no "reality check".  (This is why I
>don't believe a Chinese Room could fool anybody.)

For someone who thinks the Chinese Room can't exist, you sure know a lot about
how it works.  _Why_ can't you have consistency; _why_ are there no `reality
checks'?  Seems plausible to me.

>Inside *any* symbol system it is the same; there is no meaning present.
>"Understanding" in the sense of "apprehending meaning" clearly cannot exist
>inside a symbol system.
>[...]
>(4) Isn't it obvious that diagnosis, model-based reasoning, natural language
>understanding et al. MUST be futile with nothing but tokens; no MEANINGS?

Give _me_ a break.  `clearly', `obvious' to you maybe.  Tell me, what is a
MEANING?  What do you need to do to be able to know what something means?
Give me any sort of test, and i'll be happy.  If there's no test, you're not
talking about something in the real world.

qw0w+@andrew.cmu.edu (Quanfeng Wu) (02/21/90)

kp@uts.amdahl.com (Ken Presting) writes:

>>Really?! let's restrict our programming language to only expressing
>>addition and multiplication of two digits. Ok, now we have a bunch of
>>syntactic rules as expressed in BNF:
>>    Syntax:    <expression> ::= <addition> | <multiplication>
>>               <addition> ::= <digit> + <digit>
>>               <multiplication> ::= <digit> * <digit>
>>               <digit> ::= 0|1| .... |9
>>and a bunch of semantic mappings:
>>                0 + 0 --> 0, 0 + 1 --> 1, ...., 9 + 9 --> 18;
>>                0 * 0 --> 0, 0 * 1 --> 0, ...., 9 * 9 --> 81.
>>I don't see difficulty in putting the above syntactic rules and semantic
>>mappings together into the following syntactic rules, again expressed in
>>BNF (Backus-Naur notation):
>>
>>                0 ::= ( 0 + 0 ) | (0 * 0) | (0 * 1) | (1 * 0)
>>                ............
>>                18 ::= ( 9 + 9) | ( 2 * 9) | (9 * 2)
>>                81 ::=  9 * 9.
 
>I'm a little confused by this example, but I think I see what you're
>getting at:
 
>You have correctly used the BNF notation in the first case, but the second
>case has either no terminal symbols, or no non-terminal symbols, or no
>starting symbol, or none of the above!   There is also a collision between
>the use of the digit symbols in the mapping and the second BNF grammar.

I should have said that all the symbols in the second BNF are terminal
symbols; I don't need non-terminal symbols at all, nor a starting symbol
(you mean something like <expression>?). My mother is illiterate, and
I'm sure she doesn't know what a sentence or a noun is (at least she
cannot offer satisfactory definitions of these non-terminal and/or
starting symbols); however, she surely can process ordinary language.

 
>It looks like you are trying to define a language whose expressions
>refer to numbers, then use BNF-style formulas to indicate which
>expressions can be mapped to which number.

Exactly, that's what I was trying to do. However, I have to maintain
that in this example I simply don't need the notion 'expression'. I just
want to have a bunch of mapping rules; that's enough.  From these
mapping rules, I surely can do all the additions and multiplications
limited to two decimal digits.
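
To make "a bunch of mapping rules" concrete, here is a minimal sketch of a
purely table-driven evaluator for single-digit + and *.  It is only an
illustration that the mappings can be exhausted by enumeration; the table
layout, the function names, and the input format are assumptions, not
anything from the earlier posting.

/* Minimal sketch: single-digit "+" and "*" done purely by table lookup,
 * i.e. symbol-to-symbol mapping rules with no notion of <expression>.
 * The tables and names are illustrative assumptions only. */
#include <stdio.h>

static int add_table[10][10];   /* add_table[a][b] holds the symbol for a+b */
static int mul_table[10][10];   /* mul_table[a][b] holds the symbol for a*b */

static void build_tables(void)
{
    /* In the spirit of the example the entries would simply be written
     * down one by one; we generate them here to keep the sketch short. */
    for (int a = 0; a < 10; a++)
        for (int b = 0; b < 10; b++) {
            add_table[a][b] = a + b;
            mul_table[a][b] = a * b;
        }
}

/* "Evaluate" an expression like "7*8" by pure lookup. */
static int lookup(const char *expr)
{
    int a = expr[0] - '0', b = expr[2] - '0';
    return (expr[1] == '+') ? add_table[a][b] : mul_table[a][b];
}

int main(void)
{
    build_tables();
    printf("9+9 --> %d\n", lookup("9+9"));   /* 18 */
    printf("9*9 --> %d\n", lookup("9*9"));   /* 81 */
    return 0;
}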
 
>There is a basic problem with BNF: it's not general enough to define
>recursively enumerable sets, so there is no way to get a move like
>this going if the starting language includes (eg) Diophantine equations.
>If you move to more powerful grammars, eventually you'll run into Tarski's
>theorem.  The *provable* sentences in a formal theory are syntactically
>defined, but the *true* sentences cannot be.
>(If you're interested in the math behind this problem, you may want to
>look at Hopcroft & Ullman, "Formal Languages and their Relation to
>Automata", if you haven't already)

You are right; I'm aware of the difficulty of using BNF to express
semantics. When the semantic domain is large enough, BNF is inadequate,
and eventually even the most powerful syntax formalism, the Turing
machine, cannot handle it. I don't know of anyone who has formalized all
the grammar rules of our ordinary language. And that's why I picked the
simplest example that came to my mind. For that example, I do believe
BNF is sufficient to express both the syntax and the semantics.
 
>But there is an even BIGGER problem with this suggestion.  Suppose you
>do show two BNF grammars, with a semantics for the first that parallels
>the syntax of the second.  How can you show that you got it right?  You
>have to define the semantics in the first place!

Exactly, it's not only a BIGGER problem; indeed it's the BIGGEST problem
relating to semantics. Actually, earlier in the posting you pointed
out this problem: semantics is connected to, and has to be defined in
relation to, perception and behavior. In my example, I have made the
assumption that meanings have already been symbolized.  And upon that
assumption, I make the claim that operating on semantics is just
manipulating symbols.
  
>Let me emphasize again that these are issues which Searle seems to
>completely ignore.  He does not acknowledge that semantic information
>can be encoded in a program at all.  (I guess he's never compiled a
>program).  What's important for us, in our discussions of Searle, is to
>avoid getting confused ourselves about which information is syntactic
>and which information is semantic.

I really would maintain that the boundary between syntax and semantics
is very fuzzy. I had some experience designing a small language for a
real-time-control application; that case dealt with a very specific
semantic domain, and from that experience I firmly believe that the
dividing line between syntax and semantics is arbitrary; that is, I
could almost arbitrarily put more and more semantic checking into the
syntax checker while designing the compiler.

One may say that semantics can be embedded in syntax; but perhaps a more
acceptable way to state that is to say: syntax is a way of expressing
partial semantics; what has become the syntax of our ordinary language
is the most lawful part of the semantics that exists in the world, as
perceived by our species over its long evolutionary history.

xerox@cs.vu.nl (J. A. Durieux) (02/21/90)

In article <sZrtkau00UhW81pHhU@andrew.cmu.edu> qw0w+@andrew.cmu.edu (Quanfeng Wu) writes:

>You are right; we may indeed have to make the distinction between
>"numbers" and  "numberals". However, very unfortunately, I really don't
>see there is a way of expressing "numbers' without resorting to any
>"numerals", ...

Of course not: once you talk about expressing, you have shifted
to syntax already.  That's one of the sources of confusion in the
syntax/semantics debate: when one *talks about* semantics, one is
using syntax.  Nobody can "write down a semantics", except by
using a syntax about whose semantics there is agreement.
And always there is someone who doesn't appreciate this
indirection, and concludes that semantics and syntax are really
not very different.

rolfl@ulrik.uio.no (Rolf Lindgren) (02/21/90)

In article <36550@mirror.UUCP> francis@mirror.UUCP (Joe Francis) writes:
>   Why the Chinese Room argument doesn't convince ME:

>   Searle hasn't given us a very clear description of the Chinese room,
>   but we can fill in some blanks from what he has told us.

To some extent he has. The original article is a critique of a program by Roger
Schank, a program that, according to Searle, does what the Chinese Room does,
and in the same way as that in which the Chinese Room does it.

In the article, he explains, successfully, 1) that the Chinese Room is in all
ways parallel to Schank's program, and, 2) that Schank's program is not
`intelligent', or does not have `understanding'.

The place where he sort of takes off is when he concludes that, by 1) and 2), a
Universal Turing Machine cannot have understanding, hence no computer can have
understanding.

I feel that 1) and 2) are not really worth discussing from a `Does the program
work?' point of view. The program most likely can't cope with puns, and I can't
really see that it can learn, either. If Schank's program has memory, then so
does the Chinese room. 

I'm sort of on Searle's side. I can see that the brain possibly can run
programs for simple householding tasks, but I'm still not convinced that there
can be a direct mapping from sensory inputs to thinking to cognition - and
possibly some language of the brain - to output in the form of human language.

>  anything passing TT must be able to solve simple (and some
>  not so simple) problems, demonstrating that 
>  Additionally, anything passing TT must be able to learn.  This
>  demonstrates that the book instructs Searle to modify itself.

Certainly. So Schank's program is not perfect. Neither is the Chinese Room.

I'd like to see Searle designing a room that is equipped with a `newborn infant
script', geared to survival in a mean and hostile world, with a program that
can modify its script, a feature that the programmer might call `an effect of
evolution'.

Shrink the room a lot and place it in the head of a `newborn infant robot'. Let
it interact socially with its environment. Sometimes, when I read CogSci
literature, I wonder whether some of those researchers still remember what it
was like to interact socially with people, rather than formally with computers :-)

The modifications may be triggered by the aid of procedures called `pleasure',
`pain', etc. The symbol pusher need not know or understand what happens when
s/he changes the program. By and by, the computer program that is being
instantiated in the baby's head should be able to learn both Chinese and
English.

Rolf Lindgren		| 	"The opinions expressed above are 
612 Bjerke Studentheim	|  	 not necessarily those of anyone"	
N-0589 OSLO 5		|  email: rolfl@ifi.uio.no  rolfl@humanist.uio.no 

smoliar@vaxa.isi.edu (Stephen Smoliar) (02/21/90)

In article <a6UG02ww89FI01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken
Presting) writes:
>
>I would deny that the interrelationship of semantics with more general
>cognitive features makes it either impossible or unproductive to define
>semantics in isolation from the rest of cognition.  I see the situation
>of semantics as similar to knowledge representation, motor control, or
>syntactic competence.  We should analyze them separately at first, then
>consider the interfaces, and proceed from there to a full system.  It's
>OK, of course, to work in parallel on all these, just don't rule out the
>serial approach.
>
Ken, I think you have put your finger on the heart of our disagreement;  and
while I try to be careful about getting too extreme with words like
"impossible" and "unproductive," it certainly WAS my intention to introduce
a cautionary remark about your approach.  Perhaps another way of putting it
is that, believe it or not, I, too, am trying to sort things out;  however,
we see different "things" as being fundamental.  Let me see if I can capture
the distinction in a manner upon which we can agree.

The impression I get is that your fundamental concern is with employing
appropriate forms of symbol systems as abstractions for natural systems.
Thus, you list systems such as semantics, knowledge representation, motor
control, and syntactic competence and advocate that abstractions for each
of these systems may be developed in relative isolation as long as one
recognizes the need for appropriate interfaces.  This is, essentially,
a good, healthy, modular approach to software development.

The reason I am questioning what would appear, to most of our readers, to be
almost absolute motherhood actually stems from personal experiences in software
development.  One of the lessons I have learned is that the modular approach
generally works best if you already know what the modules are!  (I've always
felt this is the most important justification for Brooks' adage to "build one
to throw away."  Only after you've built the "first draft" do you begin to
develop a suitable intuition for what it is REALLY made out of.)  With regard
to the discussion at hand, I am prepared to argue that we are still so far from
implementing intelligent behavior that any attempt to approach it in terms of
modules (even modules which carry all the weight of tradition in logic,
linguistics, and philosophy) MAY (note that I shall not stick my neck out
with "will") mislead us.

Another way of putting all this is that, even if you think you have identified
a viable set of modules, you may still not be able to avoid a rather high
bandwidth of communication across your interfaces.  This means that the
progress you can make in studying any one module in isolation will be severely
limited.  (This is not to say "unproductive;"  but it will be productive to the
extent that you are aware of the limitations.)  In the case of language, we see
the extent of such complex interactions as early as Terry Winograd's pioneering
work.  One of the reasons he built his system on a procedural representation of
knowledge was to establish a better handle on all those interactions.

In conclusion, I do not think the issue is one of whether or not we take a
serial or parallel approach.  What is important is that, if we try to build
a model, we should try to build one large enough to accommodate some corpus
of intelligent behavior.  Part of the trick is defining such a corpus with
a scope small enough to admit of implementation but broad enough to be
interesting.  Chinese rooms are clearly beyond that scope level.  Rather
than wax philosophical about how they might turn out, we should look for
some form of behavior on a more manageable scale.  Then we can assess whether
or not such traditional "modules" as syntax and semantics are really as useful
in our studies as conventional wisdom would have us believe.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"Only a schoolteacher innocent of how literature is made could have written
such a line."--Gore Vidal

kp@uts.amdahl.com (Ken Presting) (02/22/90)

In article <gZsXuTm00WBKI3I1wG@andrew.cmu.edu> qw0w+@andrew.cmu.edu (Quanfeng Wu) writes:
>kp@uts.amdahl.com (Ken Presting) writes:
>> . . .  Suppose you
>>do show two ... grammars, with a semantics for the first that parallels
>>the syntax of the second.  How can you show that you got it right?  You
>>have to define the semantics in the first place!
>
> . . .  In my example, I have the
>assumption that meanings have been symbolized.  And upon that
>assumption, I have the claim that operation on semantics is just
>manipulation on symbols.

All information can be expressed and processed in symbolic form, whether
it's about the syntax of a language, the semantics of a language,
the positions of the planets, even whether John loves Mary.  When you
give an example of "symbolized" information, you have NOT shown either:

    1) That the subject matter described by the information is symbolic

or  2) That operations on the objects described are symbolic operations.

>>Let me emphasize again that these are issues which Searle seems to
>>completely ignore.  He does not acknowledge that semantic information
>>can be encoded in a program at all.  (I guess he's never compiled a
>>program).  What's important for us, in our discussions of Searle, is to
>>avoid getting confused ourselves about which information is syntactic
>>and which information is semantic.
>
>I really would maintain that the boundary between syntax and semantics
>is very fuzzy. I had some experience of design a small language for the
>purpose of real-time-control application; because that case dealt with a
>very specific semantic domain, and from that experience I firmly believe
>that the dividing line between syntax and semantics is arbitary; that
>is, I could almost arbitarily put more and more semantics checking into
>the syntax checker while designing the complier.

This is a good example.  When designing a compiler (or especially an
interpreter) the designer has many options in assigning the syntactic
and semantic processing to different modules.  Frequently, the most
efficient design will mix syntactic with semantic information.  In the
case of human language understanding, it seems assured that the overlap
is almost complete.  Overlap in *implementation*, that is.

Let's look at another case: motion in two dimensions.  As the Moon orbits
the Earth, part of its motion is radial (toward or away from the Sun),
and part of its motion is tangential.  At all but four points in the
orbit, the motion is mixed between the two directions.  This does not mean
that the motion does not have two separate components.

Similarly, a parser may use semantic information extensively, but that
does not mean that the semantic information is therefore syntactic.

Take the sentence, "John and Mary had dinner, but she didn't eat much".
The pronoun "she" refers to only one of the subjects.  Which subject is
mentioned is part of the syntax of the sentence, but semantic information
about "she" and "Mary" will probably be used in the parse.  It's still
a matter of semantics that "she" refers to females, and "Mary" is a
woman's name.
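
A minimal sketch of the kind of mixing just described: a toy resolver that is
doing a syntactic job (finding the antecedent of "she") but consults a
semantic feature (gender) to do it.  The two-word lexicon, the feature names,
and the hard-coded sentence are illustrative assumptions, not a real parser.

/* Minimal sketch: resolving "she" in "John and Mary had dinner, but she
 * didn't eat much" by checking a semantic gender feature on each candidate
 * antecedent.  The tiny lexicon and feature names are assumptions. */
#include <stdio.h>
#include <string.h>

struct entry { const char *word; char gender; };   /* 'f' or 'm' */

static const struct entry lexicon[] = {
    { "John", 'm' },
    { "Mary", 'f' },
};

/* Return the first candidate whose semantic gender matches the pronoun's. */
static const char *resolve(const char *pronoun,
                           const char **candidates, int n)
{
    char want = (strcmp(pronoun, "she") == 0) ? 'f' : 'm';
    for (int i = 0; i < n; i++)
        for (size_t j = 0; j < sizeof lexicon / sizeof lexicon[0]; j++)
            if (strcmp(candidates[i], lexicon[j].word) == 0 &&
                lexicon[j].gender == want)
                return candidates[i];
    return "(unresolved)";
}

int main(void)
{
    const char *subjects[] = { "John", "Mary" };   /* from the parse */
    printf("\"she\" -> %s\n", resolve("she", subjects, 2));   /* Mary */
    return 0;
}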

> . . .  syntax is a way of expressing partial semantics; . . .

I agree with this.

qw0w+@andrew.cmu.edu (Quanfeng Wu) (02/22/90)

From: kp@uts.amdahl.com (Ken Presting) writes:
>This is a good example.  When designing a compiler (or especially an
>interpreter) the designer has many options in assigning the syntactic
>and semantic processing to different modules.  Frequently, the most
>efficient design will mix syntactic with semantic information.  In the
>case of human language understanding, it seems assured that the overlap
>is almost complete.  Overlap in *implementation*, that is.
 
But, when referring to natural languages, what would you mean by saying
*implementation*? Do you mean different English speakers have different
implementations (or say, parsers) of the language English? Or do you mean
English, French, Chinese and so on are different implementations of a
unique human communication language (which is neither English, nor
French, ....)? 

I think it may not be necessary to distinguish between a natural
language and its implementations. 

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (02/22/90)

From article <cZsr6tS00UhB02WVs9@andrew.cmu.edu>, by qw0w+@andrew.cmu.edu (Quanfeng Wu):
 
>But, when referring to natural languages, what would you mean by saying
>*implementation*? Do you mean different English speakers have different
>implementations (or say, parsers) of the language English? ...

Possibly displaying different choices of semantic versus syntactic
implementation of number agreement in English:

	There's two modules to implement. (syntactic agreement)
	There're two modules to implement. (semantic agreement)

	Robins are a sign of Spring. (syntactic agreement)
	Robins is a sign of Spring. (semantic agreement)

	Where're the scissors? (syntactic agreement)
	Where's the scissors? (semantic agreement)

			Greg, lee@uhccux.uhcc.hawaii.edu 

kp@uts.amdahl.com (Ken Presting) (02/23/90)

In article <11965@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes:
>In article <a6UG02ww89FI01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken
>Presting) writes:
>>
>>I would deny that the interrelationship of semantics with more general
>>cognitive features makes it either impossible or unproductive to define
>>semantics in isolation from the rest of cognition.  I see the situation
>>of semantics as similar to knowledge representation, motor control, or
>>syntactic competence.  We should analyze them separately at first, then
>>consider the interfaces, and proceed from there to a full system.  It's
>>OK, of course, to work in parallel on all these, just don't rule out the
>>serial approach.
>>
>The impression I get is that your fundamental concern is with employing
>appropriate forms of symbol systems as abstractions for natural systems.
> . . .   This is, essentially,
>a good, healthy, modular approach to software development.
>
> . . . I am prepared to argue that we are still so far from
>implementing intelligent behavior that any attempt to approach it in
>terms of modules . . . MAY . . .  mislead us.

No need to argue!  I think we are so far from implementing intelligent
behavior that every attempt to date *has* misled us.  Some have been led
out of the field altogether, by what I consider the most impressive
implementations.  :-(

I have no disagreement at all with your cautions on the implementation
of semantics.  Staying with the software cycle metaphor, I would
say I confine my attention to application analysis rather than design,
if you will grant that "application analysis" in the context of AI
includes the philosophy of mind and language.  In this context, formal
semantics has the indispensable function of precisely specifying some
of the information we'll want in the eventual application.

There is also a role for formal semantics in the "software verification"
phase, although the role is indirect.  Some significant portions of
human behavior are typically explained by claiming that the person in
question understands a language.  We need to pay some attention to
the definition of 'understanding' and to devising "test cases" which will
evoke behavior from a computer that is difficult to explain *without*
admitting that the computer understands.  My rough definition of
'understanding' as 'knowledge of semantics' is intended as an effort in
this direction.  I doubt that this definition is complete, but it's a
reasonable first approximation.

Let me now return to the Chinese Room.  I think that Searle's arguments
show *nothing* about the limitations of so-called symbolic or algorithmic
processing.  Personally, I find it difficult to distinguish (on any
philosophical grounds) between algorithmic and any other kind of
processing.  The Differential Analyser was not only analog, but also
mechanical, yet its input graph is one commonplace notation for specifying
a function.  On the other hand, I find it amusing to imagine a plugboard
as an expression in a formal syntax, yet that was a preferred technique
for programming the early digital machines (such as ENIAC).

However, once the CR is extricated from its context in Searle's argument,
I find that a fascinating conclusion can be drawn from it:  It is
possible for a Turing test to be passed by a computer, without thereby
evoking behavior which demonstrates understanding.  Now, to accomplish
this astonishing feat of incompetence in test methodology, it is necessary
to empty one's mind of all daily experience and ascend to refined heights
of art and philosophy.  But it is possible.  Just don't ask the machine
about the weather, or any other real, contemporary event.

This conclusion is probably not revelatory to many here, but it should not
be overlooked.  If the formal view of semantics is useful in getting
something of value out of the Chinese Room debate, then it is a
remarkable tool indeed ...

I have attempted here (and in previous posts) to show that understanding
is a legitimate design goal for AI, and that objective tests can be
relevant to establishing the presence or absence of understanding.  The
CR shows the negative case, the positive case is admittedly more
difficult.  I would be very interested to hear any disagreements.

smoliar@vaxa.isi.edu (Stephen Smoliar) (02/25/90)

In article <0cWk02pp8aza01@amdahl.uts.amdahl.com> kp@uts.amdahl.com (Ken
Presting) writes:
>
>All information can be expressed and processed in symbolic form, whether
>it's about the syntax of a language, the semantics of a language,
>the positions of the planets, even whether John loves Mary.

As long as this discussion is being cross-posted to talk.philosophy.misc,
perhaps we should try to take a look at this sentence without being drawn
into the kind of hysteria which Searle has promulgated.  Do any of us REALLY
believe that ALL information can be both expressed and processed in symbolic
form?  I, for one, am not willing to buy into the extreme form of this
proposition;  and I would like to try to elaborate on the point WITHOUT
falling into the trap of playing games with highly-charged terms like
"understanding."

My reasons for questioning this assertion actually have more to do with Marvin
Minsky's THE SOCIETY OF MIND than with anything Searle has yet said on the
issue of intelligence.  Much of Minsky's book is based on raising doubts as
to whether patterns of behavior which are associated with memory can be
adequately modeled in terms of the sentential forms of a symbolic logic,
the sort of symbolic form which is usually invoked for the representation
of information in Ken's premise.  Minsky's doubts actually have their roots
in a Freudian view of the world.  He says that we usually assume that
declarative facts are easy for a machine to store and recall, while concepts
such as feelings are much more difficult, whereas in the human brain the
situation is very much the other way around.  Fundamental to much of Minsky's
writing is the question of what it is about the human brain that makes this
"reversal of priorities" so.

The point is that, while you should have no trouble with a symbolic
representation of the sentence "John loves Mary," that does not mean
that you have any representation of how John actually FEELS.  I realize
that this brings me dangerously close to some of Searle's arguments about
intentionality, but I don't think we have to go over that brink.  Instead,
let me try to give a more concrete example which does not involve a fictitious
John and Mary.

Let me only assume that we have both seen the movie PSYCHO.  (If this is not
the case, let me know;  and I shall come up with another example.)  In 1988
the Kronos Quartet happened to give a recital on Halloween here in Los Angeles.
They have built up quite a reputation for playing arrangements of rock, jazz,
and blues for encore pieces.  On this occasion, however, they ended the evening
with an arrangement of the shower music from PSYCHO.  This had all sorts of
effects on me.  On the one hand, it reminded me of that portion of the film
and all the elements of anxiety associated with it.  On the other, there was
an amusing side to hearing this being played by a string quartet in a concert
hall (enhanced by the viola player coming out in a towel and shower cap).  As
a result, I was dealing with concurrent impressions of anxiety and amusement.
It was really quite something.

Now, think about what I have communicated to you in that last paragraph.
Assuming you know about PSYCHO and assuming you know something about string
quartets, my guess is that I have been able to communicate my impressions to
you;  but HAVE I DONE SO IN TERMS OF INFORMATION EXPRESSED IN SYMBOLIC FORM?
On the one hand, you can argue--and rightly so--that the above paragraph is
a symbolic form;  but is that where the information resides?  Another
possibility is that I have tried to use the words of the above paragraph
to INDUCE retrieval from your own store of memories in such a way that you
might be able to share the feelings I had.  If that is the case, then I MIGHT
be willing to argue that the "information," such as it is, resides in your own
mental state;  and I am using those words simply to DISPOSE you to invoke that
mental state.  At this point, I do not think we are talking about information
expressed in symbolic form any more.

What ARE we talking about, then?  Quite honestly, I'm not really sure.
However, as a result of attending the Santa Fe Artificial Life Conference,
I am willing to entertain the possibility of viewing information as an EMERGENT
PROPERTY, by which I mean a "side effect" of some set of activities, rather
than a concrete object that I can describe symbolically in terms of concepts,
attributes, and relations.  If this is the case (and I still feel it is
necessary to emphasize that we are dealing with a VERY BIG "if"), then
there may, after all, be justification in arguing about the shortcomings
of what we can do with symbols.  Ironically, Searle seems more inclined
to beat up on Minsky in his expositions (as he did at a colloquium at UCLA)
than to consider the possibility that Minsky may be introducing a new twist which
might be of value to him.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"Only a schoolteacher innocent of how literature is made could have written
such a line."--Gore Vidal

sticklen@cpswh.cps.msu.edu (Jon Sticklen) (02/25/90)

From article <12015@venera.isi.edu>, by smoliar@vaxa.isi.edu (Stephen Smoliar):

...


i have a similar intuition as stephen that symbol level processing 
is not everything. but i find his arguments not compelling me toward 
that conclusion. the gist of stephen's argument is that his narrative
paragraph about PSYCHO and string quartets is not what communicates 
his experience to me, but rather that his narrative puts me in a "mental
state" that is similar to his, and that in fact the information was
somehow in me all the time. 

no problem. another, maybe more obvious example, is of couples that have
been together for many years and that have to utter only the slightest
phoneme for the other to seemingly totally understand some subtle
communication. the content of the communication is pretty clearly mostly
in the receiver, and the provocation by the sender is just a trigger.

but that observation does not seem to support sub-symbolic representation
and computation. it would seem equally plausible that the hearer (or
reader of stephen's paragraph) had that information stored somehow 
symbolically, and once triggered it popped up.

where is the compelling support for sub-symbolic storage and retrieval
in stephen's example?

again, i emphasize that i share the intuition that some combination of
symbol level processing and sub-symbol level processing is going
to be necessary. i just don't think that stephen's example is helping to force
us in that direction.

	---jon---



-------------------------------------------------------------
	Jon Sticklen
	Artificial Intelligence/Knowledge Based Systems Group
	Computer Science Department
	Michigan State University
	East Lansing, MI  48824-1027
	517-353-3711
	FAX: 517-336-1061
-------------------------------------------------------------

qw0w+@andrew.cmu.edu (Quanfeng Wu) (02/25/90)

>where is the compelling support for sub-symbolic storage and retrieval
>in stephen's example?
 
>again, i emphasize that i share the intuition that some combination of
>symbol level processing and sub-symbol level processing is going
>to be necessary. i just don't that stephen's example is helping to force
>us in that direction.

I also have that intuition; but my problem is: can all sub-symbolic
information storage and processing be simulated in a symbolic fashion?

This may serve as an illustration of my question: when I (or someone else)
hits my knee, my leg gives a reflexive kick; in this case, I don't
think my hand has given any symbolic information to my leg, nor
would I think my leg has processed any symbolic information to produce
that reflexive behavior. That would be an example of sub-symbolic
processing, and it may be easily simulated in a connectionist model. But
on the other hand, I also don't think it would be very difficult to
implement that kind of reflexive behavior in a symbolic way, say, using
a micro-processor attached to an artificial leg.
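
For what it's worth, that micro-processor version can be made concrete with a
very small sketch: a rule over discretized sensor readings.  Everything named
in it (the sensor, the threshold, the actuator) is a hypothetical placeholder,
not any real device interface.

/* Minimal sketch of a "symbolic" knee-jerk: a polling loop that compares a
 * named sensor reading against a named threshold and fires a named actuator.
 * read_tendon_sensor(), KICK_THRESHOLD, and fire_kick_actuator() are
 * hypothetical placeholders, not any real device API. */
#include <stdio.h>

#define KICK_THRESHOLD 50          /* assumed units from a strain sensor */

static int read_tendon_sensor(void)   /* placeholder: canned readings */
{
    static int taps[] = { 3, 5, 72, 4 };
    static int i = 0;
    return taps[i++ % 4];
}

static void fire_kick_actuator(void)  /* placeholder for the motor command */
{
    printf("kick!\n");
}

int main(void)
{
    /* Each sensor value is a discrete symbol; the reflex is a rule over it. */
    for (int cycle = 0; cycle < 4; cycle++) {
        int stretch = read_tendon_sensor();
        if (stretch > KICK_THRESHOLD)
            fire_kick_actuator();
    }
    return 0;
}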

Is there any evidence that some sub-symbolic computations, as simulated in
connectionist models, are *not computable* by symbolic computation? Of
course, I think there are examples that are *more computationally efficient*
in sub-symbolic form than in symbolic form!

-Quanfeng Wu

smoliar@vaxa.isi.edu (Stephen Smoliar) (02/27/90)

In article <6595@cps3xx.UUCP> sticklen@cpswh.cps.msu.edu (Jon Sticklen) writes:
>
>i have a similar intuition as stephen that symbol level processing 
>is not everything. but i find his arguments not compelling me toward 
>that conclusion.
>
>where is the compelling support for sub-symbolic storage and retrieval
>in stephen's example?
>
Actually, it was not my intention to muster "compelling support for
sub-symbolic storage and retrieval."  My goal was much more modest.
I simply wished to point out that we should be skeptical of Ken's
claim that all information "can be expressed and processed in symbolic
form."  My personal feeling is that the term "sub-symbolic" is in danger
of a modest level of abuse from a variety of different connectionist camps,
each of which has a different angle on how they want to use it.  I would prefer
to punt on the term altogether and try to be a bit more specific in saying what
constitutes an alternative to a symbolic approach.

The reason I cited Minsky is that the alternative I am most interested in
pursuing is one based on processes.  Yes, THE SOCIETY OF MIND has returned
us to the old procedural/declarative controversy;  but, hopefully, we have
a bit more experience in the varieties of knowledge representation this time
around.  My question is:  Can we define some collection of relatively simple
processes whose activities yield behaviors which outside observers may describe
as the formation of categories and the assignment of labels to those
categories?  One of the major punch lines of Gerald Edelman's NEURAL
DARWINISM is that algorithms based on selection from a population may
allow us to deal with the first half of this question--category formation.
Supposedly, he has taken on the assignment and manipulation of labels (which
brings us into the realm of symbols) in his new book, THE REMEMBERED PRESENT;
but I have not progressed far enough in reading it to comment as to how
successful he has been.  If he pulls it off, then he may have a foundation
for how symbol-manipulating behavior may be an emergent property of processes
which are not, themselves, symbol manipulating.
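
For readers who want something more concrete than "selection from a
population," here is a toy competitive-learning sketch in that spirit.  It is
emphatically NOT Edelman's model; the population size, the match rule, the
update constant, and all the names are assumptions.  A pool of randomly
initialized detectors competes for inputs, winners are nudged toward the
inputs they win, and clusters of inputs end up "owned" by particular
detectors--categories, though no labels have yet been assigned.

/* Toy competitive-learning sketch of category formation by selection from a
 * population of detectors.  Not Edelman's model; all sizes, constants, and
 * names are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>

#define N_DETECTORS 4
#define N_INPUTS    8

static double dist(double a, double b) { return (a - b) * (a - b); }

int main(void)
{
    double detector[N_DETECTORS];
    double input[N_INPUTS] = { 0.1, 0.15, 0.2, 0.9, 0.85, 0.95, 0.12, 0.88 };

    srand(1);
    for (int d = 0; d < N_DETECTORS; d++)
        detector[d] = (double)rand() / RAND_MAX;   /* random initial repertoire */

    for (int pass = 0; pass < 20; pass++)
        for (int i = 0; i < N_INPUTS; i++) {
            int best = 0;                          /* competition: best match wins */
            for (int d = 1; d < N_DETECTORS; d++)
                if (dist(detector[d], input[i]) < dist(detector[best], input[i]))
                    best = d;
            detector[best] += 0.2 * (input[i] - detector[best]);  /* selective amplification */
        }

    for (int i = 0; i < N_INPUTS; i++) {           /* report which detector "owns" each input */
        int best = 0;
        for (int d = 1; d < N_DETECTORS; d++)
            if (dist(detector[d], input[i]) < dist(detector[best], input[i]))
                best = d;
        printf("input %.2f -> category %d\n", input[i], best);
    }
    return 0;
}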

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"Only a schoolteacher innocent of how literature is made could have written
such a line."--Gore Vidal

mmt@dciem.dciem.dnd.ca (Martin Taylor) (03/01/90)

Jon Sticklen comments on Stephen Smoliar's example of the Kronos Quartet's
playing music from Psycho:

From article <12015@venera.isi.edu>, by smoliar@vaxa.isi.edu (Stephen Smoliar):
...


i have a similar intuition as stephen that symbol level processing 
is not everything. but i find his arguments not compelling me toward 
that conclusion. the gist of stephen's argument is that his narrative
paragraph about PSYCHO and string quartets is not what communicates 
his experience to me, but rather that his narrative puts me in a "mental
state" that is similar to his, and that in fact the information was
somehow in me all the time. 

no problem. another, maybe more obvious example, is of couples that have
been together for many years and that have to utter only the slightest
phoneme for the other to seemingly totally understand some subtle
communication. the content of the communication is pretty clearly mostly
in the receiver, and the provocation by the sender is just a trigger.

but that observation does not seem to support sub-symbolic representation
and computation. it would seem equally plausible that the hearer (or
reader of stephen's paragraph) had that information stored somehow 
symbolically, and once triggered it popped up.

where is the compelling support for sub-symbolic storage and retrieval
in stephen's example?

=====================================
I would not say that anything in either example has any relevance to
sub-symbolic storage and retrieval.  Both seem to be clear examples of
what would be called high-level messages in Layered Protocol theory (1).
The objective of any communication is exactly what the end of each of
the first two paragraphs above suggests: to get the partner into a desired
state of mind (i.e. to feel, understand, do, ... something).  In order
for any communication to occur, the content of the communication has to
be largely in the receiver, or to be embodied in the message in chunks
that are themselves already in the receiver.  In our work here, we tend
toward using the term "resonance" to describe the effect of a message.

But both the Smoliar example and the couples example say nothing about
symbolic or sub-symbolic processing.  What they indicate is that on these
occasions, the desired evocation could be done without using symbol patterns
that a third party would identify with the intention to create the specified
mental state.

(1) Taylor, M. M. International Journal of Man Machine Studies, March 1988,
vol 28, 175-218 and 219-257
Taylor, M. M. "Response timing in Layered Protocols: A cybernetic view of
natural dialogue" In M.M.Taylor, F.Neel, and D.G.Bouwhuis (Eds.) The Structure
of Multimodal Dialogue, Elsevier Science Publishers (North Holland), 1989.

Forgive the plug:-)
-- 
Martin Taylor (mmt@zorac.dciem.dnd.ca ...!uunet!dciem!mmt) (416) 635-2048
"Viola, the man in the room
doesn't UNDERSTAND Chinese. Q.E.D."  (R. Kohout)

kp@uts.amdahl.com (Ken Presting) (03/03/90)

In article <12015@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes:
>In article <0cWk02pp8aza01@amdahl.uts.amdahl.com> kp@uts.amdahl.com (Ken
>Presting) writes:
>>
>>All information can be expressed and processed in symbolic form, whether
>>it's about the syntax of a language, the semantics of a language,
>>the positions of the planets, even whether John loves Mary.
>
>As long as this discussion is being cross-posted to talk.philosophy.misc,
>perhaps we should try to take a look at this sentence without being drawn
>into the kind of hysteria which Searle has promulgated.  Do any of us REALLY
>believe that ALL information can be both expressed and processed in symbolic
>form?  I, for one, am not willing to buy into the extreme form of this
>proposition;

Well, how about ... a trivial form of the same proposition?

Before I get into "information", please recall that I used this assertion
in a discussion of the question "Does the fact that the semantics of a
language can be defined in a formal notation show that semantics is
indistinguishable from syntax?"  In that context, I was concerned only to
show that formal expressibility does not imply reducibility to syntax.
I think Stephen has taken my remark out of its context, thereby inflating
its significance.  So I will attempt to re-trivialize the remark.

Once some object or event is identified as information, its expressibility
is already decided.  This is because, on the going formal theories of
information, the "signal", "message", or whatever is presented to its
receiver in a code of some sort.  Probability theory itself depends on
the identification of events, and the otherwise abstemious Shannon
theory inherits a hefty dependence on prior categorization therefrom.
Assign a character to each event category, and off you go.
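
To keep the remark as trivial as intended, here is a minimal sketch of the
move being described: fix a set of event categories, assign each one a
character, and a run of observations becomes a string of symbols whose
Shannon information follows from the category probabilities.  The categories,
probabilities, and names are made up for illustration.

/* Minimal sketch: assign a character to each event category, turn a run of
 * observed events into a symbol string, and total its Shannon information
 * from assumed category probabilities.  All categories, probabilities, and
 * names here are illustrative assumptions. */
#include <math.h>
#include <stdio.h>

enum event { SUNNY, CLOUDY, RAINY, N_CATEGORIES };

static const char   symbol[N_CATEGORIES] = { 'S', 'C', 'R' };
static const double prob[N_CATEGORIES]   = { 0.5, 0.3, 0.2 };

int main(void)
{
    enum event observed[] = { SUNNY, SUNNY, RAINY, CLOUDY, SUNNY };
    int n = sizeof observed / sizeof observed[0];
    double bits = 0.0;

    /* Encode each event as its character and total its surprisal. */
    for (int i = 0; i < n; i++) {
        putchar(symbol[observed[i]]);
        bits += -log2(prob[observed[i]]);
    }
    printf("  (%.2f bits of information)\n", bits);
    return 0;
}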

This is *all* I meant by the remark above.  Reading the paragraph about
the Kronos Quartet brings up a completely different kettle of fish.

Rather than delay any longer, I'll post this short clarification.  I still
have not decided how far to go into the issue of what goes on when a
person reads a paragraph about a subjective experience.  I have a lot
I'd like to say about it, but most of my ideas are psychological rather
than AI-oriented.  I haven't been able to figure out how to express them
in a vocabulary appropriate to this group.

(who said "aha, inexpressible in symbolic form!" :-)

kp@uts.amdahl.com (Ken Presting) (03/06/90)

In article <12015@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes:
>In article <0cWk02pp8aza01@amdahl.uts.amdahl.com> kp@uts.amdahl.com (Ken
>Presting) writes:
>>
>>All information can be expressed and processed in symbolic form, whether
>>it's about the syntax of a language, the semantics of a language,
>>the positions of the planets, even whether John loves Mary.
>
>  . . .  Do any of us REALLY
>believe that ALL information can be both expressed and processed in symbolic
>form?  I, for one, am not willing to buy into the extreme form of this
>proposition;  . . .

I've decided to wimp out on actually stating a theoretical account of
what goes on when a person reads a passage of text, but I will try to
give an example of one of the many things that can happen:

>. . .
>Let me only assume that we have both seen the movie PSYCHO.  (If this is not
>the case, let me know;  and I shall come up with another example.) . . .

Stephen has gone to some trouble with these two lines, to identify an
assumption and to indicate his willingness to accommodate the reader.  These
two lines are transitional between the polemical passage just before, and
the evocative passage that follows.

Making assumptions is a natural part of arguing, but it is uncommon for
someone defending a position to be able to drop an assumption without
losing some force from his argument.  By emphasizing that he is making an
assumption, then immediately offering to withdraw it, Stephen effectively
disarms the polemically inclined reader.

>
> ... <good part left out> ...
>
>. . . I was dealing with concurrent impressions of anxiety and amusement.
>It was really quite something.

When I first read this carefully, I thought "How anticlimactic the last
sentence is - of course it was *quite something*".  I thought that Stephen
had diluted the impact of his description, by adding a sentence that
conveyed no extra information.

Of course, I was mistaken in that judgement, which I quickly realized.
After envisioning such events as a musician appearing on stage in a towel
and shower cap, the reader can well use an extra line to readjust to the
dry polemical attitude.

>Now, think about what I have communicated to you in that last paragraph.

This is a crucial, absolutely essential line.  Stephen cannot force his
reader to think.  He can manipulate the reader's mood, and he can sidetrack
the reader's critical impulses.  If worst comes to worst, he can
explicitly *order* his reader to think about the content of the message.

But if the reader wants to ignore the content and focus on the form, Stephen
can't do a thing about it.

>Assuming you know about PSYCHO and assuming you know something about string
>quartets, my guess is that I have been able to communicate my impressions to
>you;

*Guess* my foot!  Mr. Smoliar, you know *exactly* what you are doing with
words.  I doubt that any reader could avoid receiving your impression.

But you are correct that you must guess.  Although I have refused to
discuss the content of the passage, would you say I have misread it,
or misunderstood it?  Reading is by no means a passive process, although
the illusion of passivity is fundamental to the process.  The writer
can never predict the reader's response, beyond the bare recognition of
words.  Sometimes even that is a stretch.

It is important to consider the writer's intentions and hopes when reading
a passage, but a theoretical account of reading must go well beyond the
"intended effects" of the writer's act.

>
> . . .   At this point, I do not think we are talking about information
>expressed in symbolic form any more.
>

My view is that symbols are related to real objects and processes in two
ways:

(1) Semantically, as in the case of a word or phrase referring to an
    object or event, and

(2) By an act of interpretation, as in the case of reading a pattern of
    dots on a terminal.

On my view, there is no process in any brain or any computer which is
*in itself* a symbolic representation, expression, or manipulation.  To
make the claim that some object or process is symbolic, it is required
that an interpretation of the process be specified.  The fact that a
process is digital, or that the process is defined by a program, or that
the process results in the generation of recognizable English inscriptions
is by no means definitive of being a "symbolic process".  Analog,
parallel, or unprogrammed processes are just as likely to be "symbolic"
as are digital processes.


>What ARE we talking about, then?  Quite honestly, I'm not really sure.

We're talking about a normative property - "... is a symbol".  One
important feature of normative properties is that they cannot be defined
directly in terms of physical properties.  A normative property is defined
indirectly, in terms of the possibility of discovering an interpretation.

For example, any pattern of dots whatsoever can be used to represent the
letter "A".  As long as the intended receiver is able to interpret the
message in its context, the communication will succeed.  One way to
understand my claim that "all information can be expressed ... in symbols"
is to read it as the claim that all interpretations of physical phenomena
into systems of any sort can be translated into a second interpretation
of the phenomenon into a symbol system.  So my claim reduces (I'd say)
to Church's Thesis - no system has more expressive power than arithmetic.
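
To make the point concrete, here is a toy Python sketch (an illustration
only, with made-up variable names): one and the same physical pattern of
marks is read under two different interpretations, and neither reading is
intrinsic to the pattern.

    # Toy illustration: one physical pattern, two interpretations.
    dots = "01000001"                  # eight marks on a terminal

    # Interpretation 1: the pattern encodes an ASCII character.
    as_character = chr(int(dots, 2))   # -> 'A'

    # Interpretation 2: the very same pattern encodes an unsigned integer.
    as_number = int(dots, 2)           # -> 65

    print(as_character, as_number)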

That is *not* to say that everything is symbolic, formal, or arithmetic.
Events and emotions are real.  Sounds and marks are real.  Automatic
computations are real.  Symbols are abstract.

John Haugeland gives a useful introduction to the process of
interpretation in his introductory text, "Artificial Intelligence: The
Very Idea".  This is a simplified presentation, but he discusses some of
the more complex issues in a paper called "Weak Supervenience", published
a few years ago in _The American Philosophical Quarterly_.  (I disagree
with his conclusion that there can be no psychological laws, but his
presentation is excellent).

tmoody@sjuphil.uucp (T. Moody) (03/19/90)

In article <23100@mimsy.umd.edu> flink@mimsy.umd.edu (Paul V Torek) writes:
>Just thought I'd inject an interesting fact into the Chinese Room discussion.
>In a recent article (in _Philosophical Topics_, I think) Searle claims
>that only conscious beings can have intrinsically intentional states
>(that is, mental states with a non-derivative semantic value).

Thanks for the pointer; I'll take a look at the article.

To return the favor, there was a very interesting article by Tim Maudlin
in _Journal of Philosophy_ last year, entitled "Computation and
Consciousness."  Unfortunately, I don't have the exact reference,
because the last six months of JP 1989 in our library are now at the
binder.  I do know that the article is in that range (the last six
months of 1989).

For my own part, I am convinced that such phenomena as "understanding
Chinese" must be analyzed into *four* components:

	1.  Behavioral -- what the Turing Test tests.

	2.  Intentional -- causal/referential properties.

	3.  Computational/functional -- sets constraints upon the *how*
	    of (1) and (2), supports counterfactuals, etc.
	    
	4.  Phenomenological -- the subjective, experiential character
	    (if any) of the process.

Searle may be understood as arguing that (1) and (3) alone do not
"constitute" understanding, because they do not guarantee (2) and (4).

I believe he is right.




-- 
Todd Moody * tmoody%sjuphil.sju.edu@bpa.bell-atl.com (Whatever that means)
            "The mind-forg'd manacles I hear."  -- William Blake

ftoomey@maths.tcd.ie (Fergal Toomey) (04/06/90)

In article <1990Apr5.202224.27534@caen.engin.umich.edu>
zarnuk@caen.engin.umich.edu (Paul Steven Mccarthy) writes:

>You are assuming that "understanding" has some meaning beyond the 
>information that can be gleaned from observation.  I disagree.  I
>assert that "understanding" should be defined by functional behavior.
>
>In the example that you presented of the novice assisted by Kasparov
>(the international grand master), I would say that the system being
>observed: novice+Kasparov did _indeed_ display "understanding" of
>chess.  Making the knowledge of Kasparov's influence hidden is simply
>hand-waving.  If the observer was specifically interested in the
>novice's un-aided "understanding" of chess, then the observer should
>have taken appropriate measures to insure that the novice was not
>receiving advice from external sources.  Granted, this cannot be
>known for certain in every situation, but that is a statement about
>how easy it is to fool humans -- not about the meaning of "understanding".  

Perhaps I should have chosen a better example. The novice+Kasparov
idea was originally intended to parallel the computer+programmer idea
in computer science. If you argue that the system under consideration
is not just the novice, but the novice *plus* Kasparov, then what can
we say about an apparently intelligent computer program? Must we say
that it is not the computer that is intelligent, but the computer
*plus* its programmer?

Let me put it like this: suppose the novice
has brain damage, so that he is incapable of understanding chess, but
he is capable of carrying out simple instructions. He is also capable
of speaking English fluently. He is given a long list of instructions
from Gary Kasparov telling him exactly what move to make in every
board situation that can possibly arise during a chess game (the number
of possible board configurations is very, very high, but finite). You
now start playing chess with the novice, as well as with one other, normal,
average chess player. You are, of course, beaten by the novice+list system,
but you beat the normal, average, player. You conclude that the novice+list
system has a better understanding of the game than the normal player. You
ask the normal player:

"Why do you think you lost?"

"Well, I think I shouldn't have castled on move 8... and maybe I should have
moved my queen out earlier... "

You ask the novice+list (the novice remember, can speak English fluently),

"Why do you think you won?"

"Well, uhmm, yes, ... er,"

My point is that the system "novice+list" has no understanding of the
game, although it can beat you every time. The list corresponds to
a computer program, the novice to a computer. Saying that we must consider
the "Kasparov+novice+list" system instead is equivalent to saying that we
must always consider the "Programmer+computer+program" system instead of
the "computer+program" system. This argument is demonstrably false, in
that chess-playing programs have been written which consistently beat
their authors. Clearly these programs were not simply programmed to play
"brute-force" chess; they were deliberately programmed to analyse the
game, to think up new moves, ... to "understand". Unfortunately,
there is no way of distinguishing between a "brute-force" program
and an "understanding" program if you consider only their behaviour:
you have to look at the algorithms themselves.
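
Put in programming terms (a deliberately crude Python sketch; the table
below is a hypothetical stand-in for Kasparov's impossibly long list),
the novice+list system is nothing but a lookup table:

    # Crude sketch: the "novice+list" as a program.  KASPAROVS_LIST is a
    # hypothetical stand-in for the full list in the thought experiment.
    KASPAROVS_LIST = {
        "starting position": "e2e4",
        # ... one entry per board configuration that can arise ...
    }

    def novice_with_list(position):
        # Follow the instructions blindly; nothing here analyses the game.
        return KASPAROVS_LIST[position]

    # An "understanding" program, by contrast, would have to compute its
    # move from the rules and an analysis of the position, not retrieve it.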

>Certainly the algorithms employed by a system affect the level of
>"understanding" that the system can display, but the idea that there
>is a "magical" algorithm that represents "true" understanding, while
>all other algorithms represent ignorance is absurd.  Even the idea
>that there is a fuzzy boundary between algorithms which display
>"understanding" and algorithms that do not is highly suspect.  

IMO, there is a fuzzy boundary between the two, and I do not see
at all why this should be highly suspect. People make all kinds of
distinctions between algorithms already, for example between
"brute force" algorithms and "insight" algorithms. Why shouldn't
we be able to say:

"This computer has been programmed to play chess, and this computer
has been programmed to *understand* how to play chess (and in fact
the first computer has beaten Kasparov, but the second computer
has given Kasparov some hints to improve his game next time)."

>To my mind, there can only be a _functional_ definition of 
>"understanding".  How do you know that Kasparov uses a different
>algorithm for playing chess than the computer?  Better yet, how
>do you know that Kasparov "understands" chess -- except by virtue
>of his behavior?

I don't. Before the arrival of computers, the question would never have
been asked; everyone simply assumed that other people thought in the same
way. I can't open up Kasparov's head and have a look at his algorithms,
and until I can, I'll simply choose to give him the benefit of the doubt.
The question has more relevance, though, when applied to computers.
Fundamental problems in artificial intelligence are linked to questions
about understanding and interpretation. For these problems, brute-force
solutions just don't work, yet the human mind solves them without any
difficulty, often in a matter of seconds. IMO an understanding of what
"understanding" is will help us to produce algorithms which solve these
problems. 

>---Paul... (P-K4, P-Q4 -- a Queen's gambit?!) 

Fergal Toomey.

kp@uts.amdahl.com (Ken Presting) (04/08/90)

In article <1990Apr6.144947.11473@maths.tcd.ie> ftoomey@maths.tcd.ie (Fergal Toomey) writes:
>Let me put it like this: suppose the novice
>has brain damage, so that he is incapable of understanding chess, but
>he is capable of carrying out simple instructions. He is also capable
>of speaking English fluently. He is given a long list of instructions
>from Gary Kasparov telling him exactly what move to make in every
>board situation that can possibly arise during a chess game . . .
>. . . You ask the normal player:
>
>"Why do you think you lost?"
>
>"Well, I think I shouldn't have castled on move 8... and maybe I should have
>moved my queen out earlier... "
>
>You ask the novice+list (the novice remember, can speak English fluently),
>
>"Why do you think you won?"
>
>"Well, uhmm, yes, ... er,"

Fergal's improved example *is* much better, but I'd like to strengthen
his point, if I can.

The brain-damaged novice with the list is perhaps more likely to say:

"I won because I got these instructions from Gary Kasparov.  Cost me
 a *fortune*, but it was worth it.  I win every time.  Just don't ask
 me to play without this list."

The novice's explanation could be quite informative, and even rational,
but (by hypothesis) it could not mention much about *chess*, its rules,
possible strategies, and so on.

The everyday test for understanding of a subject is to ask for an
explanation.  This is a practical, functional, empirical procedure.

Take another example.  We all know how to think.  But none of us can
explain how we think.  We would agree, I suppose, that we do not
understand thinking.  Partial explanations exist; these are indicative
of partial understanding.

(Paul McCarthy:)
>>To my mind, there can only be a _functional_ definition of 
>>"understanding".  How do you know that Kasparov uses a different
>>algorithm for playing chess than the computer?  Better yet, how
>>do you know that Kasparov "understands" chess -- except by virtue
>>of his behavior?

(We should distinguish between a definition and a criterion for applying
 a predicate, but I'll let it pass)

The criterion for "being able to play chess" is winning games, or at least
playing without breaking the rules too often.  The criterion for
understanding chess is explaining games, or at least not being
dumbfounded before them.

I think it's possible to give a more precise account of understanding,
but all I've tried to do here is to show that "understanding" is NOT
some bizarre philosopher's daydream, but an everyday useful concept.

Especially useful for AI!


Ken Presting

throopw@sheol.UUCP (Wayne Throop) (04/08/90)

> From: ftoomey@maths.tcd.ie (Fergal Toomey)
> I think you're right in that people do use functional cues to decide
> whether or not their dog, for example, understands something.
> [...] But that doesn't mean that studying your dog's
> behaviour is not an infallible method of determining whether he understands.

(I think that "not" is not intended... that's the only way I can make
 sense of the ensuing examples.)

I agree that behavioral evidence is not infallible.  In fact, I said as
much in my first posting to this particular resuscitation of this subject
line.  I merely claim that it is the only evidence that exists (other
than pure presumption).
--
Wayne Throop <backbone>!mcnc!rti!sheol!throopw or sheol!throopw@rti.rti.org

thornley@cs.umn.edu (David H. Thornley) (04/08/90)

In article <1990Apr6.144947.11473@maths.tcd.ie> ftoomey@maths.tcd.ie (Fergal Toomey) writes:
>In article <1990Apr5.202224.27534@caen.engin.umich.edu>
>zarnuk@caen.engin.umich.edu (Paul Steven Mccarthy) writes:
>
>>You are assuming that "understanding" has some meaning beyond the 
>>information that can be gleaned from observation.  I disagree.  I
>>assert that "understanding" should be defined by functional behavior.
>>
>>In the example that you presented of the novice assisted by Kasparov
>>(the international grand master), I would say that the system being
>>observed: novice+Kasparov did _indeed_ display "understanding" of
>>chess.  Making the knowledge of Kasparov's influence hidden is simply
>>hand-waving.  If the observer was specifically interested in the
>>novice's un-aided "understanding" of chess, then the observer should
>>have taken appropriate measures to insure that the novice was not
>>receiving advice from external sources.  Granted, this cannot be
>>known for certain in every situation, but that is a statement about
>>how easy it is to fool humans -- not about the meaning of "understanding".  
>
>Perhaps I should have chosen a better example. The novice+Kasparov
>idea was originally intended to parallel the computer+programmer idea
>in computer science. If you argue that the system under consideration
>is not just the novice, but the novice *plus* Kasparov, then what can
>we say about an apparently intelligent computer program? Must we say
>that it is not the computer that is intelligent, but the computer
>*plus* its programmer?
>
>Let me put it like this: suppose the novice
>has brain damage, so that he is incapable of understanding chess, but
>he is capable of carrying out simple instructions. He is also capable
>of speaking English fluently. He is given a long list of instructions
>from Gary Kasparov telling him exactly what move to make in every
>board situation that can possibly arise during a chess game (the number
>of possible board configurations is very, very high, but finite).... 

I am starting to wonder if one of the real problems in these discussions is
the high proportion of impossible thought experiments.  The number of possible
board configurations is far too high for any sort of exhaustive search pattern
to be applied within the confines of the known universe.  Certainly it is far
too high for one human to go through.
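
Some rough Python arithmetic makes the scale vivid (the figures are
commonly cited order-of-magnitude estimates, nothing more):

    # Rough orders of magnitude only, using commonly cited estimates.
    legal_positions   = 10**44     # legal chess positions (estimates vary)
    game_tree_size    = 10**120    # Shannon's estimate of the full game tree
    atoms_in_universe = 10**80     # observable universe, roughly

    seconds_in_a_century = 100 * 365 * 24 * 3600
    print(game_tree_size > atoms_in_universe)        # True
    print(legal_positions // seconds_in_a_century)   # list entries per second
                                                     # to finish in a century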

This is also a problem with Searle's Chinese Room arguments.  The man seems to
have no conception of how big a program to simulate human intelligence would
have to be, and how difficult it would be to implement by means of paper and
pencil.  Frankly, I don't think that Searle could perform the symbol
manipulations sufficiently accurately to implement such a program.  Certainly
he could not memorize the program and the data, as he has stated he could do
in the Scientific American article.  

I dislike two consequences of this.  First, I think it is intellectually
sloppy, almost to the point of dishonesty.  If I were arguing about the
country's transportation network, and I questioned the need for intra-city
roads by saying that people could walk at 20 mph, people would lose all
respect for me.  If I argue about artificial intelligence and say I can
implement a human-intelligence simulator mentally, or that I can get Gary
Kasparov to give me written instructions on how to play chess at his level,
I should get quite the same treatment.  Second, it trivializes the problems
involved.  Programming a chess computer or a human-intelligence simulator
is not a small feat; neither task can be duplicated by memorizing lists of
rules or board positions, and neither should be treated as if it could be.

If intelligence were a "trick" or something that could be easily implemented,
AI researchers would have succeeded back in the '50s.  The fact that nobody
has managed to create a machine capable of human intelligence, or one that can
defeat the World Champion at chess reliably, indicates that the problems are
quite difficult.

David Thornley

hougen@cs.umn.edu (Dean Hougen) (04/09/90)

In article <1990Apr6.144947.11473@maths.tcd.ie>, ftoomey@maths.tcd.ie
 (Fergal Toomey)writes:
>In article <1990Apr5.202224.27534@caen.engin.umich.edu>
>zarnuk@caen.engin.umich.edu (Paul Steven Mccarthy) writes:

>>You are assuming that "understanding" has some meaning beyond the 
>>information that can be gleaned from observation.  I disagree.  I
>>assert that "understanding" should be defined by functional behavior.
>>
>>In the example that you presented of the novice assisted by Kasparov
>>(the international grand master), I would say that the system being
>>observed: novice+Kasparov did _indeed_ display "understanding" of
>>chess. 

>Perhaps I should have chosen a better example. The novice+Kasparov
>idea was originally intended to parallel the computer+programmer idea
>in computer science. If you argue that the system under consideration
>is not just the novice, but the novice *plus* Kasparov, then what can
>we say about an apparently intelligent computer program? Must we say
>that it is not the computer that is intelligent, but the computer
>*plus* its programmer?

The fact that you present a revision of the analogy (below) indicates to
me that you do not believe your novice+Kasparov analogy (above) to be
adequate to answer this question in the positive.  Neither do I.  The
fact that in the one case the novice is exactly a conduit for Kasparov
(note that he can be removed from the system without affecting
performance) should be sufficient to tell us that the analogy (above) is
fatally flawed.

>Let me put it like this: suppose the novice
>has brain damage, so that he is incapable of understanding chess, but
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^      
You're cheating.  "We assume that he can't understand chess but he can
play chess, therefore we know that we shouldn't give 'understanding' a
functional definition."  You are assuming what you wish to prove.  We
need to define what the novice can and can't do *without* reference to
whether or not he 'understands' and *then* try to determine if we can
apply the term 'understanding' to him.  (Analogy:  Assume my cat can't
cook but he can create terrific meals, therefore we know that we 
shouldn't give 'cooking' a functional definition.  But if we are 
trying to determine the definition of 'cooking', then to *assume* that
he can't do it is premature.)

Perhaps we should rephrase this as "traditional methods to teach him
chess have failed" or something similar.

>he is capable of carrying out simple instructions. He is also capable
>of speaking English fluently. He is given a long list of instructions
>from Gary Kasparov telling him exactly what move to make in every
>board situation that can possibly arise during a chess game (the number
>of possible board configurations is very, very high, but finite). You
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^
Too high for the list to ever be created but this is a thought experiment
so we can give him this list for purposes of our investigation anyway.  
But why do you feel the need to mention that it is finite?

>now start playing chess with the novice, as well as with one other, normal,
>average chess player. You are, of course, beaten by the novice+list system,
>but you beat the normal, average, player. You conclude that the novice+list
>system has a better understanding of the game than the normal player. You
                     ^^^^^^^^^^^^^^^^^^^^^^^^^
I conclude that the novice+list system has a better *understanding of how to
play the game*.  I don't assume that the novice+list system has a better
understanding of all aspects of the game such as historical origins, etc.

>ask the normal player:
>"Why do you think you lost?"
>"Well, I think I shouldn't have castled on move 8... and maybe I should have
>moved my queen out earlier... "

>You ask the novice+list (the novice remember, can speak English fluently),
>"Why do you think you won?"
>"Well, uhmm, yes, ... er,"

You ask the normal player:
"Who was the world chess champion in 1962?"
"Well, uhmm, yes, ... er,"

>My point is that the system "novice+list" has no understanding of the
>game, although it can beat you every time. 

My point is that the normal player has *no understanding* of the game,
although he can tell you why he lost.  You may argue that knowing the
name of the world chess champion in 1962 is not all there is to
understanding chess; that the normal player can still have a partial
understanding of the game without being able to answer this question.
I might argue that the same is true for the novice+list system in
relation to the question that you posed to it.  How did you come to
the conclusion that being able to explain why you won is *all there 
is to understanding chess*?  Why should I buy this?  Why should I not
maintain that the novice+list system has a partial understanding of
the game (precisely that it understands how to play chess) without
being able to answer your question?

>The list corresponds to
>a computer program, the novice to a computer. Saying that we must consider
>the "Kasparov+novice+list" system instead is equivalent to saying that we
>must always consider the "Programmer+computer+program" system instead of
>the "computer+program" system. 

I agree that we need not look at the Kasparov+novice+list system.  Unlike
in your earlier version of the analogy this novice is not redundant.  In
this version it is the novice operating on the list that allows Kasparov
to go about his life without having to be actively involved in the novice's
chess games.  I simply maintain that the novice+list system understands 
(how to play) chess.

>This argument is demonstably false, in
>that chess-playing programs have been written which consistently beat
>their authors. 

This does not demonstrate the falsehood of considering the programmer-
computer-program system instead of the computer-program system.  After
all, the programmer-computer-program system is different from the
programmer-alone system, and so the performance of the two is likely to be
different.

>Clearly these programs were not simply programmed to play
>"brute-force" chess, they were deliberated programmed to analyse the
>game, to think up new moves, ... to "understand". Unfortunately,
>there is no way of distinguishing between a "brute-force" program
>and an "understanding" program if you consider only their behaviour:
>you have to look at the algorithms themselves.

Assume this is true.  Why is it unfortunate that we cannot distinguish
between the "'understanding' program" and the brute-force program on 
the basis of their behavior?  Obviously you wish to say that the 
"'understanding' program" understands, but the brute-force program 
does not.  But simply putting the 'understanding' label on the one and
not the other does not mean that the one actually does understand or
that the other does not.  In fact, let's use your own analogy here.
We write a speech program for the computer running the "'understanding'
program".  It can now speak fluently in English.  You play a game of
chess with it.  

You ask it, "Why do you think you won/lost/drew?"
"Well, uhmm, yes, ... er,"  

My point is this new system has *no understanding of the game*, although
sometimes it beats you.  You may argue that being able to explain why
you won/lost/drew is not all there is to understanding chess; that this
system can still have a partial understanding of the game without being
able to answer this question.  In this case you agree with my point above,
that we have yet to find a reason for saying that the novice+list doesn't
understand and therefore have found no reason not to define 'understanding'
by functional behavior.

Or you may agree with me that this system has no understanding; that
it is just like the brute-force system - in direct contradiction to your
assertion above that it is unfortunate that we cannot distinguish between
the two.  In this case we agree that this discussion has yet to find a
reason for not defining 'understanding' by functional behavior. 

>>Certainly the algorithms employed by a system affect the level of
>>"understanding" that the system can display, but the idea that there
>>is a "magical" algorithm that represents "true" understanding, while
>>all other algorithms represent ignorance is absurd.  Even the idea
>>that there is a fuzzy boundary between algorithms which display
>>"understanding" and algorithms that do not is highly suspect.  
>
>IMO, there is a fuzzy boundary between the two, and I do not see
>at all why this should be highly suspect. People make all kinds of
>distinctions between algorithms already, for example between
>"brute force" algorithms and "insight" algorithms.

That people do this does not make it correct.

>Why shouldn't we be able to say:
>
>"This computer has been programmed to play chess, and this computer
>has been programmed to *understand* how to play chess (and in fact
>the first computer has beaten Kasparov, but the second computer
>has given Kasparov some hints to improve his game next time)."

Do you realize that you just asked us to believe that the second
computer has been programmed to *understand* chess based on its
functional behavior?  Would a brute-force chess-hint machine (one
which contains a list of all possible games and a hint associated with
each one) understand chess?  Is it necessary to take your strategy
a step further and ask the hint-machine, "Why do you think you came
up with that hint?"  "Well, uhmm, yes, ... er,"  That hint-machine
has *no understanding of the game*, although it can give advice on
how to improve your game.  The fact is that your strategy assumes
that one *can* use functional behavior to determine whether or not
a system understands, and you are using this strategy to try to 
argue that 'understanding' should *not* be defined by functional
behavior.  Note also that recursive application of this strategy
to people will prove that we have *no understanding of the game*
either.  Perhaps I should point out that by stepping back and
asking the system "Why do you think you won/lost?" you were looking
for understanding of a different sort.  The original question was,
"Can you tell by watching it play whether a system understands chess
*strategy*?" although the word 'strategy' was left unspoken.  You
changed the question to, "Can you tell by watching it play whether
a system understands how to learn from its mistakes at chess?"  In
normal human chess players these go hand in hand - this is how we
acquire our understanding of strategy.  But if a system does not get
its strategy in the normal way will you argue that it has no
strategy at all?  Is its play then random?

>>To my mind, there can only be a _functional_ definition of 
>>"understanding".  How do you know that Kasparov uses a different
>>algorithm for playing chess than the computer?  Better yet, how
>>do you know that Kasparov "understands" chess -- except by virtue
>>of his behavior?
>
>I don't. 

In fact, you seemed not to flinch at the idea of Kasparov providing
the list of all possible board configurations.  He understands chess
therefore he would have/could create such a list?  Hmmm.

>Before the arrival of computers, the question would never have
>been asked, everyone simply assumed that other people thought in the same
>way. 

The problem of other minds has a slightly longer history than you think.

>I can't open up Kasparov's head and have a look at his algorithms,
>and until I can, I'll simply choose to give him the benefit of the doubt.
>The question has more relevance, though, when applied to computers.
>Fundamental problems in artificial intelligence are linked to questions
>about understanding and interpretation. For these problems, brute-force
>solutions just don't work, yet the human mind solves them without any
>difficulty, often in a matter of seconds. IMO an understanding of what
>"understanding" is will help us to produce algorithms which solve these
>problems. 

No one is forcing you to use brute-force methods or trying to stop you
from understanding how humans understand.  But, when you do create an
artificial intelligence, don't be surprised when someone peeks inside
and says, "I don't think like that, your system doesn't understand, I
don't care what it can do."

>>---Paul... (P-K4, P-Q4 -- a Queen's gambit?!) 

>Fergal Toomey.

Dean Hougen
--
"god save the queen, she ain't no human bein'."  - the Sex Pistols

hougen@cs.umn.edu (Dean Hougen) (04/09/90)

In article <7cn102fg9ahA01@amdahl.uts.amdahl.com>, kp@uts.amdahl.com
 (Ken Presting) writes:
>In article <1990Apr6.144947.11473@maths.tcd.ie> ftoomey@maths.tcd.ie (Fergal Toomey) writes:
>>Let me put it like this: suppose the novice
>>has brain damage, so that he is incapable of understanding chess, but
>>he is capable of carrying out simple instructions. He is also capable
>>of speaking English fluently. He is given a long list of instructions
>>from Gary Kasparov telling him exactly what move to make in every
>>board situation that can possibly arise during a chess game . . .
[stuff deleted]
>The brain-damaged novice with the list is perhaps more likely to say:

>"I won because I got these instructions from Gary Kasparov.  Cost me

>The novice's explanation could be quite informative, and even rational,
>but (by hypothesis) it could not mention much about *chess*, its rules,
>possible strategies, and so on.

>The everyday test for understanding of a subject is to ask for an
>explanation.  This is a practical, functional, empirical procedure.
                                    ^^^^^^^^^^
So you are arguing directly against Fergal's point which was that 
understanding should *not* be defined by functional behavior?

Also, note that being an everyday test carries no weight.  If you go
to a math instructor and tell her that the reason you did badly on your
exam was that you were distracted by a personal crisis, were ill, etc.,
she is likely (if she gives you a break at all) to give you a retest,
*not* necessarily to ask you what you did wrong on the first test.  So
what?  Does that make simple observation of a system's chess playing
the right procedure to determine its understanding?

[stuff deleted]

>The criterion for "being able to play chess" is winning games, or at least
>playing without breaking the rules too often.  The criterion for
>understanding chess is explaining games, or at least not being
>dumbfounded before them.

The criterion for understanding how to apply chess strategy is winning
games, or ...  The criterion for understanding how to formulate chess
strategy is explaining games, or ...

Again, you are arguing against Fergal's point (as was he, btw) while saying
you are trying to back him up.  Functional behavior looks like the way
to go on this understanding thing, no?

>I think it's possible to give a more precise account of understanding,
>but all I've tried to do here is to show that "understanding" is NOT
>some bizarre philosopher's daydream, but an everyday useful concept.

>Especially useful for AI!

True.  It is useful to look at functional behavior.  Or are you trying
to say something else?

>Ken Presting

Dean Hougen
--
"It only makes me laugh."  - Oingo Boingo

cs4g6at@maccs.dcss.mcmaster.ca (Shelley CP) (04/09/90)

I have been somewhat underwhelmed by the infamous Chinese Room Problem
which I have heard so much about.  The recent Scientific American article
which carried a short discussion of it, stated that  the combination of 
room + operator + rulebook 'obviously' didn't "understand" Chinese despite
behaving as if it did because our (the reader's) priviledged viewpoint
allowed us to see the absurdity of the claim.

I frankly see nothing absurd in it at all!  The observers of the room were
quite correct to conclude that the room understood Chinese (given the 
conditions stated).  The problem, as I see it, comes from resorting to 
the god's-eye-view in order to make a rebuttal.  We will *never* have such
a privilege in real life (to assume otherwise is questionable), so why
accept such an argument in a Gedankenexperiment?

To resort to the chess-playing example: in order to determine if both/either
of the computer and human (whoever he is) actually knows chess, the observer 
will need to "understand" chess himself!  His decision on who really knows
chess will be self-referentially based on his own knowledge of the game.

Ultimately, one necessarily requires intelligence to evaluate intelligence.
How do we test our own intelligence?  Cogito intellegere, ergo intellego?
(What does this say about my Latin? :)  I have heard of no "Von Neumann"
axiomatization that gets around this self-referential problem.  The
god's-eye point of view runs into something like Goedel's incompleteness:
how many metas are too many - i.e. who determines if god is intelligent?

I think the problem is likely to turn out as undecidable.  Perhaps the only
way to determine if a computer program will ever be "intelligent" is to 
wait until one asks us if we are "intelligent"!

PS.  I attended a lecture a couple of years ago given by Dr. Putnam and 
although I don't recall any details, I don't remember being very convinced.
I felt very much that one or both of us did not understand what he was 
talking about!  Is one of us not "intelligent" on the subject of computer
intelligence? :>

				Cameron Shelley

-- 
******************************************************************************
* Cameron Shelley   *    Return Path: cs4g6at@maccs.dcss.mcmaster.ca         *
******************************************************************************
*  /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\ *

hougen@cs.umn.edu (Dean Hougen) (04/09/90)

In article <1990Apr8.160030.1988@cs.umn.edu>, thornley@cs.umn.edu
 (David H. Thornley) writes:
>In article <1990Apr6.144947.11473@maths.tcd.ie> ftoomey@maths.tcd.ie (Fergal Toomey) writes:
>>Let me put it like this: suppose the novice
>>has brain damage, so that he is incapable of understanding chess, but
>>he is capable of carrying out simple instructions. He is also capable
>>of speaking English fluently. He is given a long list of instructions
>>from Gary Kasparov telling him exactly what move to make in every
>>board situation that can possibly arise during a chess game (the number
>>of possible board configurations is very, very high, but finite).... 
>
>I am starting to wonder if one of the real problems in these discussions is
>the high proportion of impossible thought experiments.  The number of possible
>board configurations is far too high for any sort of exhaustive search pattern
>to be applied within the confines of the known universe.  Certainly it is far
>too high for one human to go through.
>
>This is also a problem with Searle's Chinese Room arguments.  The man seems to
>have no conception of how big a program to simulate human intelligence would
>have to be, and how difficult it would be to implement by means of paper and
>pencil.  Frankly, I don't think that Searle could perform the symbol
>manipulations sufficiently accurately to implement such a program.  Certainly
>he could not memorize the program and the data, as he has stated he could do
>in the Scientific American article.  

My assessment of Searle is less gracious.  I believe that he *intentionally*
trivializes the problem to win a psychological edge in the argument.

>I dislike two consequences of this.  First, I think it is intellectually
>sloppy, almost to the point of dishonesty.  If I were arguing about the
>country's transportation network, and I questioned the need for intra-city
>roads by saying that people could walk at 20 mph, people would lose all
>respect for me.  If I argue about artificial intelligence and say I can
>implement a human-intelligence simulator mentally, or that I can get Gary
>Kasparov to give me written instructions on how to play chess at his level,
>I should get quite the same treatment.

If you are asking for funding to actually try this crap, of course people
should laugh at you.  But there is a long and proud history of the use of
thought experiments in philosophy, and the use here is not out of line.
Here we are not proposing actual solutions to actual problems, but rather
asking how our concepts fit to unreal situations in order to get a better
handle on our concepts.  Although particular examples can get 
intellectually sloppy (perhaps this one, no offense to any participant
intended) or dishonest (e.g. Searle's Chinese Room), many are neither of
these, instead being well-reasoned arguments by some of the great thinkers
throughout the centuries.  Some of them have actually been useful too!  :)

>Second, it trivializes the problems
>involved.  Programming a chess computer or a human-intelligence simulator
>is not a small feat, cannot be duplicated by memorizing lists of rules or
>board positions, and shouldn't be treated as if they can.

I agree with you here.  It is all too easy to allow the problem being
solved to be trivialized.  I think that it is possible to use these 
thought experiments without allowing oneself to be distracted by their
psychological side, but this is not easy and one must be constantly on
guard.  In Fergal's defense, he did say that the number of configurations
is "very, very high," which is putting it mildly, but is not openly
dishonest like Searle's use of "scraps of paper", etc. in the Chinese Room.

Returning to the chess example, if you were to approach the typical chess
player and comment, "Ya know, I don't think Kasparov understands chess,"
you would likely get, "But he's the world chess champion," or "Hey, he
beat Karpov, didn't he?" or something similar as a reply, not "He can 
explain why he lost the second game of that match, can't he?" or anything
along those lines.  So why has explaining why one won or lost been 
proposed (here and elsewhere) as a criterion for understanding chess?
Perhaps it is the psychological impact of imagining a brain-damaged
novice with a list (or, god forbid, a machine) understanding chess.  "I
know understanding chess is hard, and if he (it) is playing chess well,
then playing chess well must not be what is meant by understanding chess.
Let's see, what else could it mean to understand chess?  ... "

I seem to get this impression in Ken's article (message-id: 
<7cn102fg9ahA01@amdahl.uts.amdahl.com>) where playing chess is reduced 
to "being able to" do something, and understanding chess is redefined to
be a level above(?) this.  When the novice+list system started winning
games that was no longer enough for understanding chess.  And if we
imagine the hint-machine I wrote about in an earlier article (yesterday),
I suppose that explaining chess games will not seem like so much
understanding after all.  The hint-machine will only "be able to" give
hints/explain games, it won't *understand chess*.  I don't expect Ken
to follow to this position, but I would ask him to consider how he got
where he is.  The same for Fergal.

To look at a more real-world example, suppose one of these days I get
up from my desk in the new EE/CSci building, walk out the front doors
and across Washington Avenue to the Health Sciences complex, and spend
some time carefully observing medical diagnosis taking place.  I then ask
some of the doctors how they go about making diagnoses.  I might just find
what many people who have actually done this sort of thing (in order to
construct expert systems) have found: some experts actually *can't* tell
you how they do what they do.  Some, in fact, will make up partially false
explanations in order to cover the fact that they cannot give adequate
explanations! 

But if to understand medical diagnosis means to be able to give
explanations of how such diagnoses are made, then these doctors don't
understand medical diagnosis!  For some reason, perhaps the psychological
reason mentioned above, some people have twisted the meaning of the word
'understand' so far that we can now say with a straight face that a
human expert acting as an expert within his own field of expertise
doesn't understand his own field of expertise!  Seems we have gone
astray somewhere.  Perhaps it is time to give serious consideration to
whether, just perhaps, the novice+list *does* understand chess.

>If intelligence were a "trick" or something that could be easily implemented,
>AI researchers would have succeeded back in the '50s.  The fact that nobody
>has managed to create a machine capable of human intelligence, or one that can
>defeat the World Champion at chess reliably, indicates that the problems are
>quite difficult.
>
>David Thornley

I agree that intelligence is no "trick" and that it cannot be easily 
implemented, but the fact that no one has succeeded yet with the full
thing is hardly a proof of this.  Getting a machine to play chess the
way people do is probably quite hard, but brute-force may well get us
a World Champion in the form of a machine with the simple addition of
speed.  The question is, "Would this champion understand chess?"  I
think the answer might just be yes.

Dean Hougen
--
"I'm on the outside now."  - Oingo Boingo

kp@uts.amdahl.com (Ken Presting) (04/10/90)

In article <1990Apr8.194925.17551@cs.umn.edu> hougen@cs.umn.edu (Dean Hougen) writes:
>In article <7cn102fg9ahA01@amdahl.uts.amdahl.com>, kp@uts.amdahl.com
> (Ken Presting) writes:
>>The everyday test for understanding of a subject is to ask for an
>>explanation.  This is a practical, functional, empirical procedure.
>                                    ^^^^^^^^^^
>So you are arguing directly against Fergal's point which was that 
>understanding should *not* be defined by functional behavior?

Not exactly.  I think that whenever an objective, functional ("operational"
is perhaps a better word) procedure can be given for applying some
description to a phenomenon, so much the better.  The question of
whether such a procedure constitutes a *definition* of a term, or
whether every term must have such a procedure associated with it, is a
much bigger issue.  I think we can draw many interesting conclusions from
the CR without going into that mess, so I will just note the existence
of the mess, and drop the issue!

>
>>The criterion for "being able to play chess" is winning games, or at least
>>playing without breaking the rules too often.  The criterion for
>>understanding chess is explaining games, or at least not being
>>dumbfounded before them.
>
>The criterion for understanding how to apply chess strategy is winning
>games, or ...  The criterion for understanding how to formulate chess
>strategy is explaining games, or ...
>
>Again, you are arguing against Fregal's point (as was he, btw) but saying
>you are trying to back him up.  Functional behavior looks like the way
>to go on this understanding thing, no?

I think we should avoid arguing about understanding as much as we can.
I certainly don't want to claim that understanding is undefinable, or
that it is irrelevant to AI.  I would not expect much success from any
attempt to define "understanding" in functional or operational terms.
The everyday operational criteria for deciding whether someone
understands are not completely satisfactory, even for everyday purposes.

I have posted a discussion of the information content of a cross-
compiler, which I think provides a practical and (potentially) formal
framework for stating a conclusion which is directly analogous to the
CR.  That is the way to go, I think.  Nobody could learn the semantics
of a programming language from studying the machine code of a cross-
compiler.  This is what the CR boils down to.  Not much, but not
nothing!


Ken Presting

cs4g6at@maccs.dcss.mcmaster.ca (Shelley CP) (04/10/90)

Having just read a lengthy article (too long to include) on the 
Chinese Room / chessmaster + novice problem, I feel compelled to 
raise a couple of questions.

Firstly, it was stated that "we'd like to distinguish between the 
brute-force and human-imitative algorithms because the first is 
'obviously' not intelligent and the second could be" (paraphrase).
This argument makes the (to my mind) unwarranted assumption that
intelligent means 'designed to work like us'.  Why should the 
brute-force algorithm be considered unintelligent?

The suggestion was made that they should be distinguished by their
ability to give "intelligent" explanations.  I assume the explanations
would look like: "I generated the entire game tree and found that 
this move leads to a win for me" for the brute-force approach, and
"I have often found that this move under these circumstances leads
to a major improvement in my position" for the heuristic player.  If
I had the ability to go through the entire game tree (without pissing
off my opponent) and see a sure win, why would that be unintelligent?
The chess-novice with Kasparov's list would say, "Well, er, I don't
know much about chess, but I have this list here which tells me how
to win, so I used it!"
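
For what it's worth, "generating the entire game tree" is a perfectly
definite procedure.  Here is a small Python sketch of it, necessarily for
a toy game (take 1-3 counters from a pile; whoever takes the last counter
wins), since chess's tree is hopelessly large:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def value(pile):
        # +1 if the player to move can force a win, -1 otherwise.
        if pile == 0:
            return -1      # the previous player took the last counter and won
        return max(-value(pile - take) for take in (1, 2, 3) if take <= pile)

    def best_move(pile):
        # "I generated the entire game tree and found this move leads to a win."
        for take in (1, 2, 3):
            if take <= pile and -value(pile - take) == 1:
                return take
        return None        # every move loses against best play

    print(best_move(10))   # 2 -- the heuristic player would instead say
                           # "always leave your opponent a multiple of 4"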

A statement was made that "we wish to find the heuristic approach
intelligent over the brute-force one" (paraphrase).  We 'wish' this
because our preconceptions about intelligence include a foggy notion
of 'elegance', but there is *no* a priori guarantee that elegance has
anything to do with smarts!

I recall a novel called "Code of the Lifemaker" by James P. Hogan in
which a race of intelligent robots evolved out of the robot crew of 
a crashed ship.  The details are unnecessary here.  However, when these
robots were discovered by man (on Titan?) they were living in a 
society much like that of medieval Italy.  One of the human engineers
remarked something like "Wow, think of the robotic techniques they could
teach us!"  As was pointed out to the engineer, the robots could be
quite ignorant of mechanics (and were), in the same way we humans aren't
born bio-scientists just because we're organic!  The point I'm trying
to make is that even assuming humans are intelligent, that does not
imply we *know* anything about it, ie. what are its component parts, how
could we improve it, etc...? 

As I have tried to point out before, since confirming intelligence 
itself requires intelligence there will never be a set of axioms or
principles which can decide the 'intelligence' problem for all cases.
I have no doubt that guidelines on the subject will be emerging in the 
course of AI research, or that the quest for such guidelines is 
worthwhile.

I apologize if my tone is too polemic, but I am interested in this 
topic and have a tendency to play devil's advocate.  I would *greatly*
appreciate anyone taking the time to respond and/or disagree as I 
find there are often things to learn in such discussions - and group
scrutiny of opinions prevents them from becoming fossilized 
superstitions!

 
-- 
******************************************************************************
* Cameron Shelley   *    Return Path: cs4g6at@maccs.dcss.mcmaster.ca         *
******************************************************************************
*  /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\ *

ftoomey@maths.tcd.ie (Fergal Toomey) (04/10/90)

In article <1990Apr8.191524.6565@cs.umn.edu> hougen@cs.umn.edu
(Dean Hougen) writes:

>In article <1990Apr6.144947.11473@maths.tcd.ie>, ftoomey@maths.tcd.ie
> (Fergal Toomey)writes:
>>In article <1990Apr5.202224.27534@caen.engin.umich.edu>
>>zarnuk@caen.engin.umich.edu (Paul Steven Mccarthy) writes:

>>Perhaps I should have chosen a better example. The novice+Kasparov
>>idea was originally intended to parallel the computer+programmer idea
>>in computer science. If you argue that the system under consideration
>>is not just the novice, but the novice *plus* Kasparov, then what can
>>we say about an apparently intelligent computer program? Must we say
>>that it is not the computer that is intelligent, but the computer
>>*plus* its programmer?
>
>The fact that you present a revision of the analogy (below) indicates to
>me that you do not believe that your novice+Kasparov analogy (above) to 
>be adequate to answer this question in the positive.  Neither do I.  The
>fact that in the one case the novice is exactly a conduit for Kasparov
>(note that he can be removed from the system without effecting 
>performance) should be sufficient to tell us that the analogy (above) is
>fataly flawed.

I believe that my original analogy was flawed in that it was too open to
misinterpretation. Paul found an apparent way out of my argument by saying
that we must consider the Kasparov + novice system instead of the novice
alone. My second analogy was supposed to demonstrate that we can't do this:
we *must* consider the novice alone. If we require that Kasparov and
the novice be considered together (and this is clear, I hope, in the
second example) then we are led to the conclusion that understanding is
impossible for a non-human. We are forced to say, when faced with an
intelligent computer, that "this computer *plus* its programmer is a system
capable of understanding." I reject this view of things for the following
reason: faced with a chess playing computer we do not say: "this computer plus
its programmer is playing chess"; nor should we say, when faced with a
computer which has been programmed to analyse and understand chess,
that "this computer plus its programmer is analysing and understanding
chess". We have to consider the computer (and its program) alone; similarly
in the chess game, we consider the novice (plus his list) alone, and leave
Kasparov out of it.

>>Let me put it like this: suppose the novice
>>has brain damage, so that he is incapable of understanding chess, but
>                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^      
>You're cheating. "We assume that he can't understand chess but he can
>play chess, therefore we know that we shouldn't give 'understanding' a
>functional definition."

No. My argument is as follows: I assume that the novice can't possibly
understand chess. Then I show that he can play chess, using Kasparov's
list. Conclusion: understanding != playing. The argument does rest, however,
on the assumption that it is possible to follow simple instructions
without understanding anything about what you're doing. If it is not
possible to construct an instruction-following, non-understanding device,
then clearly the poor brain-damaged novice cannot be constructed, and so
the argument falls through. I know for a fact, however, that it *is*
possible to follow instructions while understanding nothing. I have 
done precisely this many times in my life, for example in constructing
my first electronic kit. I understood nothing about electronics except
how to use a soldering iron, yet the completed circuit worked.

Therefore, I can rephrase my argument:
It is possible to construct a thing which follows instructions yet
understands nothing, therefore,
It is possible to construct a thing which plays chess as well as
Gary Kasparov and yet does not understand chess.


>>he is capable of carrying out simple instructions. He is also capable
>>of speaking English fluently. He is given a long list of instructions
>>from Gary Kasparov telling him exactly what move to make in every
>>board situation that can possibly arise during a chess game (the number
>>of possible board configurations is very, very high, but finite). You
>                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^
>Too high for the list to ever be created but this is a thought experiment
>so we can give him this list for purposes of our investigation anyway.  
>But why do you feel the need to mention that it is finite?

If it were infinitely long, it would clearly take an infinite length of
time to produce, in which case my argument would fall through, since it
would not be possible, using this method, to construct a thing capable
of playing chess as well as Gary Kasparov.

>>now start playing chess with the novice, as well as with one other, normal,
>>average chess player. You are, of course, beaten by the novice+list system,
>>but you beat the normal, average, player. You conclude that the novice+list
>>system has a better understanding of the game than the normal player. You
>                     ^^^^^^^^^^^^^^^^^^^^^^^^^
>I conclude that the novice+list system has a better *understanding of how to
>play the game*.  I don't assume that the novice+list system has a better
>understanding of all aspects of the game such as historical origins, etc.

Well, I was considering only the strategy aspect. You say that the novice+list
has a better understanding of this aspect than the normal player has. I
think you're confusing knowledge with understanding. Certainly, all the
information you need to play brilliant chess is contained in the list, but
it is stored in a form that is not accessible to the novice's mind in
a way that would allow him to understand it. Similarly, you could learn
a lot about aerodynamics from studying aeroplanes, but that doesn't mean
that aeroplanes understand aerodynamics, even though they can fly a lot
better than I can.

>My point is that the normal player has *no understanding* of the game,
>although he can tell you why he lost.  You may argue that knowing the
>name of the world chess champion in 1962 is not all there is to
>understanding chess; that the normal player can still have a partial
>understanding of the game without being able to answer this question.
>I might argue that the same is true for the novice+list system in
>relation to the question that you posed to it.  How did you come to
>the conclusion that being able to explain why you won is *all there 
>is to understanding chess*?  Why should I buy this?  Why should I not
>maintain that the novice+list system has a partial understanding of
>the game (precisely that it understands how to play chess) without
>being able to answer your question?

My requirement that the novice explain
how he won is, btw, irrelevant to the argument (I've noticed a lot of
people arguing about this point). I was simply pointing out that
although the behaviour of the novice gave rise to the belief that he
understood chess, examination of his algorithm reveals that to accord
him an "understanding" of the game would be to stretch the meaning of
the word beyond permissible limits. If you imagine yourself carrying
out the actions of the novice you instinctively know that you would not
have an understanding of chess, you would simply be following instructions.

When you want to examine another person's algorithm, the only way to
do it is to ask them to explain. That's why we ask the novice to explain
how he won. If the novice were a computer, of course, we could
just look at his program. Immediately we realise that the novice has no
understanding of chess, since we know that if we ourselves were to simply
carry out instructions which tell us how to play, but do not tell us how
to understand the game, then we would understand nothing. If you want a
computer to understand chess, you must tell it how to understand chess.
Telling it how to play good chess does not guarantee an understanding. A
computer programmed to play chess will not understand chess, in exactly
the same way that our brains, which are programmed to understand things
like chess, cannot, apparently, understand "understanding" :-)

Programmed to play chess,	can't understand how to play chess.
Programmed to understand chess,	can't understand how to understand chess.

Since your article is so long, I'll have to respond to the rest of it
(which brings up yet more good points) in another posting.

Fergal Toomey.

ftoomey@maths.tcd.ie (Fergal Toomey) (04/10/90)

[This is my response to the second half of Dean's article]

In article <1990Apr8.191524.6565@cs.umn.edu> hougen@cs.umn.edu
(Dean Hougen) writes:

>Why is it unfortunate that we cannot distinguish
>between the "'understanding' program" and the brute-force program on 
>the basis of their behavior?  Obviously you wish to say that the 
>"'understanding' program" understands, but the brute-force program 
>does not.  But simply putting the 'understanding' label on the one and
>not the other does not mean that the one actually does understand or
>that the other does not.

I'm not simply putting on a label. I'm arguing that there is some
yet-to-be-understood difference between the programs which distinguishes
the two. I'm saying that one understands and the other does not by virtue
of some algorithmic difference.

>In fact, let's use your own analogy here.
>We write a speech program for the computer running the "'understanding'
>program".  It can now speek fluently in English.  You play a game of
>chess with it.  
>
>You ask it, "Why do you think you won/lost/drew?"
>"Well, uhmm, yes, ... er,"  
>
>My point is this new system has *no understanding of the game*, although
>sometimes it beats you.  You may argue that being able to explain why
>you won/lost/drew is not all there is to understanding chess; that this
>system can still have a partial understanding of the game without being
>able to answer this question.  In this case you agree with my point above,
>that we have yet to find a reason for saying that the novice+list doesn't
>understand and therefore have found no reason not to define 'understanding'
>by functional behavior.

Assuming that the program really does understand chess, and that it can
speak, then it will be able to explain its game. I can understand chess
to some extent, and I know that when I lose or win, I can usually pick
out some defect in either my opponent's or my own strategy, and I can do
this by virtue of my ability to understand chess strategy. Therefore,
when I speak of an "understanding" program, I mean one which is capable
of analysing a game and picking out reasons for victory/defeat.

>Or you may agree with me that this system has no understanding; that
>it is just like the brute-force system - in direct contradiction to your
>assertion above that it is unfortunate that we cannot distinguish between
>the two.  In this case we agree that this discussion has yet to find a
>reason for not defining 'understanding' by functional behavior. 

I don't agree with you, but it is a sticky point. In a later post, I will
give a new, in my opinion much stronger, argument for my position.

>>IMO, there is a fuzzy boundary between the two, and I do not see
>>at all why this should be highly suspect. People make all kinds of
>>distinctions between algorithms already, for example between
>>"brute force" algorithms and "insight" algorithms.
>
>That people do this does not make it correct.

"Correct" doesn't apply here. People do this because it is useful. But
again, if you ignore the amount of time and resources required by a
brute-force algorithm and an insight algorithm, the two are
indistinguishable. This is getting off the point anyhow, since I
don't think insight algorithms have much to do with "understanding".
(Sorry for bringing it up).
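
Off the point or not, here is a toy example of that indistinguishability
(my own, and nothing to do with chess): two procedures whose outputs
agree on every input, so that only the resources they consume can tell
them apart.

# Sum of 1 + 2 + ... + n, computed two ways.  The answers are always
# identical; only the running time separates "brute force" from
# "insight".

def sum_brute_force(n):
    total = 0
    for i in range(1, n + 1):    # enumerate every term
        total += i
    return total

def sum_insight(n):
    return n * (n + 1) // 2      # Gauss's closed form

assert sum_brute_force(10000) == sum_insight(10000)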

>>Why shouldn't we be able to say:
>>
>>"This computer has been programmed to play chess, and this computer
>>has been programmed to *understand* how to play chess (and in fact
>>the first computer has beaten Kasparov, but the second computer
>>has given Kasparov some hints to improve his game next time)."
>
>Do you realize that you just asked us to believe that the second
>computer has been programmed to *understand* chess based on its
>functional behavior?  Would a brute-force chess-hint machine (one
>which contains a list of all possible games and a hint associated with
>each one) understand chess?

I mentioned the hint-giving abilities of the "understanding" machine
only to highlight the difference between the two. If the "understanding"
machine were designed to use its own hints to play chess, rather than
to advise Kasparov, then there would be no way of telling the difference
between the two machines without looking at their algorithms. Giving
hints to Kasparov constitutes a kind of meta-behaviour which will
always distinguish the "understanding" machine from the "brute-force"
model.
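
Schematically (the class names and the canned "analysis" below are
invented placeholders, not a claim about how either machine would
really work): the two machines present identical playing behaviour,
and only the extra hint-giving interface of the second betrays the
difference.

# Identical play() behaviour; only the second machine can also be
# asked for a hint.  The bodies are stand-ins -- what matters is which
# questions each machine can be asked, not how it answers them.

class BruteForcePlayer:
    def __init__(self, table):
        self.table = table            # position -> move, as in the list
    def play(self, position):
        return self.table[position]

class UnderstandingPlayer:
    def play(self, position):
        move, _reason = self._analyse(position)
        return move
    def hint(self, position):
        move, reason = self._analyse(position)
        return "Consider %s: %s" % (move, reason)
    def _analyse(self, position):
        # placeholder for genuine analysis of the position
        return ("Ng1-f3", "it develops a piece and fights for the centre")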


>Is it necessary to take your strategy
>a step further and ask the hint-machine, "Why do you think you came
>up with that hint?"  "Well, uhmm, yes, ... er,"  That hint-machine
>has *no understanding of the game*, although it can give advice on
>how to improve your game.

But, when you ask for an explanation of the hints themselves, you are
in effect asking the machine to understand how it understands chess.
This it cannot do, although if you believed that understanding is
defined by behaviour, you would expect it to be able to do this. The
hint-machine has *no understanding of how it understands the game*.
Similarly, I understand some mathematics, but I don't understand
how I understand maths. If I did, I'd have made a fortune selling
intelligent computers.

>>>To my mind, there can only be a _functional_ definition of 
>>>"understanding".  How do you know that Kasparov uses a different
>>>algorithm for playing chess than the computer?  Better yet, how
>>>do you know that Kasparov "understands" chess -- except by virtue
>>>of his behavior?
>>
>>I don't. 
>
>In fact, you seemed not to flinch at the idea of Kasparov providing
>the list of all possible board configurations.  He understands chess
>therefore he would have/could create such a list?  Hmmm.

It was a gedanken experiment, after all.

>No one is forcing you to use brute-force methods or trying to stop you
>from understanding how humans understand.  But, when you do create an
>artificial intelligence, don't be surprised when someone peeks inside
>and says, "I don't think like that, your system doesn't understand, I
>don't care what it can do."

What kind of person would do a horrible thing like that, I wonder?
Probably the same sort of person who looks at aeroplanes in the sky
and says: "Oh dear, that aeroplane is not flying like a bird, therefore
it must be disobeying the laws of aerodynamics."

My argument is simply that there are "laws of thought" which a program
must obey in order to "understand", in the same way that there is a law
of gravity which both birds and planes must obey. This does not mean
that an intelligent computer must think in the same way as a human mind,
any more than it means that planes must fly in the same way as birds. 

>Dean Hougen

Fergal Toomey.

ftoomey@maths.tcd.ie (Fergal Toomey) (04/10/90)

In article <1990Apr8.160030.1988@cs.umn.edu> thornley@cs.umn.edu
(David H. Thornley) writes:

>I dislike two consequences of this.  First, I think it is intellectually
>sloppy, almost to the point of dishonesty.  If I were arguing about the
>country's transportation network, and I questioned the need for intra-city
>roads by saying that people could walk at 20 mph, people would lose all
>respect for me.  If I argue about artificial intelligence and say I can
>implement a human-intelligence simulator mentally, or that I can get Gary
>Kasparov to give me written instructions on how to play chess at his level,
>I should get quite the same treatment.  Second, it trivializes the problems
>involved.  Programming a chess computer or a human-intelligence simulator
>is not a small feat, cannot be duplicated by memorizing lists of rules or
>board positions, and should not be treated as if they can.

In presenting his theory of relativity, Einstein, the father of the
gedanken experiment, postulated trains moving at nearly the speed of
light, interstellar spaceships doing likewise, and human observers
standing on meteorites travelling at speeds close to the speed of
light. A gedanken experiment should not be rejected
because of wildly improbable assumptions, but only if the reasoning
doesn't follow. If people could walk at 20 mph then we should certainly
abolish roads. That is an "in principle" argument. We do not however
abolish roads because the problem of how to get from A to B is a 
"practical" problem. The problem of machine understanding 
is an "in principle" problem. And so improbable gedanken experiments
are acceptable.
 
Fergal Toomey.

thornley@cs.umn.edu (David H. Thornley) (04/11/90)

In article <1990Apr10.102610.5376@maths.tcd.ie> ftoomey@maths.tcd.ie (Fergal Toomey) writes:
>In article <1990Apr8.191524.6565@cs.umn.edu> hougen@cs.umn.edu
>(Dean Hougen) writes:
>
>>In article <1990Apr6.144947.11473@maths.tcd.ie>, ftoomey@maths.tcd.ie
>> (Fergal Toomey)writes:
>>>he is capable of carrying out simple instructions. He is also capable
>>>of speaking English fluently. He is given a long list of instructions
>>>from Gary Kasparov telling him exactly what move to make in every
>>>board situation that can possibly arise during a chess game (the number
>>>of possible board configurations is very, very high, but finite). You
>>                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>Too high for the list to ever be created but this is a thought experiment
>>so we can give him this list for purposes of our investigation anyway.  
>>But why do you feel the need to mention that it is finite?
>
>If it were infinitely long, it would clearly take an infinite length of
>time to produce, in which case my argument would fall through, since it
>would not be possible, using this method, to construct a thing capable
>of playing chess as well as Gary Kasparov.

I seriously question the validity of this.  The number of possible
positions is sufficiently high that it may as well be infinite, since
it is not possible to enumerate them (at, say, a thousand per second)
within the expected lifespan of the universe, nor would a human-readable
version come anywhere near fitting on this planet.
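
A back-of-the-envelope figure supports this (the numbers below are
rough, commonly quoted orders of magnitude, not exact counts):

# Roughly 10**43 legal chess positions is a standard order-of-magnitude
# estimate; the universe is roughly 4 * 10**17 seconds old.

positions         = 10 ** 43
rate              = 1000              # positions enumerated per second
age_of_universe_s = 4 * 10 ** 17

lifetimes_needed = positions / rate / age_of_universe_s
print(lifetimes_needed)               # about 2.5e+22 universe-lifetimes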

Therefore, the list is impossible.  The novice must have something else
from Kasparov, perhaps a computer program or a chess book of a quality
never yet approached.

The key difference here is that it is obvious that a list understands
nothing, and a person using a list does not have to understand anything.
If you equip the novice with a computer or a book so that novice+gimmick
can play as well as Kasparov, it gets very iffy whether the novice+gimmick
should be referred to as understanding chess, since the system must
analyze the board situation in some manner or other.

The reason I am so insistent on this part of the argument is that the list
technique is so universally applicable.  If a list is drawn up of all 
possible hour-long conversations in Chinese, we have a Chinese room with
no understanding.  Since so much of human interaction is quantifiable,
and hence listable, the Turing test is obviously ridiculous if its
opponents are allowed to propose a counterexample beginning with "First we
construct this list with 10 ** 30000 items."

David Thornley

ftoomey@maths.tcd.ie (Fergal Toomey) (04/11/90)

In article <1990Apr9.063331.15478@cs.umn.edu> hougen@cs.umn.edu (Dean Hougen)
writes:

>So why has explaining why one won or lost been 
>proposed (here and elsewhere) as a criterion for understanding chess?
>Perhaps it is the psychological impact of imagining a brain-damaged
>novice with a list (or, god forbid, a machine) understanding chess.  "I
>know understanding chess is hard, and if he (it) is playing chess well,
>then playing chess well must not be what is meant by understanding chess.
>Let's see, what else could it mean to understand chess?  ... "

We are, in fact, arguing in a very sloppy manner, but this is inevitable.
We have no consensus on what understanding is, nor on what (in the example
of the chess game) is actually being understood. In presenting my argument
I hoped that people would see that the novice+list cannot understand chess
within the normal use of the word "understand". I know that if I were
placed in the position of the novice, I would not feel that I understood
chess. I would not get up after a game and say "Ah! now I understand
chess".

Dean and some others seem to have got the impression that I think
machines are not capable of understanding. I'd like to clarify my
position on this. I have chosen not to take sides in the Strong AI
debate (that is, the "Minds are just machines vs. Minds are more than
just machines" debate) because there is simply not enough evidence to
justify either position. The balance of evidence at present seems to
indicate that minds *are* just machines made of meat, but alas, the
evidence is just not conclusive.

In the meantime, I choose to work on the assumption that minds are
machines, and that therefore we may be able to build machines which
are minds. I assume, therefore, that machines can understand. But
I do not feel that behaviour implies understanding.

One reason I feel this way is that it is more constructive to
study understanding itself than just to say, "Well, if it plays
chess, it must understand chess, therefore we don't have to bother
working out what understanding is".

The Wright brothers could have said, "Well it's very hard to work
out what the laws of aerodynamics are, but luckily we don't have to
bother with these laws, since we know that if something flies, then
it must obey them. Therefore, all we have to do is stick bits of
wood and metal together in random ways, throw them up in the air,
and if they fly, then they fly."

AI researchers would be better off finding out what understanding is,
than just saying, "Well, if it passes a Turing test, then it must
understand". 

>The hint-machine will only "be able to" give
>hints/explain games, it won't *understand chess*.  I don't expect Ken
>to follow to this position, but I would ask him to consider how he got
>where he is.  The same for Fergal.

There is no contradiction in my position. In introducing the hint-machine,
I *defined* it as a machine which understands chess.

>To look at a more real-world example, suppose one of these days I get
>up from my desk in the new EE/CSci building, walk out the front doors
>and across Washington Avenue to the Health Sciences complex, and spend
>some time carefully observing medical diagnosis taking place.  I then ask
>some of the doctors how they go about making diagnoses.  I might just find
>what many people who have actually done this sort of thing (in order to
>construct expert systems) have found: some experts actually *can't* tell
>you how they do what they do.  Some, in fact, will make up partially false
>explanations in order to cover the fact that they cannot give adequate
>explanations! 

True, it was me who brought in the idea of explanations as a way of
testing understanding. But as I pointed out in another article, I did
this only for convenience. When faced with a human being, the only way
you can look at his algorithms is to ask him to explain what he's
doing. The point above merely shows that people are not always able
to explain their algorithms. When you have a computer, on the other hand,
you can look at its algorithm, so the problem doesn't arise.

By the way, if you accept that being able to explain something is necessary
evidence of understanding (which I, repeat, *do not*), then your point
above about doctors not being able to explain their behaviour leads
to the conclusion that behaviour does not imply understanding,
which is my position, not yours! :-) Therefore, I think that you should
reject, along with me, the idea that being able to explain something is
a necessary condition for understanding.

>For some reason, perhaps the psychological
>reason mentioned above, some people have twisted the meaning of the word
>'understand' so far that we can now say with a straight face that a
>human expert acting as an expert within his own field of expertise
>doesn't understand his own field of expertise!  Seems we have gone
>astray somewhere.  Perhaps it is time to give serious consideration to
>whether, just perhaps, the novice+list *does* understand chess.

Put yourself in the position of the novice. Would you understand chess?
The answer must be no. Put yourself in the position of the novice and
suppose that you have committed the list to memory. Would you understand
chess, in the usual sense of the word? The answer again is no. If you
claim that the answer is yes, then I contend that it is you, not I,
who has twisted the meaning of the word "understanding" beyond its limits.
Note again that I am not saying that machines cannot understand, by
definition of the word understanding. I am saying that a particular
machine, the novice+list machine, cannot understand, and that its
lack of understanding cannot be deduced from its behaviour.

>I agree that intelligence is no "trick" and that it cannot be easily 
>implemented, but the fact that no one has succeeded yet with the full
>thing is hardly a proof of this.  Getting a machine to play chess the
>way people do is probably quite hard, but brute-force may well get us
>a World Champion in the form of a machine with the simple addition of
>speed.  The question is, "Would this champion understand chess?"  I
>think the answer might just be yes.
>
>Dean Hougen

So artificial intelligence has been in a rut for 50 years. My own
feeling is that this is because AI researchers have behaved much like
my alternative Wright brothers above. They have simply messed around
with a couple of good ideas like neural nets in the hope that they
would hit suddenly upon the secret of artificial intelligence.

But it would take a long time to build a plane if you knew nothing
about aerodynamics; and it will take a long time to build a machine
which understands as well as we do, by anybody's definition of
understanding, unless we gain an insight into what understanding *is*.

Fergal Toomey.

hougen@cs.umn.edu (Dean Hougen) (04/11/90)

In article <1990Apr10.102610.5376@maths.tcd.ie> ftoomey@maths.tcd.ie
 (Fergal Toomey) writes:
>In article <1990Apr8.191524.6565@cs.umn.edu> hougen@cs.umn.edu
>(Dean Hougen) writes:
>
>>In article <1990Apr6.144947.11473@maths.tcd.ie>, ftoomey@maths.tcd.ie
>> (Fergal Toomey)writes:
>>>Perhaps I should have chosen a better example.  [stuff deleted]
>>
>>The fact that you present a revision of the analogy (below) indicates to
>>me that you do not believe your novice+Kasparov analogy (above) to 
>>be adequate to answer this question in the positive.  Neither do I. 
>>[stuff deleted] 
>
>I believe that my original analogy was flawed in that it was too open to
>misinterpretation. Paul found an apparent way out of my argument by saying
>that we must consider the Kasparov + novice system instead of the novice
>alone.

I don't think Paul "found an apparent way out" of an analogy "too open to
misinterpretation."  I think he correctly found the flaw in your analogy.
The chess playing ability in your example did come from an understanding
of chess, Kasparov's!

> My second analogy was supposed to demonstrate that we can't do this:
>we *must* consider the novice alone. If we require that the Kasparov and
>the novice must be considered together (and this is clear, I hope, in the
>second example) then we are led to the conclusion that understanding is
>impossible for a non-human.  [stuff deleted]

I think you mean that we must consider the novice+list system (not that
we must consider the novice alone).  I agree that if we drag Kasparov
back into consideration then you will have shown that understanding is
impossible for brute-force systems (though not for all non-humans).

>>>Let me put it like this: suppose the novice
>>>has brain damage, so that he is incapable of understanding chess, but
>>                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^      
>>You're cheating.  [stuff deleted]
>
>No.  [stuff deleted]

You're right, you weren't cheating.  I temporarily lost my mind.

>Therefore, I can rephrase my argument:
>It is possible to construct a thing which follows instructions yet
>understands nothing, therefore,
>It is possible to construct a thing which plays chess as well as
>Gary Kasparov and yet does not understand chess.

You are making the same "prefered vantage point" mistake that Searle
made in the original Chinese Room argument.  You say that because the
novice alone doesn't understand chess we know that the novice+list
system doesn't understand chess.  You can fiat-in the novice's
inability to understand chess, but you can't do the same with the
novice+list system's ability or lack thereof to understand chess; this
latter point is what the discussion is all about.

>>>[stuff deleted]  You conclude that the novice+list
>>>system has a better understanding of the game than the normal player. You
>>                     ^^^^^^^^^^^^^^^^^^^^^^^^^
>>I conclude that the novice+list system has a better *understanding of how to
>>play the game*.  I don't assume that the novice+list system has a better
>>understanding of all aspects of the game such as historical origins, etc.
>
>Well, I was considering only the strategy aspect. 

No, you weren't.  That's why I brought this point up.  I realize that you
weren't thinking about historical origins of chess, but you were confusing
two separate aspects.  They are: 1. understanding chess strategy application
(that is, being able to win at chess) and 2. understanding chess strategy
*formulation* (that is, being able to, among other things, explain why
you won or lost).  The first of these is what I think was originally meant
by 'understanding chess' in this discussion, the second is what you tested
for in order to conclude that, and this *is* a quote, "the novice+list
system has *no understanding* [emphasis yours] of the game."

>You say that the novice+list
>has a better understanding of this aspect than the normal player has. I
>think you're confusing knowledge with understanding. Certainly, all the
>information you need to play brilliant chess is contained in the list, but
>it is stored in a form that is not accessible to the novice's mind in
>any way that would allow him to understand it. 

But again, is the *novice+list system* understanding?  We know a priori
that the novice is not going to understand chess, but does that mean
that any system of which the novice is a part cannot understand chess?
Take a Kasparov+robot arm system.  The robot arm can execute maneuvers on
a certain chess set, such that when it gets standard chess notation as
input it can reach out and move the correct pieces to the correct places.
Now, the arm is too dumb to understand chess.  Does the Kasparov+robot
arm system understand chess?  Yes, despite the fact that the arm alone
does not understand chess.  In the Kasparov+robot arm system it is quite
easy to see where the understanding is taking place: in Kasparov.  In
the novice+list system the understanding is more distributed, but you
can't say it is absent just because it can't *all* reside in the novice.

As for confusing knowledge with understanding, I think you are just
playing word games.  "Ooops, I see a computer doing it, it must only
*know* something, not actually understand it."   What is the difference
between a. knowing how to discuss questions with you and b. understanding
question discussion?

>Similarly, you could learn
>a lot about aerodynamics from studying aeroplanes, but that doesn't mean
>that aeroplanes understand aerodynamics, even though they can fly a lot
>better than I can.

Ever heard of a pilot?  If the pilot didn't understand how to fly the
plane it would crash.  If we replace the pilot with a computer will you
say that the computer (assuming it flies the plane without crashing)
doesn't understand how to fly the plane?

>> [stuff deleted]

>My requirement that the novice explain
>how he won is, btw, irrelevant to the argument (I've noticed a lot of
>people arguing about this point). 

Having the novice explain how he won was not irrelevant to the argument,
it *was* the argument.  Need I go back to your previous article and splice
in your comments that the novice+list can't explain *therefore* we know
that it doesn't understand?

>I was simply pointing out that
>although the behaviour of the novice gave rise to the belief that he
>understood chess, examination of his algorithm reveals that to accord
>him an "understanding" of the game would be to stretch the meaning of
>the word beyond permissible limits. 

Now you are saying that, a priori, no brute-force system understands.
Sorry, you'll have to give me a reason to buy that, I'm not going to
assume it.  (If I prove that you are a brute-force system will you
concede that you don't understand?)  BTW, what are "permissible limits"?
Anything that lets your prejudices stand?

>If you imagine yourself carrying
>out the actions of the novice you instinctively know that you would not
>have an understanding of chess, you would simply be following instructions.

The question is not whether *I* understand, it is whether the system of
me+list understands.  Again, this is the mistake Searle makes.

>If you want a
>computer to understand chess, you must tell it how to understand chess.
>Telling it how to play good chess does not guarantee an understanding. A
>computer programmed to play chess will not understand chess, in exactly
>the same way that our brains, which are programmed to understand things
>like chess, cannot, apparently, understand "understanding" :-)

Perhaps the problem is still that you haven't said what you think it
means to "understand chess."  We are trying to find out what it means
to understand, and yet anytime I tell a computer how to do anything
chess-related you say, "Yes, but that is not understanding chess."  I
have no way to defend Kasparov if you say he doesn't understand chess,
because you refuse to say what understanding chess means to you.

>[stuff deleted]
>Fergal Toomey.

Dean Hougen

kp@uts.amdahl.com (Ken Presting) (04/11/90)

In article <1990Apr9.063331.15478@cs.umn.edu> hougen@cs.umn.edu (Dean Hougen) writes:
>
>Returning to the chess example, if you were to approach the typical chess
>player and comment, "Ya know, I don't think Kasparov understands chess,"
>you would likely get, "But he's the world chess champion," or "Hey, he
>beat Karpov, didn't he?" or something similar as a reply, not "He can 
>explain why he lost the second game of that match, can't he?" or anything
>along those lines.  . . .

These would be reasonable replies, especially because they indicate that
the comment seems absurd.  But there are other reasonable replies, eg,
"Are you saying that Kasparov is an _idiot savant_?" or "Check out
Capablanca's old book - that guy *really* understood chess".

The easy part of this point is that there is a clear difference between
two different kinds of behavior - playing chess and talking about chess.
Playing chess is intellectually demanding, so for this reason (among
others) it is unlikely that anyone who plays chess well would lack the
ability to talk about it.

The hard part is: what's the connection?

> . . .  in Ken's article (message-id:
><7cn102fg9ahA01@amdahl.uts.amdahl.com>) where playing chess is reduced 
>to "being able to" do something, and understanding chess is redefined to
>be a level above(?) this.

A well-placed question mark.  If the activity of playing chess can be
described as linguistic (imagine postcard chess), then a conversation
about chess would be conducted in a metalanguage.  But I would deny
emphatically that the ability to talk about chess is usefully described
as a "meta-ability", or "on a different level" from playing chess. A
similar point would apply to a conversation about linguistics, or about
meta-mathematics.  Conversations about language are still conversations,
and proofs about proofs or models are still proofs.


>. . .   The hint-machine will only "be able to" give
>hints/explain games, it won't *understand chess*.  I don't expect Ken
>to follow to this position, but I would ask him to consider how he got
>where he is.  The same for Fergal.

The distinction between know-how and knowledge goes (at least) as far
back as Plato (techne' vs. episteme').

>
>. . .  ask some of the doctors how they go about making diagnoses.
>. . .   Some, in fact, will make up partially false
>explanations in order to cover the fact that they cannot give adequate
>explanations! 

This is a very significant observation.  I would say that the essential
attribute of a conscious agent is that after proposing an incoherent or
inadequate explanation of its behavior, it is capable of entering into
an *argument* regarding that explanation, which involves two kinds of
premises:

1)  Logical laws applied normatively, as in, "That explanation is wrong,
    because the explanandum is not a consequence of the explanans"

1a) (more important, but not directly applicable to diagnosis:)
    Principles of rational decision theory applied normatively, as in,
    "That action is irrational given your stated beliefs and desires,
    because its expected outcome is strongly dis-preferred"

2)  Factual descriptions of the agent's own behavior, as in "You could
    not have made that diagnosis on the basis of the X-ray, because
    you did not look at the X-ray"

(It should be noted that such an argument can only be conducted in a
 language which is semantically closed)

Chess and medical diagnosis are difficult (but interesting) cases of the
distinction between techne and episteme, because of the importance of
rules in the behaviors themselves.  Simply to participate in a game of
chess requires that one be able to participate in arguments which meet
the above criteria, with "rules of chess" replacing "logical laws".

In medical diagnosis (but less so in pure science) arguments about the
specific information-collecting actions of the doctor might be
considered _ad hominem_, or as professional attacks.

>
>But if to understand medical diagnosis means to be able to give
>explanations of how such diagnoses are made, then these doctors don't
>understand medical diagnosis!

There are two different kinds of explanation to ask for.  One is an
account of the epistemic basis of a particular diagnosis, the other is a
description of the mental (ie psychological) process which produced the
behavior of stating the diagnosis.  Certainly a doctor who cannot offer
an explanation of the first type would be of limited use to students,
whatever his value to patients.

Would an explanation of the first type *constitute* an explanation of
the second?  That is an empirical hypothesis, and if I understand the
Computationalists, it is very close to their claim.  IMO, there is no
_a priori_ reason to find that claim plausible.

Let me emphasize that while I disagree strongly with Computationalism,
I also see no reason to suppose that a computer cannot be programmed to
participate in the required sort of behavior.

>
> . . .  we can now say with a straight face that a
>human expert acting as an expert within his own field of expertise
>doesn't understand his own field of expertise!  ...

(Whom, pray tell, are you accusing of straight-facedness?  :-)

Once we have distinguished between explanations of why a diagnosis is
plausible and explanations of how diagnostic behavior is generated,
we can laugh all the way back to our keyboards.  *After* thanking Prof.
Searle for his instructive (though misinformed) counter-example, I would
say.

The CR is not a real problem for AI.  The real problems are: (a) over-
reacting to the CR (ala TTTTT...), and (b) ignoring the CR (ala Minsky).

Minsky is right - self knowledge is dangerous.  So's life in the Big City.
(cf _The Society of Mind_, section 6.13)

> ....  Perhaps it is time to give serious consideration to
>whether, just perhaps, the novice+list *does* understand chess.

Hogwash!  The novice, ex hypothesi, cannot even distinguish a legal move
from an illegal move.  Could he learn to do so?  His list furnishes
enough information for him to infer the rules of chess, but if he did so,
he would cease to be a novice.  (It is debatable whether there is any
semantics of chess, even if it is conceived as a language)

The rulebooks of the Chinese Room likewise furnish enough information
to allow Searle to infer the syntax of Chinese, but he could no more
learn the semantics of Chinese from the books than he could learn its
phonetics.  What he needs is a set of books that describe more than
symbol manipulation.

(I've tried to trim a little fuzz off the concept of "understanding", but
I still think we'll be better off if we drop the general question of how
to define it, and focus on *what information* is contained in programs)


Ken Presting  ("Leave the hairsplitting to us")

ftoomey@maths.tcd.ie (Fergal Toomey) (04/12/90)

In article <1990Apr10.202829.2080@cs.umn.edu> thornley@cs.umn.edu
(David H. Thornley) writes:

>I seriously question the validity of this.  The number of possible
>positions is sufficiently high that it may as well be infinite, since
>it is not possible to enumerate them (at, say, a thousand per second)
>within the expected lifespan of the universe, nor would a human-readable
>version come anywhere near fitting on this planet.
>
>Therefore, the list is impossible.  The novice must have something else
>from Kasparov, perhaps a computer program or a chess book of a quality
>never yet approached.

Well, chess came up as a convenient example. Replace with Tic-Tac-Toe
and everything becomes reasonable. I expect that we'll keep the chess
example, however, because *in principle* it is the same as Tic-Tac-Toe
(ie. the ideas involved in understanding chess are the same as those
involved in understanding Tic-Tac-Toe... probably... oh dear.. :-) ).
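
And for Tic-Tac-Toe the list really can be built.  The sketch below
(my own, purely illustrative) compiles the complete position-to-move
table by exhaustive search; it is exactly the kind of object the chess
version of the argument imagines Kasparov handing over, only small
enough to actually exist.

# Build the complete Tic-Tac-Toe "list": for every reachable,
# unfinished position, record a best move for the side whose turn it
# is.  A board is a 9-character string of 'X', 'O' and '.', row by row.

from functools import lru_cache

LINES = [(0,1,2), (3,4,5), (6,7,8),
         (0,3,6), (1,4,7), (2,5,8),
         (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def to_move(board):
    return 'X' if board.count('X') == board.count('O') else 'O'

@lru_cache(maxsize=None)
def best(board):
    # Return (score for the side to move, best move index) by full search.
    if winner(board) is not None:
        return (-1, None)             # the previous player just won
    if '.' not in board:
        return (0, None)              # drawn, full board
    player = to_move(board)
    best_score, best_move = -2, None
    for i, cell in enumerate(board):
        if cell == '.':
            child = board[:i] + player + board[i+1:]
            score = -best(child)[0]   # opponent's best score, negated
            if score > best_score:
                best_score, best_move = score, i
    return (best_score, best_move)

def build_list():
    # Enumerate every position reachable by legal play and record a move.
    table = {}
    def walk(board):
        if board in table or winner(board) is not None or '.' not in board:
            return
        table[board] = best(board)[1]
        player = to_move(board)
        for i, cell in enumerate(board):
            if cell == '.':
                walk(board[:i] + player + board[i+1:])
    walk('.' * 9)
    return table

the_list = build_list()
print(len(the_list))        # a few thousand entries -- easily listable
print(the_list['.' * 9])    # the move the list dictates for the empty board

Whether a novice armed with this table "understands" Tic-Tac-Toe is,
of course, exactly the question at issue.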

> David Thornley

Fergal Toomey.

gerry@zds-ux.UUCP (Gerry Gleason) (04/13/90)

In article <1990Apr11.173241.6428@maths.tcd.ie> ftoomey@maths.tcd.ie (Fergal Toomey) writes:
>Well, chess came up as a convenient example. Replace with Tic-Tac-Toe
>and everything becomes reasonable. I expect that we'll keep the chess
>example, however, because *in principle* it is the same as Tic-Tac-Toe
>(ie. the ideas involved in understanding chess are the same as those
>involved in understanding Tic-Tac-Toe... probably... oh dear.. :-) ).

Do you reject the idea that large differences in complexity can manifest
as qualitative differences?  Yes, in principle both of these games are
formal systems that are finite, but in the case of chess, you cannot
possibly generate a list of all possible games in a "realisable" amount
of time (e.g. a person's or a society's lifetime).  You need to argue that it
is valid to ignore this difference in complexity.

Gerry Gleason

thornley@cs.umn.edu (David H. Thornley) (04/14/90)

In article <1990Apr11.173241.6428@maths.tcd.ie> ftoomey@maths.tcd.ie (Fergal Toomey) writes:
>In article <1990Apr10.202829.2080@cs.umn.edu> thornley@cs.umn.edu
>(David H. Thornley) writes:
>
>>I seriously question the validity of this.  The number of possible
>>positions is sufficiently high that it may as well be infinite, since
>>it is not possible to enumerate them (at, say, a thousand per second)
>>within the expected lifespan of the universe, nor would a human-readable
>>version come anywhere near fitting on this planet.
>>
>>Therefore, the list is impossible.  The novice must have something else
>>from Kasparov, perhaps a computer program or a chess book of a quality
>>never yet approached.
>
>Well, chess came up as a convenient example. Replace with Tic-Tac-Toe
>and everything becomes reasonable. I expect that we'll keep the chess
>example, however, because *in principle* it is the same as Tic-Tac-Toe
>(ie. the ideas involved in understanding chess are the same as those
>involved in understanding Tic-Tac-Toe... probably... oh dear.. :-) ).
>
>> David Thornley
>
>Fergal Toomey.

Here is my position:

I believe that understanding is inferred from behavior.  For my first exhibit,
consider an elementary algebra student.  I wish to determine whether she
understands polynomial multiplication.  How can I determine this?

1.  I can ask her to explain how to do it.  Any answer along the lines of
"multiply all the parts of one polynomial by all the parts of the other one,
then add everything together" is acceptable.  (That rule is mechanical
enough to write down directly; see the sketch after this list.)

2.  I can give her several problems, some of them involving more than one
polynomial.

3.  I can ask her to prove that what she is doing is valid, applying the laws
of distributivity and commutativity and associativity.

4.  I can assume that I am human, she is human, and I understand polynomial
multiplication, hence she should be able to.

5.  I can try to use brain-imaging techniques and massive psychological
tests to determine the changes in her mind that have resulted from her
learning algebra.
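
As promised under (1), the rule is mechanical enough to write down
directly (a throwaway sketch; representing a polynomial as its list of
coefficients, lowest degree first, is simply my own choice of encoding):

# "Multiply all the parts of one polynomial by all the parts of the
# other one, then add everything together."  1 + 2x + 3x^2 -> [1, 2, 3].

def poly_mul(p, q):
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b    # add this partial product in
    return result

print(poly_mul([1, 1], [-1, 1]))      # (1 + x)(x - 1) = x^2 - 1 -> [-1, 0, 1]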

Techniques 1, 2, and 3 are simply behavior-based.  Technique 5 is impractical
because I haven't the faintest idea of what to look for.  I don't know how it
is that I understand algebra (although I do), how should I know how anyone
else does it?

Technique 4 is, of course, only a plausibility argument, but I think it is
a major force in thinking about computer understanding.  Understanding is a
very slippery concept, as we have seen in this thread, and there seems to
be extreme reluctance to apply it to nonhumans on a serious basis (as 
opposed to such things as verbally encouraging one's car or bowling ball to 
behave in a desired manner).  I believe that we are preprogrammed from birth
to recognize that other humans are like us, as otherwise I don't see
how so many of us would come to believe it at such an early age.

Therefore, we need to decide how much harder we should make it for a
machine to count as understanding something than for a human.  How much of
the human prejudice is the rational observation that we know humans can
understand things, and how much is the irrational conviction that humans
are the only things that can understand?

Also, if we are to discuss whether machines understand, we have to come up
with some criterion, or we are merely wasting bandwidth.  If someone wishes
to argue that machines cannot really think or understand, since these are
properties of souls and machines do not have souls, I can't argue with that.
If someone wishes to argue that the Universe is one large mind and that 
everything understands, my only argument is that this isn't what I mean by
understanding.  It seems to me that the only possible criteria are static
and dynamic analysis.  Static analysis, such as the analysis of algorithms,
seems to me to fail on the grounds that we don't know what we are looking for.
If we knew how people understood things, it would be possible to determine
what constraints we could put on understanding.  Since we don't, we can't.

This leaves dynamic analysis, or behavior.  I think that most people would
accept a machine as truly intelligent and understanding if it acted as such
and the people had enough contact with it to realize that.  I think that
this would be an almost automatic and completely correct conclusion.

We must also consider that behavior is finite.  As a matter of rational
expectation, I am probably not going to live another sixty years.  In this
time, I cannot talk faster than about 150-200 words per minute, I type slower
than that, and my actions take time.  Therefore, I can put strict limits on
the amount of words and deeds I can produce for the rest of my life.  In
this case, it seems that we have to put a time and output limit on any
Turing test of understanding.  This is why I object to thought experiments
in behavior that involve tasks that are not possible to complete while this
Universe is still capable of harboring intelligent life.

I would like, then, to return to static analysis of understanding.  How do
we generally use the concept?  We do not generally consider table look-up
to be understanding, and so we would say that a Tic-Tac-Toe novice looking
moves up from a list of game positions is not showing understanding of the
game.  Since this is not possible with chess, we can assume that a good
chess-playing system may be showing some understanding of the game.

This is why I am insisting on not allowing a list of all possible chess
positions in the argument.  It is just as possible to construct a list of
all possible written conversations as it is to construct a list of all
possible chess positions:  theoretically doable (if we limit the length of
the conversations, perhaps to one trillion words each) and totally impossible
within the limits of the Universe we live in.  Any task that we would
consider setting to an unknown system to determine understanding could
be satisfied with list look-up, since we cannot set infinite tasks.  However,
most tasks of even moderate complexity could not be satisfied with lists
produced under any reasonable conditions.
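
To put a rough number on "not under any reasonable conditions" (both
figures below are arbitrary round choices of mine, and far more modest
than trillion-word conversations):

import math

vocabulary = 50000    # a generous round figure for a working vocabulary
words      = 1000     # conversations of exactly one thousand words

# The number of such conversations is vocabulary ** words; print its
# order of magnitude rather than the number itself.
exponent = words * math.log10(vocabulary)
print(round(exponent)) # about 4699: ~10**4699 list entries, against
                       # roughly 10**80 atoms in the observable universe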

Therefore, I claim that it is reasonable to base understanding on complex
behaviors, and that this is the only practical way we have right now.

David Thornley