[comp.ai] Sci. American AI debate

mike@cs.arizona.edu (Mike Coffin) (01/05/90)

I finally got around to reading the artificial intelligence "debate"
in Scientific American (Jan., 1990).  Several points struck me while
reading the two articles.

First, it looks to me like advocates of strong and weak AI don't
really communicate with each other.  Searle, in particular, insists
that he has *proven* that a mind is more than a program, and he
professes puzzlement about why anyone could fail to see the logic of
his position.  The other side brings up counterarguments one after
another.  Both sides are convinced that their arguments are
"obviously" right and assume that the other side is, willfully or
out of ignorance, missing the whole point.

Second, it seems to me that Searle has set a tough task for
himself---to "prove" logically his contention.  That is tough because
he is trying to prove something about "thought", a term that he never
defines.  Without a definition he will never produce a very convincing
proof.  The other side is in much better shape because they only have
to poke holes in his arguments.

Finally, it seems to me that the Churchlands got the better of this
exchange.  They point out that Searle's "axiom" that syntax cannot
give rise to semantics is (a) not at all obvious and (b) begs the
question.  Searle's only response to this is that it's "true by
definition" (what definition?) or "rather obvious" (to whom?).  Since
the negation of this axiom is essentially a restatement of the strong
AI position, it hardly seems fair to use it as an assumption.
-- 
Mike Coffin				mike@arizona.edu
Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2}!arizona!mike
Tucson, AZ  85721			(602)621-2858

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (01/05/90)

From article <16577@megaron.cs.arizona.edu>, by mike@cs.arizona.edu (Mike Coffin):
>...  They point out that Searle's "axiom" that syntax cannot
>give rise to semantics is (a) not at all obvious and (b) begs the
>question.  Searle's only response to this is that it's "true by
>definition" (what definition?) or "rather obvious" (to whom?). ...

Perhaps he has more of a response than that.  He has an argument
that seems to be intended to bear on this point, which begins:
"As with any logical truth, one can quickly see that it is true,
because one gets inconsistencies if one tries to imagine the
converse."  However, there appear to be no inconsistencies
exhibited in what follows this.  There are also some tricky
shifts in wording involved: sometimes the disputed relation
between syntax and semantics is "constitutive of" and sometimes
just "is".
				Greg, lee@uhccux.uhcc.hawaii.edu

kpfleger@phoenix.Princeton.EDU (Karl Robert Pfleger) (01/05/90)

In article <16577@megaron.cs.arizona.edu> mike@cs.arizona.edu (Mike Coffin) writes:
>Second, it seems to me that Searle has set a tough task for
>himself---to "prove" logically his contention.  That is tough because
>he is trying to prove something about "thought", a term that he never
>defines.  Without a definition he will never produce a very convincing
>proof.  The other side is in much better shape because they only have

My reaction to reading the Searle article was exactly the same as my
reaction to his earlier paper, "Minds, Brains, and Programs," _Behavioral
and Brain Sciences_, 1980. He's just arguing semantics. His whole point is
about definitions, and as pointed out above, he never provides any. The
word I thought everything hinged on was 'understand.' He doesn't even
argue whether or not a system could be created which answers every
question correctly like the Chinese room; he just concedes it and then
yells that it wouldn't 'understand.'

My opinion: If it answers every question correctly, I don't care what
you call it. If you don't want to call it 'understanding,' fine. I think
it's pretty significant.

The second article I thought was much more interesting and pertinent. It
dealt with the altogether more important (in my opinion) question--can
we get the system to produce all the right answers, and how. It was also
the first actual explanation I have seen of parallel (or simplistic
neural-net) approaches.  After hearing so much from that area without
bothering to look into it, I found a simple explanation very welcome.

-Karl		kpfleger@phoenix.princeton.edu
		kpfleger@pucc (bitnet)

mike@cs.arizona.edu (Mike Coffin) (01/05/90)

From article <6031@uhccux.uhcc.hawaii.edu>, (Greg Lee):
> From article <16577@megaron.cs.arizona.edu>, (Mike Coffin):
>> ... Searle's only response to this is that it's "true by
>>definition" (what definition?) or "rather obvious" (to whom?). ...
> 
> Perhaps he has more of a response than that.  He has an argument
> that seems to be intended to bear on this point, which begins:
> "As with any logical truth, one can quickly see that it is true,
> ...

You are right, of course.  I intended to include a paragraph on
this argument but somehow forgot.  As you point out, he "proves"---if
one uses the term very loosely---that syntax and semantics are
different.  He doesn't prove that semantics cannot be produced by,
or arise from, syntax.

Thanks for the correction.



-- 
Mike Coffin				mike@arizona.edu
Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2}!arizona!mike
Tucson, AZ  85721			(602)621-2858

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (01/05/90)

Searle's argument does not make much sense to me, as stated.  For one
thing, I don't agree with any of his axioms.  But I think there is a
related issue, very familiar to programmers, that might help to identify
the intuitive content of the argument.  That is the issue of modularity.
Recall the difficulty people have had in running some programs written
for the IBM PC on clones, due to the programs' direct manipulation of the
display.  For such cases, there are direct dependencies between the
higher level parts of the program + operating system and low level parts
-- the hardware peripherals.  One could perhaps say, using Searle's
language, that these ill-structured programs contain symbols with
intrinsic semantic content, or that the hardware displays "causal
powers" with respect to program behavior.

Under this reinterpretation, Searle's argument becomes the conjecture
that the human mind/brain is ill-structured.  The mind-program could not
be run with different hardware peripherals.  But although this could
turn out to be true, it is not logically true.

				Greg, lee@uhccux.uhcc.hawaii.edu

cmiller@SRC.Honeywell.COM (Chris Miller) (01/05/90)

>From: kpfleger@phoenix.Princeton.EDU (Karl Robert Pfleger)

>  My reaction to reading the Searle article was exactly the same as my
>  reaction to his earlier paper, "Minds, Brains, and Programs," _Behavioral
>  and Brain Sciences_, 1980. He's just arguing semantics. His whole point is
>  about definitions, and as pointed out above, he never provides any. The

It's nice to see my basic feelings about this issue finally getting some
airplay in this thread.

Perhaps it's merely my background in cognitive psychology, but I find the
whole question 'can computers think?' to be a subtle bit of misdirection.
I believe that this whole debate (both on the net and in general) is
actually an attempt to define the term "thought" (and related terms like
"understand", "know", "mean", etc.)  It may be more accurate to claim that
this is a battle for the right to apply the term "thought" to a new
process(es) which, although it certainly differs from the one classically
called "thought", may or may not do so in a way which "makes a difference."

For millennia, it has been sufficient to allow the word "thought" to denote
nothing much more complicated or well-defined than "that activity which
humans engage in while scratching their heads."  The various fields of AI
are now providing candidate refinements to that definition in the form of
process models.  These models are held up with the claim "See, this is
THINKING."

Searle (and others) are saying "No, that is not THINKING.  THINKING is
something different."  But they have not yet, to my satisfaction, either
spelled out what that difference is, or provided an alternate definition of
"thinking" which makes it clear.  I am not necessarily opposed to the idea
that "thinking" IS different, I just don't see HOW, yet.

Thus, this argument is less about what computers can or cannot do, and more
about definitions and semantics and, especially, the awarding of labels.
If AI programs cannot "think," then perhaps we need some new verb to
describe what it is that they do and how this process differs from both
traditional, algorithmic "computing", and human "thinking".

Curiously, I think a similar situation arose during the early and middle
parts of this century in linguistics over the question of what constituted
"language", "communication," and "meaning."  It had sort of always been
assumed that "language" was something that humans alone did, but the
discovery of symbolic communications between honeybees and especially the
experiments in sign language with chimps threw this contention into
question.  The debate gradually led to a much more specific definition of
"language" which largely took these border cases into account and served to
practically distinguish what humans did from what these animals did. 

I'm fuzzy on this stuff.  Perhaps someone who knows it better can
elaborate.

--Chris

mmt@client1.DRETOR.UUCP (Martin Taylor) (01/06/90)

I think that what both Searle and the Churchlands (and all commentary I have
yet seen) miss is that the brain is a dynamic system whose activity in
response to an input does not stop when the "desired" output has been
produced.  There is an old saying, whose provenance I forget, about an
old man who says "Sometimes I just sits and thinks, and sometimes I just sits."
Well, Searle in the Chinese room just sits.  In dynamic terms, the
Chinese room has a zero-energy point attractor, whereas in any thinking
system the effect of a perturbation (the entry of a Chinese input) may
last indefinitely long, and may even be to shift the system from one
basin of attraction into another, so that subsequent inputs may be
interpreted completely differently.  It seems to me that the common-sense
version of "understanding" has to be based on appropriate reactions to
the intentions of the interrogator and that the things the interrogator
inputs must be affected by the output of the Chinese room.  Hence, no
account of the Chinese room can be complete if it deals with only a short
sequence of interchanges.

To put the point in everyday language, I do not believe there could be a
finite rule set that would accommodate the interchanges with a cooperative
partner, given that the partner's input comes at unpredictable moments in
the "thinking" after-effects of prior inputs.  Of course, any specific
example could be accommodated by specific rules, but imagine the set of
possibilities of the kind in which a statement from the interlocutor such
as "grass is red" might be followed by (1) "No it isn't", (2) "Oh, come on."
(3) "Yep, I'll come by on Sunday" (4) "Maybe I should phone the police"
(5) "Have another."  I can imagine situations in which each of these would
be suitable and effective responses, but I cannot imagine rule sets for
the Chinese room that could provide them in the absence of "understanding".
But the premise of the Chinese room is that such responses would be
produced in the absence of understanding.

Searle claims it to be prima facie obvious that syntax cannot give rise
to semantics.  I find this claim analogous to a claim that a set of distances
between points cannot be enough to allow one to determine the placement
of the points in an N-dimensional space.  But Roger Shepard showed that
one can, even if one knows only the ordering of the distances, if one
has enough distances.  Similarly I believe it would be hard to prove
that syntax cannot converge on semantics if one has enough connected
discourse with which to work.  Searle's statement that the same syntax
could describe both a chess game and the stock market may be true for small
samples, but is unlikely to be true for large ones.  After all, unless one
wishes to be truly mystical, the structure of the input is ALL one has
to work with as a newborn infant, and from this structure (the syntax of
the environment) one determines all the meanings of the world, including
those of language.
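
A toy version of the distances-to-placement point (the numbers are mine, and
this is nothing like the strength of Shepard's ordinal result): given just
the three pairwise distances among three points, their layout in the plane
is already forced, up to rotation, reflection, and translation.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Pairwise distances among points A, B, C (a 3-4-5 right triangle). */
    double dAB = 3.0, dAC = 4.0, dBC = 5.0;

    /* Fix the frame: put A at the origin and B on the x-axis. */
    double ax = 0.0, ay = 0.0;
    double bx = dAB, by = 0.0;

    /* C is then determined, up to reflection, by its distances to A and B. */
    double cx = (dAB * dAB + dAC * dAC - dBC * dBC) / (2.0 * dAB);
    double cy = sqrt(dAC * dAC - cx * cx);

    printf("A = (%.2f, %.2f)\n", ax, ay);
    printf("B = (%.2f, %.2f)\n", bx, by);
    printf("C = (%.2f, %.2f)\n", cx, cy);
    return 0;
}

Purely relational data pins down a geometry; the conjecture above is that
enough purely syntactic relations could likewise pin down meanings.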

Returning to the theme of ongoing activity:  classical symbolic programmes
usually either return the computer to its original state when they have
provided the desired output, or they move to some new state differentiated
from the original in that some memory registers have different values.
A thinking system will do more than this.  It will continue to work and
to change state.  Now there is a classical problem known as the "Halting
Problem": will a particular program halt?  A program may provably halt
(achieve a point attractor), provably not halt (achieve a limit cycle),
or not provably halt (perhaps reach a strange attractor or approach a
non-strange attractor more slowly than can be studied).  The last form
of program seems to be a candidate for a thinking system.
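
As a loose illustration of those three regimes (the map and the parameter
values are my own choice, purely for illustration), one and the same
one-line update rule can settle to a point, fall into a cycle, or keep
wandering indefinitely:

#include <stdio.h>

static void iterate(double r, const char *label)
{
    double x = 0.5;
    int i;
    for (i = 0; i < 1000; i++)          /* let the transients die out   */
        x = r * x * (1.0 - x);
    printf("%-15s (r = %.1f):", label, r);
    for (i = 0; i < 6; i++) {           /* print a few long-run values  */
        printf(" %.4f", x);
        x = r * x * (1.0 - x);
    }
    printf("\n");
}

int main(void)
{
    iterate(2.8, "point attractor");    /* settles to one value          */
    iterate(3.2, "limit cycle");        /* alternates between two values */
    iterate(3.9, "chaotic");            /* never settles into a pattern  */
    return 0;
}

Only the first behaves like the zero-energy point attractor of the Chinese
room; the candidates for thinking systems are the ones that keep going.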

There is lots more behind this, but it would be labouring the point to go
into deeper detail in this forum.  Suffice it to say that I think Searle's
Chinese room has nothing to tell us about the possibility of a purely
symbolic programme forming an understanding system, both because it is
not dynamic and because the rule-set cannot in principle be adequate
in the absence of dynamic continuations of performance after the prescribed
output has been produced.  The Churchlands' arguments do not refute Searle,
though they do provide, in parallel systems, a mechanism that will (I think)
more readily produce real thinking systems than will a serial architecture
(recognizing that the dynamics requires responses in finite time, if
not real time).

Finally, this approach is independent of any questions about the symbol
grounding problem, on which I remain agnostic.
-- 
Martin Taylor (mmt@zorac.dciem.dnd.ca ...!uunet!dciem!mmt) (416) 635-2048
If the universe transcends formal methods, it might be interesting.
     (Steven Ryan).

gilham@csl.sri.com (Fred Gilham) (01/06/90)

cmiller@SRC.Honeywell.COM (Chris Miller) writes:
---------
Thus, this argument is less about what computers can or cannot do, and more
about definitions and semantics and, especially, the awarding of labels.
If AI programs cannot "think," then perhaps we need some new verb to
describe what it is that they do and how this process differs from both
traditional, algorithmic "computing", and human "thinking".
---------

You mention three processes:  1) what AI programs do, 2) algorithmic
computing, and 3) human thinking.  However, there are at most two
distinct processes here.  That is, AI programs + hardware + some
intelligent being to interpret the results = algorithmic computing.
This is a basic concept.  There are no programs that run on computers
that are not algorithmic.

The question is whether 2 and 3 are equivalent.  The strong AI
position is that they are.  But the whole point of the Chinese Room
argument is that Searle can be doing something that falls in category
2 yet not be doing 3, AND can be aware of the fact that he is doing 2
and not doing 3.

-Fred Gilham  gilham@csl.sri.com

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (01/07/90)

From article <2818@client1.DRETOR.UUCP>, by mmt@client1.DRETOR.UUCP (Martin Taylor):

>Searle claims it to be prima facie obvious that syntax cannot give rise
>to semantics.  I find this claim analogous to a claim that a set of distances
>between points cannot be enough to allow one to determine the placement
>of the points in an N-dimensional space.

Yes, it certainly is not obvious.  Take two formal syntactic systems,
each of which characterizes a set of statements, and associate to each
statement of the first some statement of the second system as its
interpretation.  We now have a formal semantic system constituted from
syntactic systems.
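
A toy instance of the construction (the two systems here are my own choice):
let the first system be the syntax of binary numerals, the second the syntax
of unary tally strings, and let the association be a purely formal rule.

#include <stdio.h>

/* Interpret a statement of system 1 (a binary numeral) as a statement of
   system 2 (a tally string): an association defined by form alone. */
static void interpret(const char *binary, char *tally, size_t max)
{
    unsigned long n = 0;
    size_t i;
    for (i = 0; binary[i] == '0' || binary[i] == '1'; i++)
        n = 2 * n + (unsigned long)(binary[i] - '0');
    for (i = 0; i < n && i + 1 < max; i++)
        tally[i] = '|';
    tally[i] = '\0';
}

int main(void)
{
    const char *examples[] = { "11", "101", "1000" };
    char tally[64];
    size_t k;
    for (k = 0; k < sizeof examples / sizeof examples[0]; k++) {
        interpret(examples[k], tally, sizeof tally);
        printf("\"%s\" is interpreted as \"%s\"\n", examples[k], tally);
    }
    return 0;
}

Nothing here appeals to anything beyond form, yet the second system plays
exactly the role an interpretation plays in formal semantics.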

The argument gains some of its plausibility from an equivocation on the
term `formal', which is sometimes used in its proper sense as meaning
"characterized by reference only to form" and sometimes as meaning
"uninterpreted or not semantic".  (Logicians who study formal semantics
would certainly be surprised to learn that the name of their field is an
oxymoron, as it would be if `formal' meant "not semantic".)

				Greg, lee@uhccux.uhcc.hawaii.edu

gilham@csl.sri.com (Fred Gilham) (01/07/90)

I wrote:
-------
You mention three processes:  1) what AI programs do, 2) algorithmic
computing, and 3) human thinking.  However, there are at most two
distinct processes here.  That is, AI programs + hardware + some
intelligent being to interpret the results = algorithmic computing.
This is a basic concept.  There are no programs that run on computers
that are not algorithmic.

The question is whether 2 and 3 are equivalent.  The strong AI
position is that they are.  But the whole point of the Chinese Room
argument is that Searle can be doing something that falls in category
2 yet not be doing 3, AND can be aware of the fact that he is doing 2
and not doing 3.
-----

The second paragraph should have said:

The question is whether 2 and 3 are equivalent.  The strong AI
position is that they are.  It claims that if a computer system could
pass the Turing test, it would be doing 3 by doing 2.  Searle's
argument says that if he himself could pass the Turing test with a
system composed of rules in books, he would know that he is still not
doing 3.  If he is not, then what is?  Because his procedure with the
books and rules is equivalent in power to any algorithmic procedure
that could be executed on a computer, the same question applies to a
computer that could pass the Turing test.

-Fred Gilham    gilham@csl.sri.com

kpfleger@phoenix.Princeton.EDU (Karl Robert Pfleger) (01/07/90)

I can't believe (actually I can; we're people) the number of times the
word 'understand(s)' has appeared in articles in this and related
threads in the context of "this/that/theotherthing does/doesnot
UNDERSTAND" without any two people having the same idea of what it means
to understand. There are a handful of people arguing a handful of
different points with a handful of different ideas of what it is to
understand. This is why definitions are usually provided at the
beginnings of discussions like this (presumably in the article that
starts it all)--so just this sort of thing doesn't happen.

It was pointed out that Searle is just going by his intuitive idea of
what we mean when we say we 'understand'. But WE DON'T ALL MEAN EXACTLY
THE SAME THING. I can't believe anyone out there actually thinks that
the word 'understand' isn't open for a fair amount of interpretation.
But we're all giving it our own interpretation, especially the two
groups in the two sides of the argument.

We need a definition for 'understand'. I realize that you can't define
everything exactly. But if we can't get an exact definition, we at least
need features or characteristics. In other words, we need things which
provide a test for understanding.  (One was suggested a while ago, but it
doesn't satisfy the purpose I'm suggesting it for.)  We need to agree on
what type of behavior we can label as behavior which identifies
understanding because then we can discuss whether or not such and such a
system will produce this behavior. OTHERWISE, WE ARE ARGUING AT AN
IMPASSE.

Recently, the argument was rephrased slightly:

In article <GILHAM.90Jan6100241@cassius.csl.sri.com> gilham@csl.sri.com (Fred Gilham) writes:
>You mention three processes:  1) what AI programs do, 2) algorithmic
>computing, and 3) human thinking.  However, there are at most two
>distinct processes here.  That is, AI programs + hardware + some
>intelligent being to interpret the results = algorithmic computing.
>This is a basic concept.  There are no programs that run on computers
>that are not algorithmic.
>
>The question is whether 2 and 3 are equivalent.  The strong AI
>position is that they are.  It claims that if a computer system could
>pass the Turing test, it would be doing 3 by doing 2.  Searle's
>argument says that if he himself could pass the Turing test with a
>system composed of rules in books, he would know that he is still not
>doing 3.  If he is not, then what is?  Because his procedure with the

The strong AI position is actually a little more general (as I
understand it) because 2 can do 3, but might also be able to do things
which 3 can't.

"he would know that he is still not doing 3." WHAT?!?!

How in the world does he know what 3 is?! If he knows what 3 (human
thinking) is, then why doesn't he publish a book and we can give all the
psychologists in the world something else to do!

-Karl		kpfleger@phoenix.princeton.edu
		kpfleger@pucc

jgk@osc.COM (Joe Keane) (01/09/90)

I think Mr. Jones and Mr. Pfleger have hit on a key point, and we should
pursue it further.  In my opinion, Searle's argument is basically an
appeal to common-sense reasoning.  There's nothing inherently wrong with that,
but I think that in this case we should examine it carefully.

The terms `thinking' and `understanding' are key in his argument, and we are
expected to draw conclusions based on our interpretation of their meaning.
These are familiar, commonly used terms, and most people would not believe
there can be a great dispute about their meaning.  After all, we know when we
are thinking, and we have a good idea when we understand something or when we
don't.  Furthermore, although we know it's impossible to know other people's
thoughts, we often have a good idea what they are thinking, and whether or not
they understand something.  This comes from what they say in conversation, and
often from facial expressions and other body language.

However, when we apply these terms to things other than people similar to us,
some problems start to appear.  People aren't even consistent about whether
animals can think or understand.  I may say that a language understands how to
manipulate complex numbers, or that a program is thinking about the result I
want.  Some may criticise this usage as being overly creative; the interpreter
is executing a predefined algorithm, so surely it doesn't understand anything.
But I believe this is just a language usage issue.

Now, to the main point.  There is no test we can perform to determine for sure
whether a system understands something or not; the best we can do is ask it.
Certainly we can try to make an operational definition of understanding, but
Searle specifically says that's not what he's talking about.  In fact, there
is no experiment we can come up with to determine whether or not Searle's
argument is correct, or even to give evidence either way.  Now, as scientists,
we should start to be a bit distressed that we're arguing so much about
something which has no basis in reality.

The terms `thinking' and `understanding' now sit with similar terms like `free
will' and `intention'.  The property these terms have in common is that they
do not refer to any property in the real world; they are concepts completely
invented and defined by humans.  As far as I'm concerned, these terms are for
philosophers to argue about, and scientists should try to avoid getting mired
in what is strictly a philosophical debate.

utility@quiche.cs.mcgill.ca (Ronald BODKIN) (01/09/90)

In article <12760@phoenix.Princeton.EDU> kpfleger@phoenix.Princeton.EDU (Karl Robert Pfleger) writes:
>"he would know that he is still not doing 3." WHAT?!?!
>
>How in the world does he know what 3 is?! If he knows what 3 (human
>thinking) is, then why doesn't he publish a book and we can give all the
>psychologists in the world something else to do!
	An excellent point -- I would say he BELIEVES he is not doing 3,
but there is a large gap between opinion and knowledge.
	As to the question about defining understanding, I think the point
of this debate is two-fold: 1) people ARE implicitly arguing about
what understanding is (when Turing proposed his test, he was trying to figure
out some criterion for intelligence -- and in reply to a previous posting
thereon, I think it was intended as a SUFFICIENT condition, although it
was well pointed out that it may NOT BE), and 2) after establishing some
criteria for that, they THEN argue about what is feasible in AI.  And to a
large extent, you're right that much of that is trivial, given certain
notions about what understanding is.
		Ron