[comp.ai] letter to THE NEW YORK REVIEW concerning AI

smoliar@vaxa.isi.edu (Stephen Smoliar) (02/09/89)

                                       February 8, 1989







New York Review
250 West 57th Street
New York, NY 10107



To whom it may concern,
The exchange over "Artificial Intelligence and the Chinese Room" in the
February 16, 1989 New York Review was a great disappointment, since on
the basis of the published texts, it would appear that neither Elhanan
Motzkin nor respondent John Searle was writing from a position of
experience in artificial intelligence.  Fortunately, Searle was good
enough to set straight most of Motzkin's naive understanding of the
work of Alan Turing, not to mention the fact that Motzkin missed the
whole point of the original Chinese room argument.  However, this left
Searle with the last word, so that the point he wanted to make
appeared to remain standing soundly:

    . . . a digital computer is a device which manipulates symbols,
    without any reference to their meaning or interpretation.  Human
    beings, on the other hand, when they think, do something much more
    than that.  A human mind has meaningful thoughts, feelings, and
    mental contents generally.  Formal symbols by themselves can never
    be enough for mental contents, because the symbols, by definition,
    have no meaning (or interpretation, or semantics) except insofar
    as someone outside the system gives it to them.


     This argument first appeared in 1980, in the article "Minds,
Brains, and Programs" in The Behavioral and Brain Sciences; and as
recently as November 17, 1988, Searle espoused essentially the same
view in a lecture at UCLA.  However, a good deal has been achieved in
the study of mind over the intervening eight years, making it
worthwhile to enquire whether Searle's argument has weathered the
progress of more recent insights.  Most notable is the observation
that Marvin Minsky's recent contribution in The Society of Mind has
not prompted Searle to make even minor revisions of his arguments.

     One of the most difficult obstacles to trying to reason through
Searle's logic is his tendency to play rather fast and loose with
words like "understanding" and "knows."  Thus, assuming himself to be
the symbol-processing man manipulating Chinese symbols in a convincing
manner, he is still willing to state baldly, "It is just a plain fact
about me that I do not understand Chinese."  This "plain fact,"
however, might be questioned in light of two other (hopefully sounder)
plain facts:  First, Searle is never willing to say enough about what
constitutes understanding to support why he should come to so obvious
a conclusion.  Second, because he dodges the issue of understanding,
he does not seem willing to acknowledge that introspection may
ultimately be a very poor judge of his understanding.  If some body of
native Chinese speakers are all willing to acknowledge that he
understands Chinese, regardless of the specific means for exhibiting
his convincing behavior, who is he to argue on the basis of his
potentially deceptive powers of introspection?

     Minsky, on the other hand, is less inclined to legerdemain with
highly charged words.  (Indeed, one of the more memorable aphorisms
from The Society of Mind is:  "Words should be our servants, not our
masters."  [7.9])  Thus, he is able to develop a "working definition"
of understanding based on an ability to discriminate differences (this
is in Chapter 23); and there is nothing about this definition which
would deny that Searle, or anyone else in his Chinese room, is
actually understanding Chinese.  Now the point of this argument is not
to assert that Minsky is right and Searle is wrong.  Rather, it is
simply to observe that much of Searle's argument rests on claims that
he wishes to pass off as obvious.  There seems to be enough
potentially contrary evidence to indicate that, while these
observations may be obvious to Searle, he ought to draw upon sounder
forms of argument to convince the rest of us.



                                       Sincerely,



                                       Stephen W. Smoliar

bwk@mbunix.mitre.org (Barry W. Kort) (02/10/89)

Bravo, Stephen.  Nicely done.

--Barry Kort

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/10/89)

smoliar@vaxa.isi.edu (Stephen Smoliar) of USC-Information Sciences Institute
posted his letter to THE NEW YORK REVIEW concerning AI in which he wrote:

" Searle... play[s] rather fast and loose with words like
" "understanding"... he is still willing to state baldly, "It is just a
" plain fact about me that I do not understand Chinese." This "plain
" fact," however, might be questioned... Searle is never willing to say
" enough about what constitutes understanding to support why he should
" come to so obvious a conclusion... he does not seem willing to
" acknowledge that introspection may ultimately be a very poor judge of
" his understanding. If some body of native Chinese speakers are all
" willing to acknowledge that he understands Chinese... who is he to
" argue on the basis of his potentially deceptive powers of introspection?

Look, I'm a critic of Searle's, but with enemies like this, Searle can
afford unilateral disarmament! Read the above passage again, and
look what you're brushing aside with this sort of hand-waving! I'm not
the final authority on whether or not I understand Chinese? Let's hope
it's not up to a body of natives of any kind to "acknowledge" that I'm
not in pain either! "Fast and loose" indeed; to criticize Searle's
argument one must first set aside one's current dogmas or wishful
thinking and UNDERSTAND it!

" a good deal has been achieved in the study of mind over these
" intervening eight years... Most notable is the... recent contribution
" of Marvin Minsky in The Society of Mind... there is nothing about
" [Minsky's] definition which would deny that Searle, or anyone else in
" his Chinese room, is actually understanding Chinese.

Without prejudice as to whether or not much has been achieved in the
study of mind lately, surely this is not a matter of "definition," and
if there is a theory in which it is, so much the worse for that theory.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

smoliar@vaxa.isi.edu (Stephen Smoliar) (02/13/89)

Stevan Harnad apparently wishes to take issue with my having observed that
Searle is playing "fast and loose with words like 'understanding.'"
Unfortunately, I can only come away with the impression that Stevan,
himself, is not doing any better at it!  In a review of THE SOCIETY OF MIND
that I have recently completed and submitted, I begin with the observation
that a key theme of this book "is that the study of mind is misguided
by confused assumptions about what is simple and what is complicated."
One may, perhaps, best appreciate this theme by examining the Glossary
of the book.  If we look up "intelligence," we find:  "A term frequently used
to express the myth that some single entity or element is responsible for the
quality of a person's ability to reason."  Under "consciousness" Minsky writes,
"the word is used mainly for the myth that human minds are 'self-aware' in the
sense of perceiving what happens inside themselves."  Sensitive souls may be
upset that Minsky should be so bold as to use a word like "myth;"  but as in
the story of the mule and the two-by-four, when we get too entrenched in our
beliefs, it often takes strong words like "myth" to revive our ability to
question those beliefs.

With regard to the current argument, I do not think it is an accident that
there is no entry for "understanding" in Minsky's glossary.  Searle may think
there are plain facts about understanding.  Harnad apparently would rather
find HIS plain facts in the area of MISunderstanding.  However, Minsky seems
to feel that the word has to be treated with even more delicacy than terms
such as "intelligence," "consciousness," and "memory."  This is not to say
that he has ignored the issue of understanding.  I believe I made this quite
clear in my original article, and it should be apparent to anyone who has
spent any serious time with THE SOCIETY OF MIND.  The important point is
that Minsky has shown more respect to the concept than the combined forces
of Searle and Harnad have yet managed to muster with their philosophical word
games!

I also fear that in his zeal to argue about "understanding," Harnad seems to
have missed the "bottom line" of my argument.  I would now be willing to
generalize my conclusion to include Harnad as well:  much of their argument
rests on claims that they wish to pass off as obvious.  Apparently, what is
obvious to both Searle and Harnad is still questionable to some of us who
spend more time in the trenches of our code than in the speculation of others'
achievements.  Arguments are resolved not by claims of the obvious but by
recognition of when more intense scrutiny is in order.

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/15/89)

smoliar@vaxa.isi.edu (Stephen Smoliar)
of USC-Information Sciences Institute writes:

" Stevan Harnad apparently wishes to take issue with my having observed
" that Searle is playing "fast and loose with words like
" 'understanding.'" Unfortunately, I can only come away with the
" impression that Stevan, himself, is not doing any better at it!... I do
" not think it is an accident that there is no entry for "understanding"
" in Minsky's glossary. Searle may think there are plain facts about
" understanding. Harnad apparently would rather find HIS plain facts in
" the area of MISunderstanding....  Minsky has shown more respect to the
" concept than the combined forces of Searle and Harnad have yet managed
" to muster with their philosophical word games!
" 
" Apparently, what is obvious to both Searle and Harnad is still
" questionable to some of us who spend more time in the trenches of our
" code than in the speculation of others' achievements. Arguments are
" resolved not by claims of the obvious but by recognition of when more
" intense scrutiny is in order.

Tell me, down there in the trenches, can you still tell the difference
between this: (1) "Koran reggel ritkan rikkant a rigo" and this: (2)
"How much wood could a woodchuck chuck if a woodchuck could chuck
wood"? Call that difference "X." X is all that's at issue in the
Chinese Room Argument. No word games. Sometimes it's good to come out of
the trenches and breathe some fresh air. (By the way, you seem to have
completely missed my point about misunderstanding...)
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

tsf@PROOF.ERGO.CS.CMU.EDU (Timothy Freeman) (02/17/89)

In article <Feb.15.00.46.48.1989.18756@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>Tell me, down there in the trenches, can you still tell the difference
>between this: (1) "Koran reggel ritkan rikkant a rigo" and this: (2)
>"How much wood could a woodchuck chuck if a woodchuck could chuck
>wood"? Call that difference "X." X is all that's at issue in the
>Chinese Room Argument.

Seems like X is a label for a subjective phenomenon.  

The funny thing is, people are able to respond to sequences of words
sometimes without having "X" (in hypnosis or experiments with subliminal
messages, for instance).

Another funny thing is that I have had "X" even though the message
didn't make enough sense to me for me to be able to use it (several
times in college, for instance).

Do you want "understanding" to mean the subjective sense of knowing
what is going on (which seems to be "X") or the behavioral aspect
(which would require some sort of behavioral test to show that the
"understander" is actually able to make use of the information)?

The subjective sensation is, in itself, totally useless.  The behavior
of the participating systems is what matters.
-- 
Tim Freeman
Arpanet: tsf@theory.cs.cmu.edu
Uucp:    ...!seismo.css.gov!theory.cs.cmu.edu!tsf
(Or maybe try changing "theory" to "proof.ergo" in any of the above.)

smoliar@vaxa.isi.edu (Stephen Smoliar) (02/17/89)

In article <Feb.15.00.46.48.1989.18756@elbereth.rutgers.edu>
harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>
>Tell me, down there in the trenches, can you still tell the difference
>between this: (1) "Koran reggel ritkan rikkant a rigo" and this: (2)
>"How much wood could a woodchuck chuck if a woodchuck could chuck
>wood"? Call that difference "X." X is all that's at issue in the
>Chinese Room Argument. No word games.
>
Remember, all I want to argue is that there is nothing OBVIOUS about that
difference X.  Perhaps I might be able to illustrate this point with an
alternative set of examples.  Both of these sentences come from the same
source, but I would argue that there will be some number of readers who
would be willing to say that they understand one and not the other.  Thus,
I would be interested in any hypotheses as to why the difference between
them is "obvious."  Here are the sentences:

	(1)  Where is my gracious Lord of Canterbury?

	(2)  Howbeit, they would hold up this Salique law
	     To bar your Highness claiming from the female,
	     And rather choose to hide them in a net
	     Than amply to imbar their crooked titles
	     Usurp'd from you and your progenitors.

(Note:  I, personally, have no trouble with either sentence;  but I attribute
that to my familiarity with the text.  Since Searle likes GEDANKEN experiments,
consider the case of, say, a ten-year-old American child, who "obviously" (to
Searle at least) understands English.  How would he react to these two
sentences?)

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/19/89)

tsf@PROOF.ERGO.CS.CMU.EDU (Timothy Freeman) of
Carnegie-Mellon University, CS/RI wrote:

" Seems like X [what it's like to understand vs. not understand a given
" language] is a label for a subjective phenomenon... [But] people are
" able to respond to sequences of words sometimes without having "X" (in
" hypnosis or experiments with subliminal messages, for instance)...
" [Moreover,] I have had "X" even though the message didn't make enough
" sense to me for me to be able to use it (several times in college, for
" instance).

(1) Yes, understanding has subjective as well as objective manifestations.

(2) These tend to swing together, but as a basis for having a mind (in
cases of doubt, like stones, thermostats, machines, and in fact any
body other than one's own), subjective understanding is surely primary.

(3) The instances you describe of a dissociation between subjective
understanding and its objective manifestations (i.e., cases in which
they DON'T swing together) in OURSELVES (in whom the overall presence
of understanding [or a mind] itself is NOT in doubt) are simply
NOT RELEVANT to cases other than ourselves (in which it is whether or not
they understand [or have a mind] at all that is at issue.) First show
that your candidate has understanding (or a mind) at all; until then,
projecting our own phenomenology onto it is not supportive: It's circular.

(4) It is indeed the subjective manifestation of understanding Chinese
that Searle is denying of himself, as well as the computer, in his
Chinese Room Argument: That's all. (He is also ASSUMING, for the sake
of argument, that all objective manifestations of understanding could
be successfully achieved through symbol manipulation alone, as "Strong
AI" supposes; that assumption, of course, may well be wrong [because of
the symbol grounding problem, as I argue in my own paper], but that's a
different issue.)

" Do you want "understanding" to mean the subjective sense of knowing
" what is going on (which seems to be "X") or the behavioral aspect
" (which would require some sort of behavioral test to show that the
" `understander' is actually able to make use of the information)?
" The subjective sensation is, in itself, totally useless. The behavior
" of the participating systems is what matters.

The subjective sensation may indeed be totally useless (though it
certainly doesn't SEEM that way), but nevertheless it is the only
thing distinguishing us from a mindless automaton, and its presence or
absence is precisely what's at issue in The Chinese Room Argument.
That's why I keep patiently reminding enthusiasts that this is not a
"definitional" issue at all, i.e., it does not depend on what I "want
`understanding' to mean"! Otherwise we could simply "define"
understanding as "behaving as if one understands" and that would be
all there was to it!

Searle's argument is specifically designed to remind us (or at least
those of us who are not so committed to a "definition" as to have
forgotten what it was all about to have a mind before the wishful
thinking metastasized) that -- at least for the teletype version of the
turing test -- "as if" can't be the real thing!
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

harnad@elbereth.rutgers.edu (Stevan Harnad) (02/19/89)

smoliar@vaxa.isi.edu (Stephen Smoliar) of USC-Information Sciences Institute
wrote:

" Remember, all I want to argue is that there is nothing OBVIOUS about that
" difference X [between understanding and not understanding a language]...
" [e.g., compare]: `Where is my gracious Lord of Canterbury?' [and]
" `Howbeit... amply to imbar their crooked titles Usurp'd from you and
" your progenitors.' [How would]... a ten-year-old American child, who
" `obviously' (to Searle at least) understands English... react to these
" two sentences?

As pointed out in prior iterations, DEGREE of understanding, or
MISunderstanding, or a FALSE POSITIVE sense of understanding are not at
issue in Searle's Chinese Room. What's at issue is whether there's ANY
understanding AT ALL (of Chinese) going on in there. How much a
10-year-old might or might not understand of Shakespeare's English is simply
beside the point. The 10-year-old and Searle understand SOME English,
but NO Chinese. That's all there is to it. If you find continuous text
too ambiguous, try isolated words in English and Chinese. Searle will
still understand (many of) the former and NONE of the latter. In
turning away from the obvious you are simply misconstruing Searle's
argument.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (02/21/89)

In article <4297@pt.cs.cmu.edu> tsf@PROOF.ERGO.CS.CMU.EDU (Timothy Freeman) writes:
>The subjective sensation is, in itself, totally useless.  The behavior
>of the participating systems is what matters.

Absolute balls.

The subjective feeling of understanding is vital to much human action.
Action without understanding is regarded as reckless and subject to the full continuum
of moral disapprobation.

Timothy's myopic blurt is further evidence of the positivist drivel which infests
the underlying dogmas of AI research(ers).  Human behaviour is characterised by
agency and intention based on personal understanding of situations.  Such a characterisation
cannot be applied to programs.

And to Stephen and his trench-footed collaborators, just remember what you find as
you dig a trench deeper, and what you can see when you're in it.

Put your shovel away and sit out on the meadow awhile.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

barash@mmlai.UUCP (Steve Barash) (02/24/89)

In article <2447@crete.cs.glasgow.ac.uk>, (Gilbert Cockton) writes:
> In article <4297@pt.cs.cmu.edu> (Timothy Freeman) writes:
> >The subjective sensation is, in itself, totally useless.  The behavior
> >of the participating systems is what matters.
> 
> Absolute balls.
> 
> The subjective feeling of understanding is vital to much human action.

Does this mean you are measuring the importance of subjective feeling by
its effect on observable action?  If so, Tim's point holds.  If not, how
are you measuring its importance?

> Gilbert Cockton, Department of Computing Science,  The University, Glasgow
> 	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

Steve Barash @ Martin Marietta Labs / Artificial Intelligence Department
ARPA: barash@mmlai.uu.net
UUCP: uunet!mmlai!barash

jeff@aiai.ed.ac.uk (Jeff Dalton) (02/25/89)

I don't always agree with Stevan Harnad, but this time I do.  It's
easy to attack Searle's arguments with a behaviorist insistence that
"only the behaviour maters" or that "consciousness" is a "myth" (after
all, how can you prove to someone else that your consciousness makes a
difference?), and easy to try to put the burden of proof on the other
guy by demanding "objective evidence" and "definitions".  This may be
effective in debate, since hard evidence is always more immediately
convincing ("seeing is believing"), but doesn't really settle the
issue.  I'm sometimes reminded of an old joke:

  Two behaviorists meet in the morning, and one says to the other,
  "you're fine, how am I?"

Now, the behaviorist attitude can be OK.  Perhaps you just care about
getting intelligent behavior and think questions of "understanding"
are a waste of time.  Well, no one says you have to care about such
questions, but there may still be some interest there for someone else.

You might say that if subjectivity makes a difference it should show
up in behavior somewhere.  Well, maybe it does.  But we certainly
don't know enough now to say where.  After all, we just don't know how
far we can get by considering the behavior alone.  And the best time
to ask whether machines understand may be after we have some machines
with intelligent behavior, not now when we don't know much of anything
about what such machines will be like.

What does Searle's argument actually show?  Suppose we have Searle,
and he's internalized all the instructions for answering questions in
Chinese, and he's actually able to answer such questions.  Searle then
says that he does not understand Chinese.  I think it's pretty clear
what he means.  Consider the difference between what you do when
replying to a question in a language you understand and what Searle
would be doing: going through some complicated procedure to produce an answer,
which still means nothing to him, but which turns out to make sense to
someone who knows Chinese.  Perhaps this all becomes pretty much
automatic, so that Searle can do it quickly.  But he still can't say
in English what his answers mean -- he has to ask someone who knows
Chinese.  I don't think we need a precise definition of "understanding"
to see that there's a difference here.  We don't even need to decide
if this difference involves "understanding" or some other thing that
should be called something else.
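
To make the contrast concrete, here is a toy sketch in Python (my own
illustration of the kind of purely formal procedure at issue, not
Searle's actual thought experiment; the rule entries are romanized
placeholders I made up, not real rules):

# A rule table mapping input symbol strings to output symbol strings.
# Nothing in the program has, or needs, any access to what the symbols
# mean; the "answers" are produced by lookup alone.
RULES = {
    "ni chi le ma": "chi le, xie xie",
    "ni hao ma":    "wo hen hao",
}

def room(question):
    """Return whatever the rule book dictates, or a stock evasion."""
    return RULES.get(question.strip().lower(),
                     "dui bu qi, qing zai shuo yi bian")

print(room("ni hao ma"))     # looks like an understander from outside,
print(room("ni chi le ma"))  # but it is only symbol matching

From the outside the exchanges look fine; inside there is nothing but
matching of uninterpreted strings, which is Searle's point.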

But what's happened, in effect, is that Searle's brain is now running
a program that answers Chinese questions.  Searle may not understand
Chinese, but for all we know his running of the program amounts to
having a separate entity inside him that does.

bwk@mbunix.mitre.org (Barry W. Kort) (02/26/89)

In article <2447@crete.cs.glasgow.ac.uk>, (Gilbert Cockton) writes:

 > The subjective feeling of understanding is vital to much human action.

In article <509@mmlai.UUCP> barash@mmlai.UUCP (Steve Barash) responds:

 > Does this mean you are measuring the importance of subjective
 > feeling by its effect on observable action?  

I don't know how others experience "the subjective feeling of
understanding", but I experience it as a neurochemical rush
when the entropy of my mental models undergoes a dramatic decrease.
This happens when the pieces of my mental jigsaw puzzle are finally
arranged such that every piece fits in a tight interlocking lattice,
and a big picture is revealed in the freshly woven tapestry.
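
For what it's worth, here is a toy illustration of that kind of entropy
drop (a made-up example in Python, not a claim about what actually
happens neurochemically):

import math

def entropy(p):
    # Shannon entropy, in bits, of a discrete probability distribution.
    return -sum(x * math.log2(x) for x in p if x > 0)

before = [0.25, 0.25, 0.25, 0.25]   # four rival explanations, no idea which
after  = [0.94, 0.02, 0.02, 0.02]   # one piece of evidence makes one click

print("before: %.2f bits" % entropy(before))   # 2.00
print("after:  %.2f bits" % entropy(after))    # about 0.42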

My scientific mind calls this mental event "Aha! Insight".  My
spiritual mind calls it "Epiphany".  I claim it is better than
eating chocolate to get the same sensation.  

--Barry Kort

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (02/28/89)

In article <509@mmlai.UUCP> barash@mmlai.UUCP (Steve Barash) writes:
>Does this mean you are measuring the importance of subjective feeling by
>its effect on observable action?  If so, Tim's point holds.  If not, how
>are you measuring its importance?

I am 'measuring' the importance of subjective feeling by the importance which
individuals attribute to it in explaining their behaviour.

I have coached gymnastics and used to compete.  I know how to stand-in (or 'spot'
for some Americans) for a wide number of moves.  I attempt to teach how to stand in
to a novice coach.  I take her/him through the principles and set up their rule-base in
the process :-)  I demonstrate the principles with young gymnasts.  I ask the
novice coach to stand in.  He/she says that he/she doesn't understand.  I ask her/him
to talk me through standing-in for the move.  She/he parrots out the rule-base
verbatim.  Their knowledge-base is perfect.

But they say they don't understand, and don't want to stand in.

What observable actions are there here?  An utterance - "I do not understand", a
recital ("To stand in for a back somersault from a round off, stand on the side to
which the gymnast turns in the round off, slightly forward of where they will place
their hands, ..." etc. etc) and a refusal (perhaps implicit in the utterance).

Contrast another scenario ..

The utterance - "I do understand, but I'm unsure, I don't want to get it wrong. I
know that (recital ("To stand in for a back somersault from a round off, stand on the side to
which the gymnast turns in the round off, slightly forward of where they will place
their hands, ..." etc. etc)) and a refusal (implicit in the utterance).

In both cases the observable actions are the same, a refusal to do something.
But the causes are different, and cannot be observed.  We have to believe the
people involved or convince them otherwise.  Do you count speech as an observable
action?
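
If it helps, here is the pair of scenarios reduced to a toy Python sketch
(mine, and obviously a caricature): two coaches with different hidden
states produce exactly the same observable refusal.

class Coach:
    def __init__(self, can_recite_rules, feels_understanding, confident):
        self.can_recite_rules = can_recite_rules        # testable on paper
        self.feels_understanding = feels_understanding  # hidden
        self.confident = confident                      # hidden

    def asked_to_stand_in(self):
        # The only observable action: stand in, or refuse.
        if self.feels_understanding and self.confident:
            return "stands in"
        return "refuses"

novice = Coach(can_recite_rules=True, feels_understanding=False, confident=False)
unsure = Coach(can_recite_rules=True, feels_understanding=True,  confident=False)

print(novice.asked_to_stand_in(), unsure.asked_to_stand_in())  # refuses refuses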

Can we tell someone that they do understand when they don't?  Can we
tell someone that they should be confident when they are not?  

Subjective states here can be 'observed' (by a blind person at that) to influence
people's behaviour.  Knowledge of a rule-base is not a sufficient condition for
action.  Anyone who believes otherwise is trapped in some illusion forced on them by
their adherence to strong AI.  I'd love to video a day in the life of a strong AI
guru to show them how agency, intention and understanding are vital to the way they
live their lives.

Let's stick to the gymnastics example.  Would any advocate of strong AI with
children or grandchildren agree to take classroom instruction on standing in for
somersaults, and when paper-based tests showed that their knowledge-base was a
perfect replica of mine (how do we show that though :-)), they would then go and
stand-in while their (grand)child attempted somersaults?  Why not (I accept dropping
out because of lack of strength or reaction times, but then we could train these
up)?

Humans have the good sense to refuse to act when they realise that their
understanding is inadequate.  This lack of understanding cannot be measured by any
paper-based psychometric test.  It is not a question of what one can regurgitate.
That little old homunculus is right in there saying "no way Jose"!

The problem with AI-based systems is the lack of any facility for determining when
they do not understand something.  When they do not "know" is a different issue.

The upshot is that a responsible AI-based system is impossible.  Responsibility must
lie with the programmers.

How many AI programmers would take responsibility for anything they programmed?

-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (03/03/89)

In article <2481@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>The problem with AI-based systems is the lack of any facility for determining when
>they do not understand something.  When they do not "know" is a different issue.
>
>The upshot is that a responsible AI-based system is impossible.  Responsibility must
>lie with the programmers.
>
>How many AI programmers would take responsibility for anything they programmed?
>

I think we argued about this before, but would your objection also apply
to neural networks?  The "programming" here comes in the form of training,
in many cases.  The system can be said to "understand" how to do something
(say, a robot that spots for a gymnast) when it performs correctly under 
supervision by someone who "understands" correct performance.  Or do you
not consider this AI?  The robot's brain could certainly have some notion
of whether it was doing well even if it needed feedback from its trainer,
which could constitute responsibility.

P.S.  Your terminal must have more than 80 character lines.  All of your
postings seem to have lines that wrap around.

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (03/10/89)

In article <2369@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>In article <2481@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>The system can be said to "understand" how to do something
>(say, a robot that spots for a gymnast) when it performs correctly under 
>supervision by someone who "understands" correct performance.

Agreed, almost.  At this stage, the supervisor "trusts" the robot.  It
has to demonstrate some period of continued competence before it would
be accepted as having a good understanding.

At this point, most problems do not require further "training", just a
comment.  As far as I know, there is only one way to "train" a neural
network, whereas the growth of understanding in the presence of an
expert passes through several forms of training, which may eventually
reduce to the shake of a head and a gesture.

Is there a sense in which neural network training requires

	a) an artificial, well-designed task 
	b) continued practice over this task.

If so, then ho, ho, ho, because life just ain't so simple.  The good
guys in this world are the ones who don't need Skinnerian programmed
instruction.

The topic is life, not the behavioural modification of the mentally ill
or deficient.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (03/13/89)

In article <2564@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>
>comment.  As far as I know, there is only one way to "train" a neural
>network, whereas the growth of understanding in the presence of an
>expert passes through several forms of training, which may eventually
>reduce to the shake of a head and a gesture.
>
>Is there a sense in which neural network training requires
>
>	a) an artificial, well-designed task 
>	b) continued practice over this task.
>
>If so, then ho, ho, ho, because life just ain't so simple.  The good
>guys in this world are the ones who don't need Skinnerian programmed
>instruction.
>
>The topic is life, not the behavioural modification of the mentally ill
>or deficient.

There are several ways to train neural networks, and we are continually
discovering new ones.  None of the ways are likely to be the same as
the way real neural networks learn (yet).  Another way to make a
neural network is to hard-wire it so that it does its tasks well 
from the beginning.  The "good guys" like you and me who aren't mentally
deficient had a lot of our connections already made when we came out
of our mammas.  Figuring out what those connections are is the province
of the neuroanatomists as well as the cognitive scientists.

Neural network training often produces some surprises even to the
trainers.  For example, if you look at Hinton's and Rumelhart's
program where they trained a network to distinguish the letter
C from the letter T (letters could be rotated in 90-degree increments),
some of the internal neurons developed as center-surround detectors
without being told to do so.  These detectors are found in the retina
and brain of all species at least down to the toad.
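
For anyone who hasn't seen that sort of experiment, here is a bare-bones
Python sketch of the same idea (my own simplification, not the
Hinton/Rumelhart program; the grid size, architecture, and training
details are invented for the illustration):

import numpy as np

rng = np.random.default_rng(0)

def letter_T():
    g = np.zeros((5, 5))
    g[0, :] = 1          # top bar
    g[:, 2] = 1          # vertical stem
    return g

def letter_C():
    g = np.zeros((5, 5))
    g[0, :] = 1          # top bar
    g[4, :] = 1          # bottom bar
    g[:, 0] = 1          # left side (open to the right)
    return g

def rotations(g):
    # The four 90-degree rotations of a pattern.
    return [np.rot90(g, k) for k in range(4)]

# Training set: every rotation of each letter, flattened to a vector.
X = np.array([r.ravel() for L in (letter_T(), letter_C()) for r in rotations(L)])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)   # 1 = T, 0 = C

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 6 units, trained by plain back-propagation.
W1 = rng.normal(0, 0.5, (25, 6)); b1 = np.zeros(6)
W2 = rng.normal(0, 0.5, (6, 1));  b2 = np.zeros(1)
lr = 0.5

for epoch in range(5000):
    h = sigmoid(X @ W1 + b1)                       # forward pass
    out = sigmoid(h @ W2 + b2).ravel()
    d_out = (out - y) * out * (1 - out)            # backward pass
    d_h = (d_out[:, None] @ W2.T) * h * (1 - h)    # (squared-error loss)
    W2 -= lr * (h.T @ d_out[:, None]); b2 -= lr * d_out.sum(keepdims=True)
    W1 -= lr * (X.T @ d_h);            b1 -= lr * d_h.sum(axis=0)

print("outputs after training:", np.round(out, 2))
# Looking at the columns of W1 afterwards shows what each hidden unit has
# come to respond to; in the original, larger experiment some hidden units
# developed center-surround-like receptive fields on their own.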

We aren't ready to build a human brain yet (still working on the toad),
but I think this is the right track (rather than using production rules).
We can't be expected to recapitulate a billion years of evolution in
20.