[comp.ai] Denotational confusion

kck@g.gp.cs.cmu.edu (Karl Kluge) (03/05/89)

> From: roelw@cs.vu.nl 
> 
> How then could a universal TM (i.e. a computer) fed with a program which can
> answer questions in Chinese ever come to "know" the denotation of the
> symbols it is manipulating? The outcome of its computation is invariant
> under changes of denotation of the symbols it manipulates; the people
> programming the UTM may change the denotation of symbol xyz from chair to
> table or to anything else, without it making the slightest difference to the
> computation.

I can change the denotation of the symbol "symbol" in the above passage from
"symbol" to "soup can". That wouldn't make the slightest difference in the
process by which you generated the passage -- how did *you* ever come to
"know" the denotations of words/symbols in your mind?

As to the assertion that the outcome is invariant under changes in symbol
denotation -- this is true only for trivial meanings of the word "outcome".
Suppose we have the following bit of interaction (Q = kck, A = program):

A -- I'm thinking of an xyz.

Q -- What's an xyz?

A -- Why don't we play a game of 20 questions and see if you can guess?

Q -- O.K. Is it a person, place, or thing?

A -- Thing....(etc.)

Q -- Would I find several xyzs arranged around a table in a typical dining
room?

A -- Yes.

Q -- Do many or most xyzs have a part called the seat?

A -- Yes.

etc.

Changing the denotation of the symbol xyz from "chair" to "table", while it
does not change the progress or output of the program, very definitely does
change the "outcome", in that the program's output makes sense under the
denotation of xyz as "chair", but not under the denotation of xyz as "table".
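
To make that concrete, here is a minimal sketch (mine, purely for
illustration -- the facts and names are made up, not from any actual
program) of how the computation can be invariant while the "outcome"
is not:

# The program's "knowledge" of xyz is pure symbol structure:
FACTS_ABOUT_XYZ = {
    "is it a person, place, or thing?": "thing",
    "would i find several xyzs arranged around a table?": "yes",
    "do many or most xyzs have a part called the seat?": "yes",
}

def answer(question):
    return FACTS_ABOUT_XYZ.get(question.lower(), "I don't know")

# The intended denotation lives entirely outside the program:
for denotation in ("chair", "table"):
    print(denotation, [answer(q) for q in FACTS_ABOUT_XYZ])

# Both runs print identical answers -- the computation is invariant --
# but those answers are coherent only under the denotation "chair".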

In general, it only makes sense to talk about the programmer "changing the
denotation of a symbol in a program" when that change produces corresponding
changes in the program's output behavior, i.e. "changing the denotation of
symbol 'xyz' from 'hide behind the nearest rock' to 'cover yourself with
barbecue sauce and jump up and down and yell'" only makes sense if the
generation of the symbol "xyz" in the program produces the corresponding
difference in behavior. Given that, the program will "learn" the denotations
of its symbols the same way a human does -- by getting kicked in the rear
end when it treats pigs as pens, tigers as tieclips, or cars as cats.
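
A toy version of that learning story (hypothetical throughout -- the
WORLD table below just stands in for the environment doing the
kicking):

import random

SYMBOLS = ["pig", "tiger", "car"]
ACTIONS = ["feed it", "run from it", "drive it", "write with it"]

# What the world actually punishes and rewards:
WORLD = {"pig": "feed it", "tiger": "run from it", "car": "drive it"}

# The agent starts with arbitrary symbol-to-action guesses...
policy = {s: random.choice(ACTIONS) for s in SYMBOLS}

# ...and revises a guess every time the world kicks back.
while any(policy[s] != WORLD[s] for s in SYMBOLS):
    s = random.choice(SYMBOLS)
    if policy[s] != WORLD[s]:         # treated a tiger as a tieclip
        policy[s] = random.choice(ACTIONS)

print(policy)  # settles on the workable denotations

No magic inner connection to pigs or tigers is required; the coupling
through consequences does all the work.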

Look, patterns of neural activation have no magic denotational connection to
the outside world either, other than that natural selection will very
quickly get rid of organisms whose patterns of neural activity don't produce
behavior which is coherent with respect to how the external world operates
(this includes social interactions as well as physical interactions).  

> It seems to me that those who believe a UTM could be programmed into
> understanding the meaning of the symbols it manipulates, either 
> 1. use a nonstandard definition of what a TM is, or 
> 2. use a nonstandard definition of what the denotation of an expression is,
> or
> 3. are in the grip of an ideology which prevents them from seeing a simple 
> truth.

It's interesting that most people on both sides of this debate tend to view
the people on the other side as obtuse and blinded by an ideology, if not
actually brain dead.

On a less serious note...

Well, I guess it's time to clear up any confusion...mail was received
at the account "kck@g.cs.cmu.edu" reading in part as follows:

> Date: Wed, 1 Mar 89 10:03:48 GMT
> To: Karl.Kluge@g.gp.cs.cmu.edu
> Subject: Re: Reply to Harnad re:Chinese Room
> 
> Thanks for the mail.  There was a rumour going round here that Karl Kluge
> was a pseudonym for some other CMU guru, given the meaning of "Kluge" in
> German.

Actually, "Karl Kluge" is *not* a pseudonym for a "CMU guru". It is the name
of an experimental AI program which takes articles off the net and tries to
produce coherent responses. Anyone who has read the nets for any length of
time will recognize this as an extremely ambitious goal given the nature of
much of the input.

The "kluge" (German for "wise or intelligent man") program integrates
results from many other AI research projects:
* the netnews article parser is adapted from the case-frame analysis 
  techniques used in Japanese-English paraphrase/translation work here 

* various results from Jaime Carbonell's work (with others) on reasoning 
  by analogy 

* ideas from the work at Yale on scripts, including a "netnews argument"
  script which goes through stages of anger, denial, bargaining, depression,
  and acceptance.

* semantic net encodings of the articles in "Mind Design" (ed. John
  Haugeland) (note the frequent references to this book in the "kluge"
  postings in the Chinese Room thread)

The above are integrated into a SOAR-like framework that does searches in
"argument spaces" rather than "problem spaces". Arguments are ultimately
represented by propositional units called "quibbles". Strings of quibbles
are called "wits", which are recursively divided into "half-wits", which are
acted on by an "operators-are-standing-by" control structure.
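
For the implementation-minded (offered with exactly the same degree of
seriousness, and with all names invented to match the description
above), the core representation might look like:

class Quibble:
    """An atomic propositional unit."""
    def __init__(self, proposition):
        self.proposition = proposition

class Wit:
    """A string of quibbles."""
    def __init__(self, quibbles):
        self.quibbles = quibbles

    def halve(self):
        """Recursively divide a wit into two half-wits."""
        mid = len(self.quibbles) // 2
        return Wit(self.quibbles[:mid]), Wit(self.quibbles[mid:])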

How well does this program work? Well, let's take the following fragment of
text from an article in the Chinese Room thread:

> Hence the linguistic fact in English (and French, German, Chinese etc? -
> comments please) that any mechanical process cannot possess understanding.
> It is a central feature of "understanding" that mechanical processes are not
> involved.  

Let's feed this into ELIZA/DOCTOR:
*************************************************************************
% /usr/games/doctor

speak up! hit 2 returns after typing

> Hence the linguistic fact in English (and French, German, Chinese etc? - 
> comments please) that any mechanical process cannot possess understanding.

i am not sure i understand you fully

> It is a central feature of "understanding" that mechanical processes are 
> not involved.

suppose It were not a central feature of "understanding" that mechanical
processes are not involved
*****************************************************************

It sort of makes sense, but seems too mechanical. Let's compare it with the
output of the "kluge" program:
*****************************************************************
From: kck@g.gp.cs.cmu.edu (Karl Kluge)
> Hence the linguistic fact in English (and French, German, Chinese etc? - 
> comments please) that any mechanical process cannot possess understanding.
> It is a central feature of "understanding" that mechanical processes are not
> involved.

Huh? Brains understand. Brains are physical objects obeying physical laws.
Either we regress into dualism or we acknowledge that understanding arises
from the physical interactions of the parts of the brain. Even Searle
accepts that.
*****************************************************************

Is the output of the "kluge" program an improvement over the output of the
"ELIZA" program? Hard to tell. 

As an interesting sidelight, the program was originally supposed to be
integrated with a speech recognition facility and used to put responses onto
President Reagan's teleprompter at press conferences. It was hoped that
this would result in more coherent responses than those the President was
generating on his own.  This part of the research was cancelled after the
unfortunate "So, Mr. Donaldson, tell me more about your mother" incident at
an early experimental test.

The experiment (or "hoax" if you're feeling less charitable) was almost
exposed when the program's name wound up in several issues of AILIST in the
subject line of a bunch of articles (only one of which actually quoted text
produced and posted by the program). In order to prolong the experiment, I
was approached and asked to pose as "Karl Kluge" when people came to visit.
My real name is Harry Bovik, Jr., and I'm the son of the PI on the "kluge
project" research contract.

Dad hopes to present the Kluge Program work at IJCAI in Detroit. His
interest in AI may come as a surprise to those who only know him from his
earlier work on "thaumaturgic circles, rings, and fields", done jointly with
Dana Scott, P. E. I. Bonewits, and Wingate Peaslee (Peabody Chair in
Psychology, Miskatonic University). Abstract math and the relationship
between certain solutions to General Relativity and descriptions in the
_Necronomicon_ and _The Clavicle of Solomon_ may seem a far cry from AI, but
he sees it as tying in with his work in cognitive parapsychology.

In any case, I hope this clears up any confusion regarding the source of the
postings done under the name "Karl Kluge".

Harry Bovik, Jr. 

Send mail to "kck@g.cs.cmu.edu". Opinions are mine, and do not reflect those
of DARPA, the School of Computer Science, or anyone else for that matter.

Anyone trying to send email to my father should be aware that radio contact
has been lost with the Miskatonic expedition in Australia as a result of an
unexpected sandstorm, and he may be away from CMU longer than he originally
planned...

-- 

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (03/06/89)

From article <4415@pt.cs.cmu.edu>, by kck@g.gp.cs.cmu.edu (Karl Kluge):
" > From: roelw@cs.vu.nl 
" > 
" > How then could a universal TM (i.e. a computer) fed with a program which can
" > answer questions in Chinese ever come to "know" the denotation of the
" > symbols it is manipulating? The outcome of its computation is invariant
" > under changes of denotation of the symbols it manipulates; ...
" ...
" As to the assertion that the outcome is invarient under changes in symbol
" denotation -- this is true only for trivial meanings of the word "outcome".

I agree, but couldn't we reach the conclusion more immediately by
considering the outcome to be given in terms of the denotations of any
symbols it contains?  Then it is not, in general, invariant.

		Greg, lee@uhccux.uhcc.hawaii.edu

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (03/10/89)

In article <4415@pt.cs.cmu.edu> kck@g.gp.cs.cmu.edu (Karl Kluge) writes:
>Actually, "Karl Kluge" is *not* a pseudonym for a "CMU guru". It is the name
>of an experimental AI program which takes articles off the net and tries to
>produce coherent responses. Anyone who has read the nets for any length of
>time will recognize this as an extremely ambitious goal given the nature of
>much of the input.

Then I was right!

Karl Kluge and the AI programs like her are unread, despite Karl's
generated anger (joy <3) last summer that I was suggesting that AI
types were unread and unwashed social misfits.

Of course, I was suggesting nothing of the sort, but it is clear that
Karl Kluge at least is unwashed, and if she's running on anything less
than dozens of Crays then she won't have too much time for partying
either!

As for reading the news all day, I can't think of a better way to end
up uninformed, misinformed and illiterate :-) :-0 :-)
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert