[comp.ai] Gilbert Cockton and AI

kck@g.gp.cs.cmu.edu (Karl Kluge) (05/29/88)

In response to various posts...

> From: gilbert@cs.glasgow.ac.uk (Gilbert Cockton)
>  AI depends on being able to use written language (physical symbol
>  hypothesis) to represent the whole human and physical universe.  AI
>  and any degree of literate-ignorance are incompatible.  Humans, by
>  contrast, may be ignorant in a literate sense, but knowledgeable in
>  their activities.  AI fails as this unformalised knowledge is
>  violated in formalisation, just as the Mona Lisa is indescribable.
>  Philosophically, this is a brand of scepticism.  I'm not arguing that
>  nothing is knowable, just that public, formal knowledge accounts for
>  a small part of our effective everyday knowledge (see Heider).

This shows the extent of your misunderstanding of the premises underlying
AI. In particular, you appear to have grossly misread the "physical symbol
hypothesis" (sic).

First, few AI researchers (if any) would deny that there are certain motor 
functions or low-level perceptual processes which are not symbolic in nature.

Second, the implicit (and unproven) assumption in the above quote is that 
knowledge which is not public is also not formal, and that the inability to 
access the contents of an arbitrary symbol structure in the mind implies the 
absence of such symbol structures. Nowhere does the Physical Symbol System
Hypothesis imply that all symbol structures are accessible by the conscious
mind, or that all symbols in the symbols structures will match concepts 
that map onto words in language.
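
A concrete (if toy) illustration of that last point: nothing stops a
program from building perfectly formal symbol structures out of internally
generated tokens that name no English word at all. The following C sketch
is purely illustrative -- my own toy example, not anyone's actual
architecture:

    #include <stdio.h>

    /* A symbol is just a distinguishable token; nothing requires
       that it name a word in any natural language. */
    typedef struct { int id; } Symbol;

    /* A symbol structure: one symbol standing in relation to others. */
    typedef struct Node {
        Symbol head;
        struct Node *args[2];
    } Node;

    static int counter = 0;
    static Symbol gensym(void) { Symbol s = { counter++ }; return s; }

    static void show(const Node *n, int depth) {
        if (!n) return;
        printf("%*ss%04d\n", depth * 2, "", n->head.id);
        show(n->args[0], depth + 1);
        show(n->args[1], depth + 1);
    }

    int main(void) {
        /* A three-symbol structure.  It is formal through and
           through, yet none of its symbols maps onto a word, and
           nothing says a conscious mind could report its contents. */
        Node a = { gensym(), { NULL, NULL } };
        Node b = { gensym(), { NULL, NULL } };
        Node root = { gensym(), { &a, &b } };
        show(&root, 0);
        return 0;
    }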

>  Because so little of our effective knowledge is formalised, we learn
>  in social contexts, not from books.  I presume AI is full of relative
>  loners who have learnt more of what they publicly interact with from
>  books rather than from people.  Well I didn't, and I prefer 
>  interaction to reading.  

You presume an awful lot. Comments like that show the intellectual level
of your critique of AI.

> The question is, do most people WANT a computational model of human
> behaviour?  In these days of near 100% public funding of research,
> this is no longer a question that can be ducked in the name of
> academic freedom.  

Mr. Cockton, Nature does not give a damn whether or not people WANT a 
computational model of human behavior any more than it gave a damn
whether or not people wanted a heliocentric Solar System.

> I am always suspicious of any academic activity which has to request that it
> becomes a philosophical no-go area.  I know of no other area of activity which
> is so dependent on such a wide range of unwarranted assumptions.

AI is founded on only one basic assumption: that there are no models of
computation more powerful (in a well-defined sense) than the Turing machine,
and in particular that the brain is no more powerful a computational
mechanism than the Turing machine. If you have some scientific evidence
that this is a false assumption, please put it on the table.
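
And by "Turing machine" I mean nothing mystical -- the whole model fits on
a page. Here is a minimal simulator in C (a toy, of course: a one-state
machine that complements a bit string; the point is only that "a model of
computation" is this kind of precisely specified object):

    #include <stdio.h>
    #include <string.h>

    #define TAPE 32

    /* A transition: write a symbol, move the head (-1 left, +1
       right), and enter the next state.  next == -1 means halt. */
    typedef struct { char write; int move; int next; } Rule;

    /* Map a tape symbol to a table index: '0'->0, '1'->1, '_'->2. */
    static int sym(char c) { return c == '0' ? 0 : c == '1' ? 1 : 2; }

    int main(void) {
        char tape[TAPE + 1];
        memset(tape, '_', TAPE);      /* '_' is the blank symbol */
        tape[TAPE] = '\0';
        memcpy(tape, "1011", 4);      /* the input string */

        /* One state (state 0): complement each bit, move right,
           halt at the first blank. */
        Rule table[1][3] = {
            { {'1', +1,  0},          /* read '0': write '1' */
              {'0', +1,  0},          /* read '1': write '0' */
              {'_',  0, -1} }         /* read '_': halt      */
        };

        int state = 0, head = 0;
        while (state != -1) {
            Rule r = table[state][sym(tape[head])];
            tape[head] = r.write;
            head += r.move;
            state = r.next;
        }
        printf("%s\n", tape);         /* prints 0100 then blanks */
        return 0;
    }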

> My point was that MENTAL determinism and MORAL responsibility are incompatible.
> I cite the whole ethos of Western (and Muslim? and??) justice as evidence.

Western justice *presupposes* moral responsibility, but that in no way serves
*as evidence for* moral responsibility. Even if there is no moral
responsibility, there will still always be responsibility as a causal agent.
If a faulty electric blanket starts a fire, the blanket is not morally
responsible. That wouldn't stop anyone from unplugging it to prevent future
fires.

> If
> AI research has to assume something which undermines fundamental values, it
> better have a good answer beyond academic freedom, which would also justify
> unrestricted embryo research, forced separation of twins into controlled
> upbringings, unrestricted use of pain in learning research, ...

What sort of howling non sequitur is this supposed to be? Many people feel
that Darwinian evolution "has to assume something which undermines
fundamental values", but that is no excuse to hide one's head in the sand
and ignore evolution, or to cut funding for research into evolutionary
biology.

> > I regard artificial intelligence as an excellent scientific approach to the
> > pursuit of this ideal . . . one which enables me to test flights of my 
> > imagination with concrete experimentation.
> I don't think a Physicist or an experimental psychologist would agree
> with you. AI is DUBIOUS, because so many DOUBT that anyone in AI has an
> elaborated view of truth and falsehood in AI research. So tell me, as
> a scientist, how we should judge AI research?  In established sciences, the
> grounds are clear.  Certainly, nothing in AI to date counts as a controlled
> experiment, using a representative population, with all irrelevant variables
> under control.  Given the way AI programs are written, there is no way of even
> knowing what the independent variable is, and how it is being driven.  I don't
> think you know what experimental method is, or what a clearly formulated 
> hypothesis is either.  You lose your science badge.

Well, what kind of AI research are you looking to judge? If you're looking
at something like SOAR or ACT*, which claim to be computational models of
human intelligence, then comparisons of the performance of the architecture
with data on human performance in given task domains can be (and are) made.
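
Concretely, such a comparison can be as simple as correlating the
architecture's predicted task latencies against measured human ones. A
minimal sketch in C (the numbers are invented for illustration, not real
data from SOAR or ACT*; compile with -lm):

    #include <stdio.h>
    #include <math.h>

    /* Pearson correlation between a model's predicted task times
       and observed human times -- one crude measure of how well a
       computational model fits behavioral data. */
    static double pearson(const double *x, const double *y, int n) {
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx  += x[i];        sy  += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;
        double vx  = sxx - sx * sx / n;
        double vy  = syy - sy * sy / n;
        return cov / sqrt(vx * vy);
    }

    int main(void) {
        /* Hypothetical per-task latencies, in seconds. */
        double model[] = { 1.2, 2.5, 3.1, 4.8, 6.0 };
        double human[] = { 1.4, 2.2, 3.5, 5.1, 5.7 };
        printf("r = %.3f\n", pearson(model, human, 5));
        return 0;
    }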

If you are looking at research which attempts to perform tasks we usually
think of as requiring "intelligence", such as image understanding, without
claiming to be a model of human performance of the task, then one can ask:
to what extent does the work capture the underlying structure of the task?
How does the approach scale? How robust is it? And any of a number of other
questions.

> Don't you appreciate that free will (some degree of choice) is essential
> to humanist ideals?  Read about the Renaissance which spawned the Science on
> whose shirt tails AI rides.  Perhaps then you will understand your
> intellectual heritage.

Mr. Cockton, it is more than a little arrogant to assume that anyone who
disagrees with you is some sort of unread, unwashed social misfit, as you
do in the above quote and the earlier quote about the level of social
interaction of AI researchers. If you want your concerns about AI taken
seriously, then come down off your high horse.

Karl Kluge (kck@g.cs.cmu.edu)

People have opinions, not organizations. Ergo, the opinions expressed above
must be mine, and not those of CMU.