[comp.ai] Me and Karl Kluge

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (06/01/88)

In article <1792@pt.cs.cmu.edu> kck@g.gp.cs.cmu.edu (Karl Kluge) writes:
>>  Because so little of our effective knowledge is formalised, we learn
>>  in social contexts, not from books.  I presume AI is full of relative
>>  loners who have learnt more of what they publicly interact with from
>>  books rather than from people.  
>
>You presume an awful lot. Comments like that show the intellectual level
>of your critique of AI.
I also presume that comparative attitudes to book and social knowledge
are a measurable, and probably fairly stable, feature of someone's
make-up.  It would be intriguing to test the hypothesis that AI
researchers place more faith than other academic groups in the ability
of text (including programs) to capture social reality.  Now, does
this still have no intellectual respectability?
>
>Well, what kind of AI research are you looking to judge? If you're looking
>at something like SOAR or ACT*, which claim to be computational models of
>human intelligence, then comparisons of the performance of the architecture
>with data on human performance in given task domains can be (and are) made.
You have obviously missed my comments about work by John Anderson and
other psychological research.  If all AI were conducted this way,
there would be less to object to.
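
For concreteness, the kind of architecture-to-data comparison Kluge
describes can be sketched in a few lines.  The numbers below are
invented; the functional form, T = a*N^(-b), is the power law of
practice, a classic point of comparison for architectures like SOAR
and ACT*:

    # Fit the power law of practice, T = a * N**(-b), to (trial, time)
    # data by least squares in log-log space.  The data are invented
    # for illustration; a model's predicted curve would be fit the
    # same way and the exponents compared.
    import math

    human = [(1, 9.0), (2, 6.5), (4, 4.8), (8, 3.5), (16, 2.6)]

    xs = [math.log(n) for n, t in human]
    ys = [math.log(t) for n, t in human]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    a, b = math.exp(my - slope * mx), -slope
    print("T ~= %.2f * N^(-%.2f)" % (a, b))

Where the model's exponent disagrees with the human one, it is the
architecture, not the subject, that is wrong; that is the empirical
input I keep asking for.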

>If you are looking at research which attempts to perform tasks we usually
>think of as requiring "intelligence", such as image understanding, without
>claiming to be a model of human performance of the task, then one can ask
>to what extent does the work capture the underlying structure of the task?
>how does the approach scale? how robust is it? and any of a number of other
>questions.
OK then.  Can you point to an AI textbook that covers Task Analysis?
Can you point to work other than SOAR and ACT* where the Task Domain
has been formally studied before the computer implementation?  My
objection to much work in AI is that there has been NO proper study
of the tasks which the program attempts to simulate.  Vision research
generally has very good psychophysical underpinnings, and I accept
that my criticisms do not apply to that area either.  To supply one
example, note how the research on how experts explain came AFTER the
dismal failure of rule traces in expert systems to be accepted as
explanations.  See Alison Kidd's work on the unwarranted assumptions
behind much (early?) expert systems work.  One reason I did not
pursue a PhD in AI was that one potential supervisor told me that I
didn't have to do any empirical work before designing a system;
indeed, I was strongly encouraged NOT to do any empirical studies
first.  I couldn't believe my ears.  How the hell can you model what
you've never studied?  Fiction.
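
To make the rule-trace point concrete, here is a toy forward-chainer
(rules and code invented for this posting; no real system is this
small) showing what a rule trace offers as an "explanation":

    # Toy expert system: forward chaining over IF-THEN rules.
    # Its only available "explanation" is the list of rules fired.
    RULES = [
        ("R1", {"fever", "rash"}, "measles"),
        ("R2", {"measles"}, "notify_health_board"),
    ]

    def run(facts):
        trace = []
        changed = True
        while changed:
            changed = False
            for name, conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append(name)
                    changed = True
        return facts, trace

    facts, trace = run({"fever", "rash"})
    print(trace)   # ['R1', 'R2'], i.e. "R1 fired, then R2 fired"

A list of rule firings answers "how was it derived?", not "why should
I believe it?", which is roughly the gap the later research on expert
explanation set out to map.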

>Mr. Cockton, it is more than a little arrogant to assume that anyone who
>disagrees with you is some sort of unread, unwashed social misfit
When did I mention hygiene?  On "unread", this is a trivial charge to
prove: just read through the references in AAAI and IJCAI
proceedings.  AI researchers are not reading what educational
researchers are reading, something I cannot understand, as they are
both studying the same thing.  Finally, anyone who is programming a
lot of the time cannot be studying people as much as someone who
never programs.

I never said anything about being a misfit.  Modern societies are too
diverse for the word to be used without qualification.  Being part of
a subculture, like science or academia, is only a problem when it
prevents comfortable interaction with people from different
subcultures.  Part of the subculture of AI is the belief that the
intellectual tools of maths and physics transfer to the study of
humans.  Part of the subculture of the human disciplines is that they
do not.  I would be a misfit in AI; AI types could be misfits in a
human discipline.  I've certainly seen a lot of misanthropy and
"we're building a better humanity" in recent postings.  Along with
last year's debate over "flawed" minds, it's clear that many posters
to this group believe they can do a better job than whatever made us.
But what is it exactly that an AI is going to be better than?  No
image of man, no superior AI.  I wonder if that's why some AI people
have to run humanity down: it improves the chances of ELIZA being
better than us.

The point I have been making repeatedly is that you cannot study
human intelligence without studying humans.  John Anderson, his
paradigm partners, and vision research apart, there is a lot of AI
research which has never been near a human being.  Once again: what
the hell can a computer program tell us about ourselves?  And
secondly, what can it tell us that we couldn't find out by studying
people instead?
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

	     The proper object of the study of humanity is humans, not machines

smoliar@vaxa.isi.edu (Stephen Smoliar) (06/09/88)

I see that Gilbert Cockton is still judging the quality of AI by his
statistical survey of bibliographies in AAAI and IJCAI proceedings.
In the hope that the rest of us can agree on the speciousness of such
arguments, I shall try to take a more productive approach.
In article <1312@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>
>The point I have been making repeatedly is that you cannot study
>human intelligence without studying humans.  John Anderson, his
>paradigm partners, and vision research apart, there is a lot of AI
>research which has never been near a human being.  Once again: what
>the hell can a computer program tell us about ourselves?  And
>secondly, what can it tell us that we couldn't find out by studying
>people instead?

Let us consider a specific situation.  When we study a subject like
physics, there is general agreement that a good textbook must include
not only an exposition of fundamental principles but also a few
examples of solved problems.  Why are these examples of benefit to
the student?  It would appear that the student uses them as some sort
of model (perhaps the basis for analogical reasoning) when starting
on assigned problems; but how does he know when an example is the
right one to draw upon?  The underlying question is this:  HOW DOES
KNOWLEDGE OF SUCCESSFULLY SOLVED PROBLEMS ENHANCE OUR ABILITY TO
SOLVE NEW PROBLEMS?

Now, the question to Mr. Cockton is:  What do all those researchers
who don't spend so much time with computer programs have to tell us?
From what I have been able to discern, the answer is:  NOT VERY MUCH.
Meanwhile, there are a variety of AI projects which have begun to
address the questions of what constitutes experiential memory and how
it might be modeled.  I am not claiming they have come up with any
answers yet, but I see no more reason to rail against their attempts
than to attack the attempts of those who would not sully their
investigative efforts with such ugly artifacts as computer programs.
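
One computational reading of that capitalised question, as a minimal
sketch only (the feature encoding and similarity measure below are
invented for illustration; none of the projects alluded to works this
simply): retrieve the most similar solved problem and reuse its
method.

    # Minimal case-based sketch: store solved problems as (features,
    # method) pairs and retrieve by crude feature overlap.
    SOLVED = [
        ({"inclined_plane", "friction"}, "resolve forces along the plane"),
        ({"circuit", "two_loops"}, "apply Kirchhoff's voltage law"),
    ]

    def retrieve(new_features):
        # Similarity = size of the feature overlap, a deliberate
        # oversimplification.
        return max(SOLVED, key=lambda case: len(case[0] & new_features))

    _, method = retrieve({"inclined_plane", "pulley"})
    print(method)   # -> resolve forces along the plane

The hard part is exactly what the sketch assumes away: what counts as
a feature of a problem, and when an overlap is the right analogy
rather than a superficial one.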

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (06/10/88)

In article <43@aipna.ed.ac.uk> rjc@aipna.ed.ac.uk (Richard Caley) writes:
>In <1312@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes
>> work other than SOAR and ACT* where the Task Domain has been formally
>> studied before the computer implementation?  
>Natural language processing. Much ( by no means all ) builds on the work
>of some school of linguistics.
and ignores most of the work beyond syntax :-)  Stick to the
computable, not the imponderable.  Hmm, pragmatics.  I know there is
AI work on pragmatics, but I don't know if a non-computational
linguist working on semantics and pragmatics would call it advanced
research work.
>One stands on the shoulders of giants. Nobody has time to research
>their subject from the ground up.
But what about when there is no ground?  Then what?  Hack first, or
study?  Take intelligent user interfaces: hacking came first, well
before any study of what problems real users face on real tasks in
real applications (the one exception, the Interlisp-D interface, was
an end-user project!).

>According to your earlier postings, if ( strong ) AI was successful it
>would tell us that we have no free will, or at least that we can not assume
>we have it. I don't agree with this but it is _your_ argument and something
>which a computer program could tell us.
Agreed.  Anything ELSE, though, that may be useful?  (I accept that
proof of our logical (worse than physical) determinism would be a
revelation.)

>What do the theories of physics tell us that we couldn't find out by
>studying objects?
Nothing, but as these theories are based on the study of objects, we
know that if we were to repeat the study, we would confirm the
theories.  Strong AI, on the other hand, conducts NO study of people,
and thus if we studied people in an area colonised at present by
hackers only, we have no reason to believe that we would confirm the
model in the hacker's mind.  There is no similarity at all between
the theories of physics and computational models of human behaviour;
it just so happens that some (like ACT*) do have an empirical input.
The problem with strong AI is that you don't have to have this input.
No one would dare call something a theory in physics which was based
solely on one individual's introspection, constrained only by their
programming ability.  In AI, it seems acceptable (Schank's work, for
example; can anyone give me references to substantive critiques of it
from within AI?  I know of ones by linguists).

>>     The proper object of the study of humanity is humans, not machines
>Well, there go all the physical sciences, botany, music, mathematics . . .
And there goes your parser too.  "Of humanity" attaches to "the
study".  Your list is not such a study; it is a study of the natural
world and of some human artifacts (music, mathematics).  These are
not studies of people, and they thus tell us nothing essential about
ourselves, except that we can make music and maths, and that we can
find structures and properties for these artifacts.  A study of
artifacts, cognitive, aesthetic or otherwise, is not necessarily a
study of humanity.  The latter will embrace all artifacts, not as
objects in themselves, but for their possible meanings.
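
The ambiguity is mechanical, in fact: a head noun followed by a chain
of "of"-phrases has more than one bracketing, and a parser has to
choose.  A toy enumeration (invented for this posting, nobody's real
parser) makes the two readings explicit:

    # Enumerate all binary bracketings of nouns joined by "of".
    # A toy illustration of attachment ambiguity only.
    def readings(nouns):
        if len(nouns) == 1:
            return [nouns[0]]
        out = []
        for i in range(1, len(nouns)):
            for left in readings(nouns[:i]):
                for right in readings(nouns[i:]):
                    out.append("(%s of %s)" % (left, right))
        return out

    for r in readings(["the object", "the study", "humanity"]):
        print(r)
    # (the object of (the study of humanity))    <- my reading
    # ((the object of the study) of humanity)    <- not my reading

My signature intends the first bracketing: it is the study of
humanity, the human disciplines, whose proper object is humans.
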
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

	     The proper object of the study of humanity is humans, not machines

rjc@aipna.ed.ac.uk (Richard Caley) (06/13/88)

In <1342@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>In article <43@aipna.ed.ac.uk> rjc@aipna.ed.ac.uk (Richard Caley) writes:

>>Natural language processing...builds on the work of linguistics.

>and ignores most of the work beyond syntax :-) 

Some does.

>Hmm, pragmatics.  I know there is
>AI work on pragmatics, but I don't know if a non-computational
>linguist working on semantics and pragmatics would call it advanced
>research work.

The criterion for it being interesting would not necessarily be
explaining something new; explaining something in a more extensible/
elegant/practical/formal way ( choose your own hobby horse ) is just
as good.

>But what about when there is no ground?  Then what?  Hack first, or study?

Maybe my metaphor was not well chosen.  Rather than building up, it
might be better to see the computational work as building down,
trying to ground its borrowed theories ( of language or whatever ) in
something other than their own symbols and/or set theory.

Your question becomes: what when there is nothing to hang your work
from?  In that case you should go out and do the groundwork or,
better, get someone trained in the empirical methods of that field to
do it.

>(the one exception, the Interlisp-D interface, was an end-user project!)

ARGH, don't even mention it; it just lost a day's work for me.

>(I accept that proof of our logical (worse than physical) determinism
>would be a revelation)

Well, physical determinism wouldn't be a revelation to many of us who
assume it already.  I don't know your definition of logical
determinism, so I can't say whether that is worse.  If it is meant to
apply to all possible outcomes of strong AI, it can't imply lack of
free will ( read as the property of making your own decisions, rather
than exclusion from causality ), so what does it imply that is so
shocking?

>>What do the theories of physics tell us that we couldn't find out by
>>studying objects?
>Nothing.

But they do.  Studying objects just tells you what has happened.  A
(correct) theory can be predictive, can be explanatory, and can allow
one to deduce properties of the system under study which are not
derivable from the data.

>Strong AI, on the other hand, conducts NO study of people,

Strong AI does not require the study of people; it is not
"computational psychology".  AI workers study people in order to
avoid reinventing wheels.

>>>     The proper object of the study of humanity is humans, not machines

>And there goes your parser too.

Oops.  I'm afraid I read it as parallel to "The proper study of man is man".

It does seem to be something of a hollow statement; I can't think of
many people who study machines as a study of humanity ( except in the
degenerate case, if one believes humans are machines ). Some people
use machines as tools to study human beings, some study and build
machines.

jeff@aiva.ed.ac.uk (Jeff Dalton) (07/06/88)

In article <1342@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
  work on pragmatics, but I don't know if a non-computational linguist
  working on semantics and pragmatics would call it advanced research work.

Well, if you don't know ...