[comp.ai] The epistemology of the common sense world

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (11/08/88)

This is a response to three digest postings.  I'm posting it to the 
net to avoid clogging up people's mailboxes, since it's so long.
I'll leave the digest moderator to decide what, if anything, should be
mailed out on the digest.

In article <4pcvX@SAIL.Stanford.EDU> JMC@SAIL.STANFORD.EDU (John McCarthy) writes:
>(3) through studying the tasks presented in the achievement of
>goals in the common sense world.
>
>No one of the approaches can be excluded by a priori arguments,
Nor can their potential success be assumed. As with all study, some
standards are required for distinguishing quality and scholarship from
amateurish and slipshod drivel.  Given that 
>AI is the third approach.
we require enlightenment on how AI workers are trained to study tasks
in the common sense world.  Task analysis and description is an
established component of Ergonomics/Human Factors.  I am curious as to
how AI workers have absorbed the experience here into their work, so
that unlike old-fashioned systems analysts, they automate the real
task rather than an imaginary one which fits into the machine.

If I were to choose 2 dozen AI workers at random and ask them for an
essay on methodological hygiene in task analysis and description, what
are my chances of getting anything even approaching the sophomore psychology
level of a school which takes non-positivist psychology seriously (i.e.
one which does not equate methodology solely with statistics)?

>but most AI researchers stick to experimental programming & also AI theory.
and not, I take it, how to understand what they are trying to program?

>I have left out sociology, because I think its contribution will be peripheral.
Far from it, its contribution is already central.  I suggest you read
some sociology of knowledge/cognitive sociology before dismissing the
importance of methods of social investigation in the description of
common sense tasks.  There are a number of cognitive sociologists and
anthropologists at UCSD, for one, who are near enough to put the case
for the proven value of the social perspective on common sense knowledge.

>In my opinion the core of AI is the study of the common sense world
>and how a system can find out how to achieve its goals.
Someone else is already on this ground, although they do not stick
rigidly to system modelling of behaviour.  Why not share this study
rather than work in the isolation of LISP?

>Achieving goals in the common sense world involves a different kind of
>information situation than science has had to deal with previously.
Again, the sociology of knowledge has been here already.  However, its
response is very different from AI.  It is true that 
>This fact causes most scientists to make mistakes in thinking about
>it.  Some pick an aspect of the world that permits a conventional
>mathematical treatment and retreat into it.  The result is that
>their results often have only a metaphorical relation to intelligence...

But is the replacement of continuous mathematics by discrete
mathematics really a sign of progress?  Is it really the true
alternative?  Again, a sociological approach could see AI as just
another version of scientific elitism, trying, as classic science has,
to impose a marginal view of reality on the common sense majority.  AI
may model common sense, but it will never BE common sense.  What is
true of classical calculus-based physics will be true of AI.  There is
no real shift here.

>Why does the common sense world demand a different approach?  Here are
>some reasons. ...
Short list, no reference to established work on common sense knowledge,
especially from education (my original discipline).  Talk to an
enlightened science teacher about curriculum design (I assume,
recklessly, that I can imagine what the American version of this would
be; perhaps there are none in the European sense).

>Therefore, some people regard the common sense world as unfair
>and refuse to do science about it.
Refuse to study it, or refuse to be positivists, or refuse to use
calculus.  What is "doing science"?  People have been studying this
common sense world for longer than AI workers have been stumbling
towards this position.

In a previous article, Ray Allis writes:
> I don't believe digital computers, or rather physical symbol systems,
> can be intelligent.  It's more than difficult, it's not possible.
Agreed.  These arguments, though, don't die gracefully :-)

> No, "I said what I meant, and I meant what I said".
We don't all speak the same English, remember.

> "Intelligent" is an adjective, usually
> modifying the noun "behavior", and it does describe something measurable;
Come off it, please get your understanding of psychometrics up to
scratch before you make this sort of assertion.  Psychometricians
*BELIEVE* that intelligence can be measured, and have been trying to
get a good measure of it for decades.  By psychometric standards, good
measures now exist; the problem, as you should know, is what the
measure is measuring.  Again, there may be cultural differences here,
as many Europeans are not as keen on testing as Americans (especially
the snake oil psychology of personnel consultants).

To sum up, this is a very old chestnut; it is cracked and smelly, and
no-one who has read around the subject believes that intelligence, as
redefined daily in social interaction, is being measured by anything in
the battery of the psychometricians (battery of tests, yup, always
thought they had the subtlety of a howitzer).

> Common sense does not depend on social consensus.
Forget "consensus" if this smacks of some organised decision making
process.  But common sense cannot be COMMON if it does not spread evenly
across members of a society.

> those behaviors which nearly everyone acquires in the course of existence,
They do not.  Read about socialisation and the effect of neglect in
childhood.  Most common sense is learned, and I take a Piagetian view here
where learning is a combination of assimilation and conscious adaptation.  

> such as reluctance to put your hand into the fire.
Learnt, and yes universal, but far, far too restrictive a view of
common sense.  Is correct past-tense or plural formation then not
common sense?  Are you really restricting common sense to knowledge
of the physical world?

Finally, David H. West writes:

>>There are tasks in the world.  Computers can assist some of these
>>tasks, but not others.  Understanding why this is the case lies at the
>>heart of proper human-machine system design.  The problem with hard AI is
>>that it doesn't want to know that a real division between automatable
>>and unautomatable tasks does exist in practice.  

>You seem to believe that this boundary is fixed.  Well, it will be
>unless people work on moving it.
Not "unless" at all.  It may remain fixed despite them!  Actually, I
don't think the boundary is fixed.  In my own area of HCI, tasks are
constantly being transferred from the operator to the machine.  The 
difference between HCI and AI is that AI types don't seem to care about
the resultant task allocation, or the quality of the automated tasks
FROM THE USER'S PERSPECTIVE.  I await the correcting flames.

>>many AI workers so damned ignorant of the problems with
>>operationalising definitions of intelligence, as borne out by nearly a
>>century of psychometrics here? 

>Why were those who wanted instead to
>measure forces so damned ignorant of the problems with the
>philosophical approach?   Maybe they weren't, and that's *why* they
>preferred to measure forces.

Way off the mark.  The people I am talking about *DO* prefer to measure
intelligence, and they have failed.  OK?  You seem to think the AI
brigade are doing the hard positivist science.  My complaint is that
they are ignorant of the attempts at hard positivist science, not that
they do not philosophise all day.  Your "yeah, let's do it" attitude
does not export as well as Coca-Cola (TM).

>>Such social constructs cannot have a machine embodiment, nor can any
>Cannot? Why not?  "Do not at present" I would accept.
There are at least 23 senses of the verb "to be".  The sense in which I
am using it is that "social" means "in a society".  Stick it in a machine
and it is no longer social.  It is ossified, frozen and dead: Duane's
essay on how we open windows, cast for eternity in the marble of LISP,
a passable photograph, but a lousy situation.

>*A* relevant discipline.  AI is at present more concerned with the
>structure and machine implementation of common sense than with its
>detailed content.  
And, as I have long argued, will get it hopelessly wrong because of
its cupboard generation of reality.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

ok@quintus.uucp (Richard A. O'Keefe) (11/19/88)

In article <1840@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>we require enlightenment on how AI workers are trained to study tasks
>in the common sense world.  Task analysis and description is an
>established component of Ergonomics/Human Factors.  I am curious as to
>how AI workers have absorbed the experience here into their work, so
>that unlike old-fashioned systems analysts, they automate the real
>task rather than an imaginary one which fits into the machine.

>If I were to choose 2 dozen AI workers at random and ask them for an
>essay on methodological hygiene in task analysis and description, what
>are my chances of getting anything ...

I'm far from sure that I could _write_ such an essay, but I'd very much
like to _read_ it.  (I was hoping to find topics like that discussed in
comp.cog-eng, but no such luck.)  Could you give us a reading list, please?

I may have misunderstood him, but Donald Michie talks about "The Human
Window", and seems to be saying that his view of AI is that it uses
computers to move a task in complexity/volume space into the human window
so that humans can finish the job.  This would suggest that MMI and that
sort of AI should have a lot in common, and that a good understanding of
task analysis would be very helpful to people trying to build "smart tools".

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (11/22/88)

In article <705@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>I'm far from sure that I could _write_ such an essay, but I'd very much
>like to _read_ it.  (I was hoping to find topics like that discussed in
>comp.cog-eng, but no such luck.)  Could you give us a reading list, please?

Just go to the human factors/ergonomics section of your
bookshop/library.

Better still, talk to an ergonomist/human factors expert.

Conversation is always superior to autodidactics.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert