[comp.ai] Understanding, utility, rigour

gilbert@glasgow.UUCP (06/06/88)

In article <3c671fbe.44e6@apollo.uucp> nelson_p@apollo.uucp writes:
>
>  Because AI would like to make some progress (for a change!).
Does it really think that ignoring the failures of others GUARANTEES
it success, rather than even more dismal failure?  Is there an argument
behind this?

>  With the exception of some areas in physiological pyschology,
> the field is not a science.  
What do we mean when we say this?  What do you mean by 'scientific'?
I ask because there are many definitions, many approaches, mostly polemic.

>  Its models and definitions are simply not rigorous enough to be useful.
Lack of rigour does not imply lack of utility.  Having applied many of
the models and definitions I encountered in psychology as a teacher, I
can say that I certainly found my psychology useful, even the behaviourism
(it forced me to distinguish between things which could and could not
be learned by rote; the former are good CAI/CAL fodder).

Understanding and rigour are not the same thing.  Nor is 'rigour' one
thing.  The difference between humans and computers is what can
inspire them.  Computers are inspired by mechanistic programs, humans
by ideas, models, new understandings, alternative views.  Not all are,
of course, and too much science in human areas is directed towards the
creation of cast-iron understanding for the uninspired dullard.

>  When you talk about an 'understanding of humanity' you clearly 
>  have a different use of the term 'understanding' in mind than I do.
Good, perhaps you might appreciate that it is not all without value.
In fact, it is the understanding you must use daily in all the
circumstances where science has not come to guide your actions.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

	     The proper object of the study of humanity is humans, not machines