[comp.ai] AI a proper science? - The Cockton debate

cam@edai.ed.ac.uk (Chris Malcolm cam@uk.ac.ed.edai 031 667 1011 x2550) (04/08/89)

In article <2705@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:

> ... methodological straitjacket ... narrow .... 
>... Hence AI texts are far less 'liberal' than
>psychology ones - the latter consider opposing theories and paradigms.

We (Dept of AI, Edinburgh) teach AI to undergrads and postgrads.  A
common complaint, especially from those with a strong background in a
mature science which can afford to be dogmatic about its paradigms (such
as CS :-) :-)), is that "there are too many competing theories in AI".
They want to know "the truth", and get upset by being asked to consider
(as yet) undecidable alternatives, exercise judgement, master competing
arguments, etc..

It's true that followers of some of the dominant AI paradigms sometimes
behave as though it were patently obvious that any other opinion was
trivial nonsense; but doesn't this happen in psychology too?

Gilbert's remarks make sense if they are considered to be directed
at that subset of AI which presumes that AI is entirely concerned with
writing computer programs which mimic isolated human competences such as
planning, diagnosing, or translating, and presumes the human mind to be
approximately modular in the same sort of way.  But that's just one
strand of AI, albeit the one responsible for the commercial success and
media notoriety of 'expert systems'.

A practical difficulty faced by AI as a distinct discipline is that it
is as yet improperly institutionalised, by which I mean that there are
very few Depts of AI.  Most AI is carried on under the umbrella of Depts
of CS, EE, Psychology, Philosophy, etc., often consciously deciding to
pursue only certain parts of the field, and inevitably coloured by what
the paradigms of the hosting dept permit to be considered as "proper" AI.
Consequently AI literature is peculiarly fragmented, most AI workers
are geographically separated from other components of the field, and so
personal contact at AI conferences and workshops is more important
(because of this institutional separation) than in other disciplines.


	There's more to AI, Horatio ...

-- 
Chris Malcolm    cam@uk.ac.ed.edai   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK		

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (04/10/89)

In article <330@edai.ed.ac.uk> cam@edai (Chris Malcolm) writes:
>common complaint, especially from those with a strong background in a
>mature science which can afford to be dogmatic about its paradigms (such
>as CS :-) :-)), is that "there are too many competing theories in AI".
>They want to know "the truth", and get upset by being asked to consider
>(as yet) undecidable alternatives, exercise judgement, master competing
>arguments, etc..

Perhaps they were expecting to learn some practical computing skills,
and were disappointed.  I don't think they come on the course
expecting to have to read and analyse.

If you were teaching psychology or philosophy (under)grads, the
complaints might change.  They might ask for the range of competing
arguments to be expanded.

I'd be interested in pointers to the holy wars in Strong AI.  
I'm only aware of logicists v. KBS v. connectionism.  This hardly
covers the full range of theories of 'mind', which in turn are but a
subset of theories of human agency/behaviour.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert