[comp.ai.digest] Question on arguments against AI

ADLASSNI@AWIIMC11.BITNET ("Adlassnig, Peter") (03/05/88)

Is it true that there are two main arguments against the feasibility
of AI?

1) The philosophical and cognitive science argument
   (e.g., Dreyfus, Searle)

2) The computability and complexity theory argument
   (e.g., Lucas(?))

Could someone point out some relevant literature on the second
point, please?

Thank you in advance.

Klaus-Peter Adlassnig
Department of Medical Computer Science
Garnisongasse 13
A - 1090 Vienna, Austria
email: ADLASSNI at AWIIMC11.BITNET

gilbert@hci.hw.ac.UK (Gilbert Cockton) (03/30/88)

In article <8803051150.AA05897@ucbvax.Berkeley.EDU> ADLASSNI@AWIIMC11.BITNET
("Adlassnig, Peter") writes:
>
>Is it true that there are two main arguments against the feasibility
>of AI?
> ....
Forget categories for the moment and come at it bottom-up.
Within formal semantics there is a whole set of problems that reduces
confidence in the comprehensiveness of computational models of human
beliefs and behaviour.

Formal semantics is largely AI off-line, and has an intellectual and
scholarly tradition which pre-dates the LISP era of AI.  I suggest you
pick up the Cambridge University Press catalogue and chase up any
Linguistics text with 'semantics' in the title.  Most of these monographs
and texts have consensus examples of problems for mathematical accounts
of meaning, especially ones based on two-valued logics.  Everyone in
NLP should know about them.  Basically, AI won't succeed until it
cracks these problems, and there is no reason to believe that it
will ever get anywhere near cracking them.  The gap between
mathematical accounts and reality remains too large.
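One consensus example of the kind those monographs catalogue is presupposition failure: under a strictly two-valued logic, "The present king of France is bald" must come out true or false, yet on Strawson's analysis it has no truth value at all, because the definite description fails to refer. A minimal sketch of a three-valued evaluation, assuming a toy model and illustrative predicate names (none of this is from the post itself):

```python
# Illustrative sketch: a Strawsonian treatment of definite descriptions,
# showing where a two-valued account of meaning breaks down.
# The model and predicate names are invented for the example.

from enum import Enum

class TV(Enum):
    TRUE = "true"
    FALSE = "false"
    GAP = "gap"   # truth-value gap: the presupposition fails

# Toy model: which individuals satisfy which predicates.
MODEL = {
    "king_of_france": set(),              # no such individual
    "prime_minister_of_uk": {"thatcher"},
    "bald": {"picard"},
}

def the(description, nuclear):
    """Evaluate 'The DESCRIPTION is NUCLEAR'.

    The definite article presupposes a unique referent; if the
    description picks out anything other than exactly one
    individual, the sentence is neither true nor false (GAP).
    """
    referents = MODEL[description]
    if len(referents) != 1:
        return TV.GAP                     # presupposition failure
    (x,) = referents
    return TV.TRUE if x in MODEL[nuclear] else TV.FALSE

print(the("king_of_france", "bald"))        # GAP, not FALSE
print(the("prime_minister_of_uk", "bald"))  # FALSE
```

A purely two-valued system is forced to call the first sentence false, which wrongly makes "The king of France is not bald" true; the gap value is one standard way the semantics literature records that mismatch.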
-- 
Gilbert Cockton, Scottish HCI Centre, Heriot-Watt University, Chambers St.,
Edinburgh, EH1 1HX.  JANET:  gilbert@uk.ac.hw.hci   
ARPA: gilbert%hci.hw.ac.uk@cs.ucl.ac.uk UUCP: ..{backbone}!mcvax!ukc!hci!gilbert