NICK@AI.AI.MIT.EDU (Nick Papadakis) (05/28/88)
To: comp-ai-digest@ukc.ac.uk
Path: glasgow!gilbert
From: Gilbert Cockton <mcvax!cs.glasgow.ac.uk!gilbert@uunet.UU.NET>
Newsgroups: comp.ai,comp.ai.digest
Subject: Re: AIList Digest V7 #6 Reply to McCarthy from a minor menace
Date: Thu, 26 May 88 05:34 EDT
References: <8805241911.AA19163@BLOOM-BEACON.MIT.EDU>
Reply-To: Gilbert Cockton <mcvax!cs.glasgow.ac.uk!gilbert@uunet.UU.NET>
Organization: Comp Sci, Glasgow Univ, Scotland
Lines: 112

>There are three ways of improving the world.
>(1) to kill somebody
>(2) to forbid something
>(3) to invent something new.

This is a strange combination indeed. We could at least add (4) to understand ourselves. That is not something I would ever put in the same list as murder, but I'm sure there's a cast-iron, irrefutable logical reason behind it.

But perhaps this is what AI is doing: trying to improve our understanding of ourselves. Yet it may fail to do so because of (2): it forbids something, namely any approach, any insight, which does not have a computable expression. This, for me, is anathema to academic liberal traditions, and thus

>as long as Mrs. Thatcher is around; I wouldn't be surprised if
>Cockton could persuade Tony Benn.

is completely inaccurate. Knowledge of contemporary politics needs to be added to the AI ignorance list. Mrs. Thatcher has just cut a lot of IT research in the UK, and AI is one area which is going to suffer. Tony Benn, on the other hand, was a member of the government which backed the Transputer initiative. The Edinburgh Labour council has used the AI department's sign in some promotional literature for industries considering locating in Edinburgh. They see the department's expertise as a strength, which it is. Conservatives such as Thatcher look for immediate value for money in research. Socialists look for jobs. Academic liberals look for quality.

I may only have myself to blame if this has not been realised, but I have never advocated an end to all research which goes under the heading of AI.
I use some of it in my own research, and would miss it. I have only sought to attack the arrogance of the computational paradigm, the "pure" AI tradition where tinkerers play at the study of humanity. Logicians, unlike statisticians, seem to lack the humility required to serve other disciplines rather than try to replace them. There is a very valuable role for discrete mathematical modelling in human activities, but like statistics, this modelling is a tool for domain specialists and not an end in itself. Logic and pure maths, like statistics, are good servants but appalling masters.

>respond to precise criteria of what should be suppressed

Mindless application of the computational paradigm to
a) problems which have not yielded to stronger methods;
b) problems which no other paradigm has yet provided any understanding of.

For b), recall my comment on statistics. If no domain specialism has any empirical corpus of knowledge, AI has nothing to test itself against. It is unfalsifiable, and thus likely to invent nothing. On a), no one in AI should be ignorant of the difficulties in relating formal logic to ordinary language, never mind non-verbal behaviour and kinaesthetic reasoning. AI has to make a case for itself based on a proper knowledge of existing alternative approaches and their problems. It usually assumes it will succeed spectacularly where other very bright and dedicated people have failed (see the introduction to Winograd and Flores).

>how they are regarded as applying to AI

"Pure" AI is the application of the computational paradigm to the study of human behaviour. It is not the same as computational modelling in psychology, where empirical research cannot be ignored.
AI, by isolating itself from forms of criticism and insight, cannot share in the development of an understanding of humanity, because its raison d'etre, the unquestioning adherence to a single paradigm, without self-criticism, without a humble relationship to non-computational paradigms, prevents it ever disappearing in the face of its impotence.

>and what forms of suppression he considers legitimate.

It may be partly my fault if anyone has thought otherwise, but you should realise that I respect your freedom of association, speech and publication. If anyone has associated my arguments with ideologies which would sanction repression of these freedoms, they are (perhaps understandably) mistaken. There are three legitimate forms of "suppression":

a) freely willed diversion of funding to more appropriate disciplines;
b) running down of AI departments, with their groups redistributed across the established human disciplines and service research placed in maths. This is how a true discipline works. It leads to proper humility, scholarship and eclecticism;
c) proper attention to methodological issues (cf. the Sussex tradition), which will put an end to the sillier ideas. AI needs to be more self-critical, like a real discipline.

Social activities such as (a) and (b) will only occur if the arguments with which I agree (they are hardly original) get the better of "pure" AI's defence that it has something to offer (in which case, answer that guy's request for three big breaks in AI research; you're not doing very well on this one). It is not so much suppression as withdrawal of encouragement.

>similar to Cockton's inhabit a very bad and ignorant book called "The Question
>of Artificial Intelligence" edited by Stephen Bloomfield, which I
>will review for "Annals of the History of Computing".

Could we have publishing information on both the book and the review, please? And why is it that AI attracts so many bad and ignorant books against it?
If you dive into AI topics, don't expect an easy time. Pure AI is attempting a science of humanity, and it deserves everything it gets. Sociobiology and behaviourism attracted far more attention. Perhaps it's AI's turn. Every generation has its narrow-minded theories which need broadening out. AI is forming an image of humanity. It is a political act. Expect opposition. Skinner got it; so will AI.

>The referee should prune the list of issues and references to a size that
>the discussants are willing to deal with.

And, of course, encoded in KRL! Let's not have anything which takes effort to read, otherwise we might as well just go and study instead of program.

>The proposed topic is "AI and free will".

Then AI and knowledge representation, then AI and hermeneutics (anyone read Winograd and Flores properly yet?), then AI and epistemology, then ...
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs        <europe>!ukc!glasgow!gilbert

The proper object of the study of humanity is humans, not machines