mckee@CORWIN.CCS.NORTHEASTERN.EDU (07/22/87)
In AIList Digest v5 #171, July 6, 1987, Don Norman <norman%ics@sdcsvax.ucsd.edu> wrote:

> [Here's why] many of us otherwise friendly folks in the sciences that
> neighbor AI [are] frustrated with AI's casual attitude toward theory:
> "AI is not a science and its practitioners are woefully untutored in
> scientific method."

[ 15 lines deleted ]

> AI worries a lot about methods and techniques, with many books and
> articles devoted to these issues.  But by methods and techniques I
> mean such topics as the representation of knowledge, logic,
> programming, control structures, etc.  None of this method includes
> anything about content.  And there is the flaw: nobody in the field of
> Artificial Intelligence speaks of what it means to study intelligence,
> of what scientific methods are appropriate, what empirical methods are
> relevant, what theories mean, and how they are to be tested.  All the
> other sciences worry a lot about these issues, about methodology,
> about the meaning of theory and what the appropriate data collection
> methods might be.  AI is not a science in this sense of the word.

[ 22 more lines deleted ]

I think he's found an issue of critical importance here, so I'm going to pull it out of context even further and repeat it: "nobody in the field of Artificial Intelligence speaks of what it means to *study* intelligence" (my emphasis).  No wonder those of us outside the field have trouble figuring out what AI is really about.

My impression is that AI researchers try to study intelligence by building artifacts that will make a convincing show of intelligent behavior.  This might be why books on AI methods are all about sophisticated representations and fancy program structures - they're techniques for building more complex (hopefully more intelligent) programs.  But this is nearsighted.  Intelligence is the *difference* between unintelligent and intelligent behavior.  The study of intelligence begins when the programming stops.
And on what to do then, the AI textbooks are silent.  Now I don't want to spend time talking about the consequences of this failure; Don did that much better than I can.  (However, I can't resist throwing in my excuse: programming is fun; science is hard, often boring, work.  Science is far more rewarding, though.)

What I'm going to discuss in the rest of this note stems from his remark that AI workers are "woefully untutored in scientific method".  Assuming for the purposes of discussion that we know enough about intelligence to make principled distinctions between it and stupidity (counterintelligence?), what would the scientific study of intelligence look like?

One way of answering this question is to look at some of the enterprises that claim to be scientific, but aren't.  The main distinction in the list below is between those fields that are unarguably sciences and those that fail to be scientific in one way or another.  True sciences, the authentic, natural ones, are fields like astronomy, geology, biology, physics, or chemistry.  False sciences are harder to characterize, but here goes.  Below is a list of examples of different claimants to the name "science"; mostly impostors, all of them can be called "quasi-sciences".  By looking at them, we can gain some sense of what qualities are necessary for real sciences, since the quasi-sciences don't have them.

* Fraudulent sciences: Creation Science, Lysenkoism, Scientology
  (The most generous thing I can say about these is that they appear
  to proceed by trusting exceptional, one-of-a-kind reports, and
  denying persistent, repeated, quantitative, skeptical observations.
  In rhetoric this is called "appeal to authority.")

* Trivial sciences: Clairol Science, barbeque science, accelerator science
  (Clairol Science has discovered a new way to make your hair silkier
  and more full-bodied.  Barbeque science has conclusively determined
  that mesquite smoke is superior to hickory smoke.  We need to build
  the superconducting supercollider so America won't fall behind in
  accelerator science.)

* Semi-sciences: Theoretical Physics, Descriptive Linguistics
  (Complementary halves of their respective fields.)

* Interdisciplinary Sciences: Materials Science, Neuroscience
  (Characterized by their subject matter not yielding coherently
  to any single experimental technique or theoretical paradigm.)

* Artifact Sciences: Economics, Political Science, Anthropology
  (Herbert Simon's "sciences of the artificial" - these study
  artifacts of human society; without civilization, they wouldn't
  exist.  However, civilization is big and complex enough that
  techniques developed to deal with natural phenomena give useful
  insights.)

* Synthetic Sciences: Mathematics, Computer Science
  (These study the consequences of small sets of fundamental concepts.
  Mathematics under Russell & Whitehead and Bourbaki has been "nothing
  but" an incredibly vast and elegant elaboration of set theory, while
  [I claim with a certain trepidation] the fundamental basis of the
  scientific part of computer science lies in the elaboration of the
  consequences of the notion of an algorithm.)

The authentic, natural sciences, on the other hand, are the body of analytic, experimental studies of phenomena that go on whether or not the experimenter is there to observe them [philosophers can complain about "naive realism" - I'll confess to the realism, but not the naivete], and the results, conclusions, and theoretical relations that tie the studies together.  The key concepts here are "experimental" and "objective".  If a researcher (or a team of them) isn't doing experiments on some external phenomenon, then it ain't real science.

What do you get from real science?  Reality.  Not wishful thinking, not hallucinations, not mythology, not common sense.  (Strictly speaking, what you get is the most compact model of reality consistent with the most reliable, most detailed, widest-ranging set of observations.)
Uncommon sense.

What you don't get is completeness, or even closure.  First of all, there's too much knowledge, as anyone with a Ph.D. in a natural science will tell you.  Second of all, the universe isn't closed under observation: there's always more detail to be examined, further frontiers to be explored, greater complexities to be explained.  And most exciting of all, there's the possibility of revolution - that a new model will explain more data, resolve old inconsistencies, or be statable more succinctly, hopefully all at once.

The natural sciences generate an interconnected web of explanations that should contain a place for AI, if AI is a science.  It's in this explanatory web that people claim to see the bugaboo of reductionism (without which no discussion of scientific method would be complete).  Stripped of the argumentative mumbo-jumbo that keeps philosophers in business, a reductionist would claim that a pile of parts on the floor is equivalent to an assembled machine, while a holist would claim that the parts are irrelevant to any description of the machine.  Both views are incomplete, but there is indeed an ordering by "is explained in terms of" that reductionists have grabbed onto.  Because it's only a partial ordering, I'd like to borrow a term from evolutionary biology and suggest that scientific knowledge has the same kind of familial, clade structure as do charts of the genetic relations among organisms.  Reading "<--" as "is used to explain", here is one path through a cladistic epistemology:

  Particle Physics <-- Condensed-matter Physics <-- Quantum Chemistry
  <-- Organic Chemistry <-- Molecular Biology/Genetics
  <-- Developmental Biology <-- Neuroscience <-- Ethology
  <-- Psychology <-- Cognitive Science <-- Mathematics

I would put intelligence in at the same level as mathematics.  Congratulations!  Scientific AI would be among the most complex of sciences.  However, in reality the picture isn't this clean.
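The partial ordering above can be sketched as reachability in a directed graph; here is a minimal sketch in modern Python, where the field names, the edge set, and the extra cross-level "shortcut" edge are my own illustrative choices rather than anything canonical:

```python
# A toy model of the "is used to explain" relation, stored as a
# directed acyclic graph.  explains[a] lists the fields that a is
# used to explain.
explains = {
    "Particle Physics": ["Condensed-matter Physics"],
    "Condensed-matter Physics": ["Quantum Chemistry"],
    "Quantum Chemistry": ["Organic Chemistry"],
    "Organic Chemistry": ["Molecular Biology/Genetics"],
    # A shortcut across levels: manipulating genetic structure can
    # yield a directly observable behavioral (ethological) result.
    "Molecular Biology/Genetics": ["Developmental Biology", "Ethology"],
    "Developmental Biology": ["Neuroscience"],
    "Neuroscience": ["Ethology"],
    "Ethology": ["Psychology"],
    "Psychology": ["Cognitive Science"],
}

def can_explain(a, b):
    """True if there is a chain of 'is used to explain' links from
    a up to b.  Reachability gives a partial order, not a total one:
    some pairs of fields are simply incomparable."""
    stack, seen = [a], set()
    while stack:
        node = stack.pop()
        if node == b:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(explains.get(node, ()))
    return False
```

So can_explain("Particle Physics", "Psychology") holds while the reverse does not, which is exactly the asymmetry the reductionists have grabbed onto.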
Aside from those sciences that aren't in a direct explanatory line to intelligence, there are shortcuts among levels, due to the logic of experimental science, that make it possible to do things like manipulate genetic structure and get a behavioral result.  But this note is already too long to go into this further, and I've barely alluded to the formal role of the hypothesis.

Hope this helps,
	- George McKee
	  College of Computer Science
	  Northeastern University, Boston 02115
CSnet: mckee@Corwin.CCS.Northeastern.EDU	Phone: (617) 437-5204
Usenet: in New England, it's not unusual to have to say
	"can't get there from here"
gilbert@hci.hw.ac.UK (Gilbert Cockton) (08/05/87)
In article <8707270710.AA05885@ucbvax.Berkeley.EDU> mckee@CORWIN.CCS.NORTHEASTERN.EDU writes [...] a lot, but his go at describing types of not-quite-sciences is interesting.  For me, AI should be one of the

> * Interdisciplinary Sciences: Materials Science, Neuroscience
>   (characterized by their subject matter not yielding coherently
>   to any single experimental technique or theoretical paradigm.)

My criticism of AI is that most of the workers I meet are pretty ignorant of the CRITICAL TRADITIONS of ESTABLISHED disciplines which can say much about AI's supposed object of study.  When AI folk do stop hacking (LISP, algebra or logic - it makes no difference; logic finger and algebra wrist are just as bad as the well-known 'computer-bum'), they may do so only to raid a few concepts and 'facts' from some discipline, and then go and abuse them out of sight of the folk who originally developed them and understand their context and deductive limitations.  What some of them do to English is even worse :-)

> (However, I can't resist throwing in my excuse: programming is fun;
> science is hard, often boring, work.  Science is far more rewarding,
> though.)

I think the nail's been hit squarely on the head, but to programming we should add amateur philosophy and idealist logic/algebra as other fun pastimes pursued instead of hard, critical, rigorous argument.  I think the major turn-off of AI work can be summed up as a complete lack of candid scholarship.  The same is unfortunately true of much applications-driven research in computing.  Without reining in AI (or computer applications research) under proper disciplines, I can't really see any prospect of workers developing their critical faculties up to the highest standards of established disciplines.

NB - yes, there are uncritical, unimaginative automata and disreputable charlatans in all disciplines.  But these sorts are not the type who make a DISCIPLINE.  AI seems to have few folk who do want it to be a discipline.
--
Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
JANET: gilbert@uk.ac.hw.aimmi
ARPA: gilbert%aimmi.hw.ac.uk@cs.ucl.ac.uk
UUCP: ..!{backbone}!aimmi.hw.ac.uk!gilbert
andrew@trlamct.OZ.AU (Andrew Jennings) (08/15/87)
In article <108@glenlivet.hci.hw.ac.uk>, gilbert@hci.hw.ac.UK (Gilbert Cockton) writes:

> My criticism of AI is that most of the workers I meet are pretty
> ignorant of the CRITICAL TRADITIONS of ESTABLISHED disciplines which
> can say much about AI's supposed object of study.  When AI folk do stop
> hacking (LISP, algebra or logic - it makes no difference, logic finger
> and algebra wrist are just as bad as the well known 'computer-bum'),
> they may do so only to raid a few concepts and 'facts' from some
> discipline, and then go and abuse them out of sight of the folk who
> originally developed them and understand their context and deductive
> limitations.  What some of them do to English is even worse :-)

I am afraid I cannot let this pass.  It almost appears as if you view programming as charlatanism in itself!  Suffice it to say that if we view AI as an empirical search, then we have a definite criterion: either the program works or it does not.  Sure, I'm in favour of CRITICAL thought and CRITICAL appraisal of work in AI; it's just that I don't want to get buried in a pile of useless lemmas (no doubt generated by you and your accomplices).  Why can't you realise the simple truth: a discipline goes through STAGES of development.  First the empirical paradigm dominates, then the engineering paradigm, and last of all the theoreticians, replete with armchairs.

--
UUCP: ...!{seismo, mcvax, ucb-vision, ukc}!munnari!trlamct.trl!andrew
ARPA: andrew%trlamct.trl.oz@seismo.css.gov
Andrew Jennings
Telecom Australia Research Labs