kim@watsup.waterloo.edu (T. Kim Nguyen) (08/05/89)
Anyone seen any mind-blowing (I mean, *GOOD*) definitions of AI? All the books seem to gloss over it...
--
Kim Nguyen                              kim@watsup.waterloo.edu
Systems Design Engineering -- University of Waterloo, Ontario, Canada
aarons@syma.sussex.ac.uk (Aaron Sloman) (08/07/89)
kim@watsup.waterloo.edu (T. Kim Nguyen) writes:
> Date: 5 Aug 89 02:17:40 GMT
> Organization: PAMI Group, U. of Waterloo, Ontario
>
> Anyone seen any mind-blowing (I mean, *GOOD*) definitions of AI? All
> the books seem to gloss over it...
> --
> Kim Nguyen kim@watsup.waterloo.edu
> Systems Design Engineering -- University of Waterloo, Ontario, Canada

Most people who attempt to define AI give limited definitions based on ignorance of the breadth of the field. E.g. people who know nothing about work on computer vision, speech, or robotics often define AI as if it were all about expert systems. (I even once saw an attempt to define it in terms of the use of LISP!)

What follows is a discussion of the problem that I previously posted in 1985 (I've made a few minor changes this time).

-- Some inadequate definitions of AI ------------------------------

Marvin Minsky once defined Artificial Intelligence as '... the science of making machines do things that would require intelligence if done by men'. I don't know if he still likes this definition, but it is often quoted with approval.

A slightly different definition, similar in spirit but allowing for shifting standards, is given in the textbook on AI by Elaine Rich (McGraw-Hill 1983): '... the study of how to make computers do things at which, at the moment, people are better.'

There are several problems with these definitions.

(a) They suggest that AI is primarily a branch of engineering concerned with making machines do things (though Minsky's use of the word 'science' hints at a study of general principles).

(b) Perhaps the main objection is their concern with WHAT is done rather than HOW it is done. There are lots of things computers do that would require intelligence if done by people but which have nothing to do with AI, because there are unintelligent ways of getting them done if you have enough speed. E.g. calculators can do complex sums which would require intelligence if done by people.
Even simple sums done by a very young child would be regarded as an indication of high intelligence, though not if done by a simple mechanical calculator. Was building calculators to go faster or be more accurate than people once AI? For Rich, does it matter in what way people are currently better?

(c) Much AI (e.g. work reported at IJCAI) is concerned with studying general principles in a way that is neutral as to whether it is used for making new machines or explaining how existing systems (e.g. people or squirrels) work. For instance, John McCarthy is said to have coined the term 'Artificial Intelligence' but it is clear that his work is of this more general kind, as is much of the work by Minsky and many others in the field.

Many of those who use computers in AI do so merely in order to test, refine, or demonstrate their theories about how people do something, or, more profoundly, because only with the aid of computational concepts can we hope to express theories with rich enough explanatory power. (Which does not mean that present-day computational concepts are sufficient.)

For these reasons, the 'Artificial' part of the name is a misnomer, and 'Cognitive Science' or 'Computational Cognitive Science' or 'Epistemics' might have been better names. But it is too late to change the name now, despite the British Alvey Programme's silly use of "IKBS" (Intelligent Knowledge Based Systems) instead of "AI".

-- Towards a better definition of AI ------------------------------

Winston, in the second edition of his book on AI (Addison Wesley, 1984), defines AI as 'the study of ideas that enable computers to be intelligent', but quickly moves on to identify two different goals:

    'to make computers more useful'
    'to understand the principles that make intelligence possible'.

His second goal captures the spirit of my complaint about the other definitions.
(I made similar points in my book 'The Computer Revolution in Philosophy' (Harvester Press and Humanities Press, 1978; now out of print).)

All this assumes that we know what intelligence is: and indeed we can recognise instances even when we cannot define it, as with many other general concepts, like 'cause', 'mind', 'beauty', 'funniness'. Can we hope to have a study of general principles concerning X without a reasonably clear definition of X?

Since almost any behaviour can be the product of either an intelligent system (e.g. using false or incomplete beliefs or bizarre motives) or an unintelligent system (e.g. an enormously fast computer using an enormously large look-up table), it is important to define intelligence in terms of HOW the behaviour is produced.

-- Towards a definition of Intelligence ---------------------------

Intelligent systems are those which:

(A) are capable of using structured symbols (e.g. sentences or states of a network; i.e. not just quantitative measures, like temperature or concentration of blood sugar) in a variety of roles, including the representation of facts (beliefs), instructions (motives, desires, intentions, goals), plans, strategies, selection principles, etc.

NOTE.1. The set of structures should not be pre-defined: the system should have the "generative" capability to produce new structures as required. The set of uses to which they can be put should also be open ended.

(B) are capable of being productively lazy (i.e. able to use the information expressed in the symbols in order to achieve goals with minimal effort).

Although it may not be obvious, various kinds of learning capabilities can be derived from (B), which is why I have not included learning as an explicit part of the definition, as some people would.

There are many aspects of (A) and (B) which need to be enlarged and clarified, including the notion of 'effort' and how different sorts can be minimised, relative to the system's current capabilities.
For instance, there are situations in which the intelligent (productively lazy) thing to do is develop an unintelligent but fast and reliable way to do something which has to be done often. (E.g. learning multiplication tables.)

NOTE.2 on "NOTE.1" above. I think it is important for intelligence as we conceive it that the mechanisms used should not have any theoretical upper bound to the complexity of the structures with which they can cope, though they may have practical (contingent) limits such as memory limits and addressing limits. (The notion of "generative power", i.e. which of a mechanism's limits are theoretically inherent in its design and which are practical or contingent on the implementation, requires further discussion. One test is whether the mechanism could easily make use of more memory if it were provided. A table-lookup mechanism would not be able to extend the table if given more space.)

NOTE.3. No definition of intelligence should be regarded as final. As in all science, it is to be expected that further investigation will lead to revision of the basic concepts used to define the field.

Starting from a suitable (provisional) notion of what an intelligent system is, I would then define AI as the study of principles relevant to explaining or designing actual and possible intelligent systems, including the investigation of both general design requirements and particular implementation tradeoffs. The reference to 'actual' systems includes the study of human and animal intelligence and its underlying principles, and the reference to 'possible' systems covers principles of engineering design for new intelligent systems, as well as possible organisms that might develop one day.

NOTE.4. This definition subsumes connectionist (PDP) approaches to the study of intelligence. There is no real conflict between connectionism and AI as conceived of by their broad-minded practitioners.
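The look-up test in NOTE.2 can be sketched in code (my own illustration, not part of the original posting; the function names and the use of Python are assumptions). A fixed table cannot grow beyond its pre-stored entries no matter how much memory it is given, while a small general rule copes with inputs of any size, limited only by contingent resources:

```python
# Unintelligent mechanism: a fixed look-up table of products up to 9x9.
TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def table_multiply(a, b):
    """Fails outside the pre-stored range; extra memory alone would not
    help, since the mechanism has no rule for generating new entries."""
    return TABLE.get((a, b))  # None when (a, b) is beyond the table

def generative_multiply(a, b):
    """Repeated addition: one small rule handles inputs of any size,
    with only practical (memory/time) limits, not theoretical ones."""
    total = 0
    for _ in range(b):
        total += a
    return total

print(table_multiply(3, 4))         # 12   -- within the table
print(table_multiply(12, 34))       # None -- beyond the table's bounds
print(generative_multiply(12, 34))  # 408  -- no inherent upper bound
```

The contrast is the point of the test: given more space, the second mechanism extends automatically; the first does not.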
The study of ranges of design possibilities (what the limits and tradeoffs are, how different possibilities are related, how they can be generated, etc.) is a part of any theoretical understanding, and good AI MUST be theoretically based. There is lots of bad AI -- what John McCarthy once referred to as the 'look Ma, no hands' variety.

The definition of intelligence could be tied more closely to human and animal intelligence by requiring the ability to cope with multiple motives in real time, with resource constraints, in an environment which is partly friendly, partly unfriendly. But probably (B) can be interpreted as including all this as a special case! More generally, it is necessary to say something about the nature of the goals and the structure of the environment in which they are to be achieved. But I have gone on long enough.

Conclusion: any short and simple definition of AI is likely to be shallow, one-sided, or just wrong as a description of the range of existing AI work.

Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QN, England
INTERNET: aarons%uk.ac.sussex.cogs@nsfnet-relay.ac.uk
          aarons%uk.ac.sussex.cogs%nsfnet-relay.ac.uk@relay.cs.net
JANET:    aarons@cogs.sussex.ac.uk
BITNET:   aarons%uk.ac.sussex.cogs@uk.ac
          or aarons%uk.ac.sussex.cogs%ukacrl.bitnet@cunyvm.cuny.edu
UUCP:     ...mcvax!ukc!cogs!aarons or aarons@cogs.uucp
rwex@IDA.ORG (Richard Wexelblat) (08/11/89)
In article <1213@syma.sussex.ac.uk> aarons@syma.sussex.ac.uk (Aaron Sloman) writes:
>kim@watsup.waterloo.edu (T. Kim Nguyen) writes:
>> Anyone seen any mind-blowing (I mean, *GOOD*) definitions of AI? All
>> the books seem to gloss over it...
>Most people who attempt to define AI give limited definitions based
>on ignorance of the breadth of the field. E.g. people who know
>nothing about work on computer vision, speech, or robotics often
>define AI as if it were all about expert systems. (I even once
>saw an attempt to define it in terms of the use of LISP!).

A semi-jocular definition I have often quoted (sorry, I don't know the source; I first saw it in net.jokes) is: AI is making computers work like they do in the movies.

Clearly, this is circular and less than helpful operationally. But it's a good way to set the scene, especially with layfolks.

A problem with the breadth of AI is that as soon as anything begins to be successful, it's not considered AI anymore -- as if the opprobrium of being associated with the AI community were something to get away from as soon as possible. Ask someone in NatLang or Robot Vision if they're doing AI.

--
--Dick Wexelblat |I must create a System or be enslav'd by another Man's; |
 (rwex@ida.org)  |I will not Reason and Compare: my business is to Create.|
 703 824 5511    |                                    -Blake, Jerusalem   |
GA.CJJ@forsythe.stanford.edu (Clifford Johnson) (08/12/89)
Here's a footnote I wrote describing "AI" in a document re nuclear "launch on warning" that only mentioned the term in passing. I'd be interested in criticism. It does seem a rather arbitrary term to me.

Coined by John McCarthy at Dartmouth in the 1950s, the phrase "Artificial Intelligence" is longhand for computers. Today's machines think. For centuries, classical logicians have pragmatically defined thought as the processing of raw perceptions, comprising the trinity of: categorization of perceptions (Apprehension); comparison of categories of perceptions (Judgment); and the drawing of inferences from connected comparisons (Reason). AI signifies the performance of these definite functions by computers.

AI is also a buzz-term that salesmen have applied to virtually all 1980s software, but which to data processing professionals especially connotes software built from large lists of axiomatic "IF x THEN y" rules of inference. (Of course, all programs have some such rules, and, viewed at the machine level, are logically indistinguishable.) The idiom "artificial intelligence" is curiously convoluted, being applied more often where the coded rules are rough and heuristic (i.e. guesses) rather than precise and analytic (i.e. scientific). The silly innuendo is that AI codifies intuitive expertise. Contrariwise, most AI techniques amount to little more than brute trial-and-error facilitated by rule-of-thumb short-cuts.

An analogy is jig-saw reconstruction, which proceeds by first separating pieces with corners and edges, and then crudely trying to find adjacent pairs by exhaustive color and shape matching trials. This analogy should be extended by adding distortion to all pieces of the jig-saw, so that no fit is perfect, and by repainting some, removing others, and adding a few irrelevant pieces. A most likely, or least unlikely, fit is sought.
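The "large lists of IF x THEN y rules" style of software the footnote describes can be sketched as a forward-chaining rule interpreter (my own illustration, not from the footnote; the rule names and fact names are invented for the example):

```python
# Rules: (IF this set of facts all hold, THEN conclude this new fact).
# The fact names are hypothetical, loosely echoing the attack-warning
# example discussed later in the footnote.
RULES = [
    ({"radar_blip", "trajectory_ballistic"}, "possible_missile"),
    ({"possible_missile", "multiple_sensors_agree"}, "probable_attack"),
    ({"radar_blip", "flock_signature"}, "probable_birds"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions are satisfied, adding its
    conclusion to the working memory, until nothing new appears.
    Viewed at this level it is, as the footnote says, just
    conditional branching over a list of IF/THEN axioms."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain(
    {"radar_blip", "trajectory_ballistic", "multiple_sensors_agree"},
    RULES)
print(sorted(derived))  # includes "possible_missile" and "probable_attack"
```

Note that the conclusions are only as good as the axioms: the mechanism chains whatever rules it is given, heuristic or not, which is the footnote's point about rough rules of thumb.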
Neural nets are computers programmed with an algorithm for tailoring their rules of thumb, based on statistical inference from a large number of sample observations for which the correct solution is known. In effect, neural nets induce recurrent patterns from input observations. They are limited in the patterns that they recognize, and are stumped by change. Their programmed rules of thumb, though more complicated, are no more profound: they remain raw "IF... THEN" constructs. Neural nets derive their conditional branchings from underlying rules of statistical inference, and cannot extrapolate beyond the fixations of their induction algorithm. Like regular AI applications, they must select an optimal hypothesis from a simple, predefined set.

Thus, all AI applications are largely probabilistic, as exemplified by medical diagnosis and missile attack warning. In medical diagnosis, failure to use and heed a computer can be grounds for malpractice, yet software bugs have gruesome consequences. Likewise, missile attack warning deters, yet puts us all at risk.
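The simplest instance of "tailoring rules of thumb by statistical inference from labelled samples" is a single perceptron (my own sketch, not part of the footnote). Note how the induced rule is itself just an IF... THEN construct: IF w1*x1 + w2*x2 + b > 0 THEN class 1:

```python
def train_perceptron(samples, epochs=20):
    """samples: list of ((x1, x2), label) with label in {0, 1}.
    The classic perceptron update nudges the weights toward each
    misclassified sample; it can only induce a linear rule, one
    hypothesis from a predefined (linear) set."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = label - predicted
            w[0] += error * x1
            w[1] += error * x2
            b += error
    return w, b

# Induce logical AND from its four labelled observations.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
for (x1, x2), label in samples:
    predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", predicted)  # matches the training labels
```

The limits the footnote mentions show up directly: the net only fits patterns expressible in its hypothesis set (here, linearly separable ones), and a shift in the input distribution after training leaves the frozen weights unchanged.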
andrew@berlioz (Lord Snooty @ The Giant Poisoned Electric Head ) (08/12/89)
In article <4298@lindy.Stanford.EDU>, GA.CJJ@forsythe.stanford.edu (Clifford Johnson) writes:
> [Neural nets] are limited in the patterns that they
> recognize, and are stumped by change.

* flame bit set *

Go read about Adaptive Resonance Theory (ART) before making sweeping and false generalisations of this nature!
--
...........................................................................
Andrew Palfreyman       There's a good time coming, be it ever so far away,
andrew@berlioz.nsc.com  That's what I says to myself, says I,
time sucks              jolly good luck, hooray!
GA.CJJ@forsythe.stanford.edu (Clifford Johnson) (08/13/89)
In <615@berlioz.nsc.com>, Lord Snooty writes:
>In <4298@lindy.Stanford.EDU>, Clifford Johnson writes:
>> [Neural nets] are limited in the patterns that they
>> recognize, and are stumped by change.
>Go read about Adaptive Resonance Theory (ART) before making sweeping
>and false generalisations of this nature!

I would have thought stochastic convergence theory more relevant than resonance theory. What exactly is your point, and what, specifically, should I read?
andrew@berlioz (Lord Snooty @ The Giant Poisoned Electric Head ) (08/13/89)
In article <4318@lindy.Stanford.EDU>, GA.CJJ@forsythe.stanford.edu (Clifford Johnson) writes:
> >In <4298@lindy.Stanford.EDU>, Clifford Johnson writes:
> >> [Neural nets] are limited in the patterns that they recognize,
> >> and are stumped by change.
> > *flame bit set*
> >Go read about Adaptive Resonance Theory (ART) before making sweeping
> >and false generalisations of this nature!
>
> I would have thought stochastic convergence theory more relevant
> than resonance theory.
> What exactly is your point, and what, specifically, should I read?

I refer to "stumped by change", which admittedly is rather inexact in itself. I am not familiar with "stochastic convergence", although perhaps there is another name for it? A characteristic of ART nets is that they are capable of dealing with realtime input and performing dynamic characterisations.

A good start would be "Neural Networks & Natural Intelligence" by Stephen Grossberg (ed), 1988, MIT Press. Enjoy.
--
...........................................................................
Andrew Palfreyman       There's a good time coming, be it ever so far away,
andrew@berlioz.nsc.com  That's what I says to myself, says I,
time sucks              jolly good luck, hooray!