wm@tekchips.UUCP (Wm Leler) (12/04/85)
Here's another "off-the-cuff" definition of AI, but one which I think
captures the essence of what separates AI CS from regular CS.
Artificial Intelligence is the branch of Computer Science that attempts
to solve problems for which there is no known efficient solution, but
which we know are efficiently solvable (typically because some
intelligence can solve the problem, often in "real time").

A side benefit of AI is that it helps us learn how intelligences
solve these problems, and thus how natural intelligence works.

Example: vision. We do not have any algorithms for recognizing,
say, animal faces in images, but we know it must be possible,
because humans (even infants) can effectively recognize faces.
Solving this problem would help us understand how human vision
works.

					wm
kay@warwick.UUCP (Kay Dekker) (01/04/86)
Sorry about this followup being a little delayed, but I haven't read
the news much recently, so I'm catching up over the weekend...

In article <409@tekchips.UUCP> wm@tekchips.UUCP (Wm Leler) writes:
>A side benefit of AI is that it helps us learn how intelligences
>solve these problems, and thus how natural intelligence works.
>
>Example: vision. We do not have any algorithms for recognizing,
>say, animal faces in images, but we know it must be possible,
>because humans (even infants) can effectively recognize faces.
>Solving this problem would help us understand how human vision
>works.

I'm not sure that this reasoning is totally sound. Sure, we may find
*solutions* to problems, but I don't see that because we produce models
that fit experimental evidence, the models will *necessarily* help us to
understand how the problems are solved "in the flesh". Just because I
have two black boxes that produce the same combinations of outputs for
the same combinations of inputs (for example) doesn't permit me to
reason "They behave identically from the outside, therefore their
interior natures are similar."

							Kay.
-- 
This .signature void where prohibited by law	...ukc!warwick!kay
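[Kay's black-box point can be made concrete with a small sketch. This is a hypothetical Python illustration added for this archive, not code from the thread: two functions that are indistinguishable from the outside on every tested input, yet whose "interior natures" are entirely different.]

```python
# Two "black boxes" with identical observable behaviour but
# completely different internal mechanisms.

def sum_iterative(n):
    """Sum 1..n by explicit iteration -- an O(n) mechanism."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    """Sum 1..n via Gauss's closed-form expression -- an O(1) mechanism."""
    return n * (n + 1) // 2

# Identical outputs for every input we try...
for n in range(100):
    assert sum_iterative(n) == sum_closed_form(n)

# ...yet the mechanisms share almost nothing: matching input/output
# behaviour licenses no conclusion about matching internals.
```

Fitting a model to the observable behaviour of the visual system is, in this sense, like recovering `sum_closed_form` when the "flesh" might be running `sum_iterative`, or something stranger still.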
robert@epistemi.UUCP (Robert Inder) (01/06/86)
In article <2401@flame.warwick.UUCP> kay@flame.UUCP (Kay Dekker)
replies to Wm. Leler's suggestion that "A side benefit of AI is that
it helps us learn how intelligences solve these problems, and thus how
natural intelligence works", saying:

>I'm not sure that this reasoning is totally sound. Sure, we may find
>*solutions* to problems, but I don't see that because we produce models
>that fit experimental evidence, the models will *necessarily* help us to
>understand how the problems are solved "in the flesh". Just because I have
>two black boxes that produce the same combinations of outputs for the same
>combinations of inputs (for example) doesn't permit me to reason "They
>behave identically from the outside, therefore their interior natures are
>similar."

The emphasised "necessarily" is crucial here. Certainly getting "models
that fit experimental evidence" does not mean we KNOW (absolutely, for
sure) that the model is behaving in the same way as the original.
However, as the early chapters of Chomsky's "Rules and Representations"
basically argue, this is true but uninteresting. Every theory is
underdetermined by evidence, and science is always a matter of
believing in (working with) the best model you have got. If the model
fits the available evidence better than any other account, then
(meta-theoretical considerations being equal) it deserves consideration
as an account of how the "real" system behaves.

	Robert Inder.
	University of Edinburgh, Centre for Cognitive Science,
	2 Buccleuch Place, Edinburgh, EH8 9LW, Scotland.
	...!ukc!cstvax!epistemi!robert
	I wish I could come up with a good signature...
friesen@psivax.UUCP (Stanley Friesen) (01/07/86)
In article <2401@flame.warwick.UUCP> kay@flame.UUCP (Kay Dekker) writes:
>
>In article <409@tekchips.UUCP> wm@tekchips.UUCP (Wm Leler) writes:
>>A side benefit of AI is that it helps us learn how intelligences
>>solve these problems, and thus how natural intelligence works.
>
>I'm not sure that this reasoning is totally sound. Sure, we may find
>*solutions* to problems, but I don't see that because we produce models
>that fit experimental evidence, the models will *necessarily* help us to
>understand how the problems are solved "in the flesh".
>

This is especially true given the vastly different hardware used in the
two types of systems. A solution that is effective for the linear or
nearly linear processing of an electronic computer might well be
*quite* different from a solution effective in the *massively* parallel
system found in even the simplest brain.

Even the most massively parallel computer design now contemplated is
essentially just a glorified linear system compared to a brain. The
brain effectively has a nano-processor for each *bit* (or perhaps each
nibble). It is also extensively *pipelined*, having a separate physical
stage for each stage of the computation. Thus the eye itself (the
retina) performs in parallel what amounts to a second derivative of the
light flux! And that is *just* the eye. Try to get a computer to do
*that* for a complete multi-thousand-pixel image in less than a second!
And then *continue* to do it, in real time, indefinitely.
-- 
				Sarima (Stanley Friesen)
UUCP: {ttidca|ihnp4|sdcrdcf|quad1|nrcvax|bellcore|logico}!psivax!friesen
ARPA: ttidca!psivax!friesen@rand-unix.arpa
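[Friesen's retina example, a second spatial derivative of the light flux computed at every point simultaneously, corresponds to what is often modelled as a discrete Laplacian (center-surround) operator. The following is a hypothetical Python sketch added for this archive, not code from the thread; it simulates serially, pixel by pixel, what the retina does in parallel, and the image values are made up.]

```python
def laplacian(image):
    """Discrete 2-D Laplacian of a greyscale image (list of rows),
    using the 5-point center-surround stencil:
        4*I[y][x] - I[y-1][x] - I[y+1][x] - I[y][x-1] - I[y][x+1]
    Border pixels are left at zero for simplicity.  The retina
    computes something like this everywhere at once; a serial
    machine must visit each pixel in turn."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (4 * image[y][x]
                         - image[y - 1][x] - image[y + 1][x]
                         - image[y][x - 1] - image[y][x + 1])
    return out

# A uniform field has zero second derivative everywhere...
flat = [[5] * 4 for _ in range(4)]
assert all(v == 0 for row in laplacian(flat) for v in row)

# ...while a step edge (dark half, bright half) produces a strong
# paired response at the edge columns.
edge = [[0, 0, 9, 9] for _ in range(4)]
print(laplacian(edge)[1])  # → [0, -9, 9, 0]
```

The two nested loops are exactly the serial bottleneck the post complains about: the work per frame grows with the pixel count, whereas in the retina every "pixel" carries its own processor and the whole operator completes in one step.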
kort@hounx.UUCP (B.KORT) (01/08/86)
Stanley Friesen comments on the connection between AI research and our
understanding of human cognitive processes. I would agree that visual
field processing is highly parallel, unlike most conventional computer
architectures. However, it seems to me that much of the
left-hemispheric activity (language processing and symbolic logic) is
sequential, and there may be more overlap with conventional computers
in this area of mental processing.

I am reminded of Marshall McLuhan's thesis in _The Medium is the
Message_, in which he points out that the electronic media (especially
TV) served not so much to communicate the *content* of the programs
("vast wasteland") as to demonstrate the *process* of communication.
TV journalists now refer to themselves as "communicators". Similarly,
AI can be viewed as a polite medium in which very bright people
demonstrate to each other the very best ways of organizing cognitive
processes.

It is certainly true that much of the software logic running in my left
hemisphere was uploaded from computers, having been placed in the
machines by my predecessors, who discovered and implemented successful
and efficient methods for many information-processing and
problem-solving tasks.

It is my thesis that advances in AI are not only the result of human
minds formalizing natural intelligence. Advances in AI also serve to
expand and disseminate the collection of ideas comprising natural
intelligence.
-- 
Barry Kort	...ihnp4!houxm!hounx!kort

A door opens. You are entering another dementia.
The dementia of the mind.