WILKINS@SRI-AI.ARPA (12/05/83)
From: Wilkins <WILKINS@SRI-AI.ARPA>

From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>

    They then resort to arcane languages and to attributing 'mental'
    characteristics to what are basically fuzzy algorithms that have been
    applied to poorly formalized or poorly characterized problems. Once
    the problems are better understood and are given a more precise formal
    characterization, one no longer needs "AI" techniques.

I think Professor McCarthy is thinking of systems (possibly not yet built) whose complexity comes from size, not from imprecise formalization. A huge AI program has lots of knowledge. All of it may be precisely formalized in first-order logic or some other well-understood formalism, and this knowledge may be combined and used by well-understood, precise inference algorithms. Yet because of the (for practical purposes) infinite number of inputs and possible combinations of the individual knowledge formulas, the easiest (best? only?) way to describe the behavior of the system is by attributing mental characteristics. Some AI systems approaching this complexity already exist.

This has nothing to do with "fuzzy algorithms" or "poorly formalized problems"; it is just the inherent complexity of the system. If you think you can usefully explain the practical behavior of any well-formalized system without using mental characteristics, I submit that you haven't tried it on a large enough system (e.g., some systems today need a larger address space than that available on a DEC 2060 -- combining that much knowledge can produce quite complex behavior).