ed298-ak@violet.berkeley.edu (Edouard Lagache) (10/23/87)
I would like to clarify some aspects of Hubert Dreyfus's work that were overlooked by Stephen Smoliar in his note. I won't try to defend Dreyfus, since I doubt that many people on this SIG are really open-minded enough to consider the alternative Dreyfus proposes, but for the sake of correctness: most of Mr. Smoliar's points are in fact dealt with in Dreyfus's first book. His second book was intended more for the general public, and thus it glosses over a number of important arguments that appear in the first book. As a matter of opinion, I like the first book better, although it is probably important to read both books to understand his full position. The first book is: What Computers Can't Do: The Limits of Artificial Intelligence, Harper and Row, 1979.

One point where Mr. Smoliar misses Dreyfus completely is in his assumption that Dreyfus is talking about models. Dreyfus is far more radical than that. He believes that humans don't make models; rather, they carry a collection of specific cases (images?).

Anyone who is at all honest in this field has to admit that there are a lot of failures to be accounted for. While I feel that Dreyfus is too pessimistic in his outlook, I feel that there is value in looking at his perspective. I would hope that by reflecting on (and reacting against) such skepticism, A.I. researchers would be able to sharpen their understanding of both human and artificial intelligence.

Edouard Lagache
lagache@violet.berkeley.edu
smoliar@vaxa.isi.edu (Stephen Smoliar) (10/27/87)
In article <5601@jade.BERKELEY.EDU> ed298-ak@violet.berkeley.edu (Edouard Lagache) writes:
>
> One point where Mr. Smoliar misses Dreyfus completely is in his
> assumption that Dreyfus is talking about models. Dreyfus is far more
> radical than that. He believes that humans don't make models; rather,
> they carry a collection of specific cases (images?)
>
An area which always seems to have engendered confusion between humanistic and scientific cultures (assuming that there are two such cultures) is the proper definition and usage of the term "model." Let me try to clear the air by stating that I wish to use the term in the same sense that Minsky proposed in his paper "Matter, Mind and Models":

	To an observer B, an object A* is a model of an object A to the
	extent that B can use A* to answer questions that interest him
	about A.

In other words, if you can't answer a question directly from "inspection" of A, you build a model which may yield to inspection more readily. Such a model amounts to an abstraction of A.

Dreyfus is not alone in his image-based theory of memory and/or knowledge. However, I think it is a mistake to assume that just because you have images, you can't have abstraction (or, for that matter, vice versa). What worries me about Dreyfus is that, from what I have read, I do not feel he is that much of a scholar when it comes to surveying the state of the art. I would refer anyone interested in this particular point of view to Chapter 12 of Alvin I. (yes, A. I.) Goldman's EPISTEMOLOGY AND COGNITION. This is not to say that I espouse everything Goldman has to say. In a future issue of ARTIFICIAL INTELLIGENCE, I shall have a review of this book in which I raise some objections, several of which were reasonably responded to by Goldman in a reply which will also be published.

It is not that I object to skepticism towards artificial intelligence. I just feel that there are many writers, other than Dreyfus, who have been able to present their skepticism on a much firmer scholarly foundation.