ddj7203@cec1.wustl.edu (David D. Jensen) (11/14/90)
In article <1870008@hpwrce.HP.COM> kingsley@hpwrce.HP.COM (Kingsley Morse) writes:

>...frankly, I don't know how to match a classification algorithm with a
>representative problem.

From what I can tell, almost no one does. Unless a great deal is known about how the domain is structured, it is very difficult to predict how a particular algorithm (and its representation) will perform on a particular problem. Of course, what is known about a problem in advance may be largely a function of the representations and algorithms of *past* techniques.

>it's a safe bet that a nearest neighbor algorithm could
>approximate a multiple regression algorithm, or vice versa, given enough
>training patterns...

While I might be convinced to agree with the first half, I have problems with the second half. Nearest neighbor approaches appear to be very flexible in representing a variety of classification problems. Indeed, they appear to work well on some problems where multiple regression (used for classification) does *not* work well. Perhaps this is why humans seem to use nearest neighbor-like approaches in their classification (see Rosch and Lloyd, Categories and Cognition (?)).

>In summary, would it make sense to develop a classification algorithm which
>can partition a decision space several ways, then focus on the way that
>correctly classifies new cases fastest?

Designing classification algorithms that can employ multiple representations seems a very good idea. However, I think it will be difficult in practice. I certainly think we need to make multiple tools easily accessible to human analysts. As for autonomous intelligent agents, I'd put my money on nearest neighbor (also called prototype or exemplar-based) approaches.

David Jensen
Washington University
ddj7203@CEC1.WUSTL.EDU
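[Editor's note: a minimal sketch, in modern Python, of the kind of problem alluded to above where nearest neighbor succeeds and multiple regression used as a classifier does not. The ring-shaped data set and all function names are invented for illustration; this is not from the original article. Two concentric rings of points are perfectly separable by a 1-nearest-neighbor rule, while a linear regression of the class label on the coordinates, thresholded at 0.5, has essentially no signal to exploit.]

```python
import math
import random

random.seed(0)

def make_ring_data(n):
    """Two concentric rings: class 0 near the origin, class 1 farther out."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        radius = (0.5 if label == 0 else 2.0) + random.uniform(-0.2, 0.2)
        theta = random.uniform(0.0, 2.0 * math.pi)
        data.append(((radius * math.cos(theta), radius * math.sin(theta)), label))
    return data

train = make_ring_data(200)
test = make_ring_data(50)

def nn_classify(point, examples):
    """1-nearest-neighbor: return the label of the closest stored example."""
    def dist2(a):
        return (a[0] - point[0]) ** 2 + (a[1] - point[1]) ** 2
    return min(examples, key=lambda ex: dist2(ex[0]))[1]

def regression_fit(examples, steps=500, lr=0.01):
    """Least-squares fit of label ~ w0 + w1*x1 + w2*x2 by stochastic gradient descent."""
    w = [0.0, 0.0, 0.0]
    for _ in range(steps):
        for (x1, x2), y in examples:
            err = (w[0] + w[1] * x1 + w[2] * x2) - y
            w[0] -= lr * err
            w[1] -= lr * err * x1
            w[2] -= lr * err * x2
    return w

w = regression_fit(train)

nn_acc = sum(nn_classify(x, train) == y for x, y in test) / len(test)
reg_acc = sum(((w[0] + w[1] * x1 + w[2] * x2) > 0.5) == (y == 1)
              for (x1, x2), y in test) / len(test)

print("nearest neighbor accuracy:", nn_acc)
print("regression-as-classifier accuracy:", reg_acc)
```

On this data the nearest-neighbor rule classifies the held-out points nearly perfectly, while the regression line, forced to be flat by the radial symmetry of the classes, hovers near chance.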