SHEBS@UTAH-20.ARPA (10/01/84)
From: Stan Shebs <SHEBS@UTAH-20.ARPA>

(My my, people seem to get upset, even when I think I'm making
noncontroversial statements...)

It wasn't clear whether Tom Dietterich (and maybe others) understood my
remark on induction.  I was merely pointing out that "induction on one
case" is indistinguishable from "generalization".  Simple-minded
generalization IS easy.  Suppose I have as input a Lisp list (A B)
(presumably the first in a stream), and I tell my machine to create
some hypotheses about what it expects to see next.  Possible hypotheses
are:

    (A B)      - the machine expects to see (A B) forever
    (?X B)     - the machine expects the 2nd element to always be B
    (A ?X)     - similarly, the 1st element to always be A
    (?X ?Y)    - the machine expects 2-element lists

Since these are lists, presumably one could get more elaborate...

    (?X ?Y optional ?Z)
    ...

...and end up with "the most general hypothesis":

    ?X

All of these patterns can be produced just by knowing how to form Lisp
lists; I don't think there are any hidden assumptions or biases (please
enlighten me if there are).  I would say that, in general, one can
exhaustively generate all hypotheses when the domains are completely
specified (e.g. a pattern like (<or A B> B) for the above example has
an undefined entity "or" which has nothing to do with Lisp lists; one
would have to extend the domains in which one is operating).
Generating hypotheses in a more reasonable order is completely
domain-dependent (and no general theory is known).

Getting back to the example, all of the hypotheses are equally
plausible, since there is only one case to work from (unless one wants
to rank these hypotheses arbitrarily somehow; but none can be excluded
at this point).

I agree that selecting representations is very hard; there's not even
any consensus about what representations are useful, let alone about
how to select an appropriate one in particular cases.

(Have I screwed up anywhere in this?  I really wasn't intending to
flame...)

                                stan shebs
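P.S.  For concreteness, here's roughly the sort of exhaustive
generation I have in mind, as a quick (untested) Common Lisp sketch.
It only covers the element-by-element substitution part (not the
variable-length or whole-list patterns), and the ?X1/?X2 naming and
the little matcher are just my own illustration:

    ;;; Sketch only: generate every pattern obtainable from one
    ;;; observed list by replacing any subset of its elements with
    ;;; variables (symbols whose names start with "?").

    (defvar *var-counter* 0)

    (defun fresh-var ()
      (intern (format nil "?X~D" (incf *var-counter*))))

    (defun generalizations (observation)
      "All patterns formed by keeping or variabilizing each element."
      (if (null observation)
          (list nil)
          (let ((rest-pats (generalizations (cdr observation))))
            (append
             ;; keep the first element as a constant...
             (mapcar (lambda (p) (cons (car observation) p)) rest-pats)
             ;; ...or generalize it away to a variable
             (mapcar (lambda (p) (cons (fresh-var) p)) rest-pats)))))

    (defun variable-p (x)
      (and (symbolp x) (char= (char (symbol-name x) 0) #\?)))

    (defun matches-p (pattern input)
      "True if PATTERN matches INPUT; a variable matches any one element."
      (cond ((and (null pattern) (null input)) t)
            ((or (null pattern) (null input)) nil)
            ((or (variable-p (car pattern))
                 (eql (car pattern) (car input)))
             (matches-p (cdr pattern) (cdr input)))
            (t nil)))

    ;; (generalizations '(A B))
    ;;   => ((A B) (A ?X1) (?X2 B) (?X3 ?X1))
    ;;      i.e. the (A B), (A ?X), (?X B), (?X ?Y) hypotheses above

Every pattern GENERALIZATIONS produces satisfies (matches-p pattern
'(A B)), which is just the "equally plausible" point again: with only
one case to work from, nothing distinguishes them.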