mitchell@ai.toronto.edu (David Mitchell) (08/09/90)
We are looking for data from real reasoning systems for testing theorem proving strategies. The data required are actual set(s) of queries (or assertions) that a user or users posed to a system. The data could be from any kind of application in which queries are posed to a theorem proving mechanism, but they should be representative of a realistic distribution of queries (ideally, but not necessarily, for individual users). A set of session transcripts/protocols would be fine - that is, list(s) of the form:

    user:   Is X(1) true?
    system: Yes [the answer uses facts X1, X2 and rule Y]
    user:   Find the W's such that f(W) holds.
    system: W=9, W=5 [the answers use facts f1, f2 and rules Q and R]
    user:   Is X(4) true?
    system: Yes [the answer uses facts T1, T2 and T3, and rule Z]
    .
    .
    .

Ideally, data on how often various derivation paths succeeded - i.e., how often some set of base facts is actually used to reach some result - would also be available. Trace information for each query (i.e., a list of the rules and facts used to generate each response) would be valuable if available.

Any sources or pointers would be appreciated.

Thanks,
--
Dave Mitchell - mitchell@ai.toronto.edu / davidm@psych.toronto.edu
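As a concrete illustration, here is a minimal sketch of one way such session transcripts could be represented for analysis. All field and function names here (`QueryRecord`, `fact_usage`, etc.) are hypothetical - the post does not prescribe any machine-readable format - but the records below transcribe the example session above, and the counting function computes the kind of derivation-path statistic the post asks about (how often each base fact is used to reach a result).

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class QueryRecord:
    query: str                                 # what the user asked
    answers: list                              # answers the system returned
    facts: list = field(default_factory=list)  # base facts used in the derivation
    rules: list = field(default_factory=list)  # rules used in the derivation

# The example session from the post, transcribed into records.
session = [
    QueryRecord("Is X(1) true?", ["Yes"], ["X1", "X2"], ["Y"]),
    QueryRecord("Find the W's such that f(W) holds.", ["W=9", "W=5"],
                ["f1", "f2"], ["Q", "R"]),
    QueryRecord("Is X(4) true?", ["Yes"], ["T1", "T2", "T3"], ["Z"]),
]

def fact_usage(records):
    """Count how often each base fact appears in a derivation that
    reached a result - the statistic the request mentions."""
    counts = Counter()
    for rec in records:
        counts.update(rec.facts)
    return counts

print(fact_usage(session))
```

Transcripts in this shape, one list per user session, would carry both the query distribution and the trace information described above.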