atan@park.bu.edu (Ah-Hwee Tan) (06/14/91)
Hi! I am in the process of gathering a list of knowledge-intensive tasks for benchmarking expert system architectures. The list will include descriptions of the tasks, databases of training and testing cases where the tasks have been implemented in neural networks, and listings of knowledge bases where they have been implemented as rules. The tasks need not be large scale, but they should be nontrivial.

The list of tasks will be useful in evaluating expert system architectures that aim to incorporate both pattern processing and logical inferencing capabilities. If you have some knowledge-intensive tasks at hand and wish to contribute, or can direct me to them, please send email to me. The completed list will be shared.

Cheers,

Ah-Hwee Tan                       email: atan@park.bu.edu
Department of Cognitive and Neural Systems
Boston University
ntm1169@dsac.dla.mil (Mott Given) (06/14/91)
From article <ATAN.91Jun13143045@park.bu.edu>, by atan@park.bu.edu (Ah-Hwee Tan):
> Hi! I am in the process of gathering a list of knowledge-intensive
> tasks for benchmarking expert system architectures.

The appropriate benchmark depends upon the kind of tool you are talking about, e.g. Prolog, or LISP, or an expert system shell, or a neural network. One approach that I have seen with expert system shells (in Paul Harmon's books) is to take a small benchmark expert system and see how difficult it is to code in different expert system shells.

This brings up another question: are you concerned with benchmarking the speed of the running application, or the speed and ease of development, or the speed of recompiling the application each time you make a change to it (some tools have incremental compilers while others do not), or what?

--
Mott Given @ Defense Logistics Agency Systems Automation Center,
DSAC-TMP, Bldg. 27-1, P.O. Box 1605, Columbus, OH 43216-5002
INTERNET: mgiven@dsac.dla.mil   UUCP: ...{osu-cis}!dsac!mgiven
Phone: 614-238-9431   AUTOVON: 850-9431   FAX: 614-238-9928
I speak for myself
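To make the "speed of the running application" option concrete: one common framing is to fix a small rule base and a set of starting facts, then time repeated runs of the inference loop in each tool under test. The sketch below is a made-up miniature (the rules, facts, and the `forward_chain` helper are invented for illustration, not any standard benchmark), written in Python only to show the shape of such a timing harness:

```python
import time

# Hypothetical miniature rule base: each rule is (conditions, conclusion).
# A real benchmark would use an agreed-upon knowledge base coded in
# each shell or language being compared.
rules = [
    ({"bird", "healthy"}, "can_fly"),
    ({"can_fly", "hungry"}, "hunts"),
    ({"penguin"}, "bird"),
]

def forward_chain(facts):
    """Fire rules whose conditions hold until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Time many runs of the same fixed inference task.
start = time.perf_counter()
for _ in range(10_000):
    result = forward_chain({"penguin", "healthy", "hungry"})
elapsed = time.perf_counter() - start

print(sorted(result))
print(f"10000 runs in {elapsed:.3f} s")
```

The same task, recoded in each shell, gives a crude apples-to-apples run-time number; the "ease of development" and "recompile speed" questions would need separate measurements (e.g. time to code the rule base, or time to rebuild after editing one rule).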