[comp.ai.shells] Expert System Shells Speed Comparison

srt@aerospace.aero.org (Scott TCB Turner) (12/20/90)

I've been studying the performance of various expert system shells
lately, and I thought the results interesting enough to post.  For
each shell I wrote the simplest possible rule that would fire
repeatedly and timed the number of firings per minute.  Obviously,
this is an overly simplistic test.  It would be better to time some
kind of "realistic" set of rules, but I haven't the time or energy to
do that.  This test does, I think, give some ideas about the basic
speed of the rule interpreter.

All the timings were done on a Vaxstation 3100.  The timing for ART-IM
is estimated, since I only had an executable for a PC.  I used the
comparative speeds of CLIPS on the Vaxstation and the PC to estimate
ART-IM's times on a Vaxstation.

Shell			Approx. Rules/Minute
G2				1700
Nexpert				4000
ART-IM				5500
CLIPS			       49000

CLIPS is the obvious big winner, despite the fact that
incrementing a counter in CLIPS requires a retract and an assert
against the fact database.
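The original post doesn't show the actual rule used, but a minimal
CLIPS counter rule of the kind described (fire repeatedly, retracting
and re-asserting a counter fact each time) might look something like
this -- the names "start" and "count-up" are my own, not Scott's:

```clips
; Seed fact so the rule has something to match on.
(deffacts start
  (counter 0))

; Each firing retracts the old counter fact and asserts a new one,
; which re-activates the rule, so it fires indefinitely.
(defrule count-up
  ?f <- (counter ?n)
  =>
  (retract ?f)
  (assert (counter (+ ?n 1))))
```

Timing a (reset) followed by a bounded (run N) of this rule and
dividing by the elapsed time would give the firings-per-minute figure
the table reports.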

						-- Scott Turner

seim@tub.uucp (Kai Seim) (01/10/91)

In article <7422@uklirb.informatik.uni-kl.de> srt@aerospace.aero.org (Scott TCB Turner) writes:


>I've been studying the performance of various expert system shells
>lately, and I thought the results interesting enough to post.  For
>each shell I wrote the simplest possible rule that would fire
>repeatedly and timed the number of firings per minute.  Obviously,
>this is an overly simplistic test.  It would be better to time some
>kind of "realistic" set of rules, but I haven't the time or energy to
>do that.  This test does, I think, give some ideas about the basic
>speed of the rule interpreter.

I don't think this kind of measure is very realistic. I only have
experience with one hybrid expert system shell, named babylon (which is
implemented by the GMD and VW-GEDAS, a German computer science research
institution and the software house of VW). And in my view it is not
enough to fire a single rule to measure the performance of a rule
interpreter.

This shell uses frame types as predicates, slots of frame types as
predicates, free-text (user-defined) predicates, and so on. My problem
with your kind of measure is this: these different kinds of predicates
will (I presume) execute at different speeds. How can you measure that?
CLIPS, as far as I have read or heard, is a rather primitive rule-based
shell. I presume it needs more rules to get the same results as a
hybrid shell?

I would like to discuss these things.
>
>Shell			Approx. Rules/Minute
>G2				1700
>Nexpert				4000
>ART-IM				5500
>CLIPS			       49000
>
>CLIPS is the obvious big winner.  This despite that fact that
>incrementing a counter in CLIPS requires an assert and retract from
>the database.
>
So I can't agree with your interpretation of your measurements.
>						-- Scott Turner

With regards

Kai Seim

Kai Seim			email:	seim@opal.cs.tu-berlin.de
Taborstr. 14a			phone:	+ 49 30 6125451
D 1000 Berlin 36		organisation: University of Technology Berlin