[net.math.symbolic] System Comparisons, Test Suites

lseward@randvax.UUCP (Larry Seward) (11/16/84)

A short report, "On the Design and Performance of the Maple System", is
available from the University of Waterloo.  It gives a brief overview of the
architecture and compares Maple, Macsyma and Reduce.  Comparisons are
given for computing determinants, finding GCDs, solving systems of
equations, and for a set of miscellaneous problems.  Not surprisingly, Maple
outperforms Macsyma and Reduce on this self-selected suite of problems, in
some areas perhaps justifiably so.

More important are the reasons why the systems vary.  For example,
on the GCD suite REDUCE does poorly because a) the flag for the comparable
algorithm (EZGCD) was not turned on, and b) the problems used all
had trivial (although random) factors, for which the default algorithm in
REDUCE is known not to perform well.  It is the default because it is
robust, not because it is efficient.  The performance of Maple illustrates
the strength of its multiple-algorithm architecture.  But the problems with
the report also illustrate the problems with relying on benchmarks: it is
hard to evaluate a benchmark without understanding why the systems
differ, and then deciding whether those reasons apply to the
problems one intends to work with.  For example, for many (most?)
applications trivial GCDs do not exist, and the GCD suite is not
representative.
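The flavor of such a GCD benchmark problem can be sketched in a few lines of
modern Python (an illustration only, not code from Maple, Macsyma, or
REDUCE, and not the algorithm any of them uses): build two polynomials that
share a small random-looking common factor, then recover that factor with the
plain Euclidean algorithm over the rationals.

```python
from fractions import Fraction

def poly_mul(a, b):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_mod(a, b):
    """Remainder of a divided by b; assumes b's leading coefficient is nonzero."""
    a = a[:]
    while len(a) >= len(b):
        q = a[-1] / b[-1]
        shift = len(a) - len(b)
        for i, c in enumerate(b):
            a[i + shift] -= q * c
        while a and a[-1] == 0:
            a.pop()                       # strip the now-zero leading terms
    return a

def poly_gcd(a, b):
    """Plain Euclidean GCD: robust, but remainders can grow large coefficients."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while b:
        a, b = b, poly_mod(a, b)
    return [c / a[-1] for c in a]         # normalize to a monic result

# A "trivial factor" style problem: f and g share the factor x + 1.
common = [1, 1]                           # x + 1
f = poly_mul(common, [2, 1])              # (x + 1)(x + 2)
g = poly_mul(common, [3, 1])              # (x + 1)(x + 3)
print(poly_gcd(f, g))                     # the monic GCD; equal to [1, 1], i.e. x + 1
```

A multiple-algorithm system gets to pick a cheap method when the inputs look
like this and fall back to a robust one otherwise, which is exactly the design
point the report's suite rewards.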

Using test suites to evaluate systems is valid only to the extent that the
tests themselves document the features they are probing for.

The report can be obtained from:
  Symbolic Computation Group
  Department of Computer Science
  University of Waterloo
  Waterloo, Ontario
  CANADA N2L 3G1

Larry Seward