[comp.benchmarks] Mathematica Benchmarks

jack@linus.claremont.edu (01/04/91)

Hi,

I just ran some interesting Mathematica benchmarks on a NeXTstation, a
Mac IIci (with math coprocessor), and a DECStation 3100.  The results
are a little surprising.  The Mathematica benchmark is a suite of 41
different tests drawn from a number of different sources.  The full
suite took anywhere from about 2.5 minutes on the NeXT to 22.5 minutes
on the Mac IIci.

A summary of the tests follows (the tests are too long to post here).

NeXTstation vs. DECStation 3100

Slope of best-fit line through per-test times	1.35
  (NeXT time as x, DEC time as y)
Correlation coefficient				0.98

Mean of per-test time ratios (DEC/NeXT)		1.20
Standard deviation of ratios			0.30


NeXTstation vs. Mac IIci with coprocessor

Slope of best-fit line through per-test times	9.51
  (NeXT time as x, Mac time as y)
Correlation coefficient				0.97

Mean of per-test time ratios (Mac/NeXT)		8.94
Standard deviation of ratios			3.52
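
If you want to reduce your own runs the same way, the statistics are
easy to do in Mathematica itself.  Here is a sketch; the data points
below are made up for illustration -- substitute your own
{NeXT time, other machine's time} pairs.

    (* per-test times in seconds: {NeXT, other} -- made-up sample data *)
    data = {{1.2, 1.6}, {0.8, 1.1}, {3.0, 4.1}, {0.5, 0.7}};

    Fit[data, {1, x}, x]                    (* least-squares line, NeXT as x *)

    ratios = Map[ #[[2]]/#[[1]] &, data ];  (* per-test slowdown vs. the NeXT *)
    n = Length[ratios];
    mean = Apply[Plus, ratios]/n
    sd = Sqrt[ Apply[Plus, (ratios - mean)^2] / (n - 1) ]

    (* correlation coefficient of the raw time pairs *)
    xs = Map[First, data];  ys = Map[Last, data];
    mx = Apply[Plus, xs]/n;  my = Apply[Plus, ys]/n;
    r = Apply[Plus, (xs - mx) (ys - my)] /
          Sqrt[ Apply[Plus, (xs - mx)^2] Apply[Plus, (ys - my)^2] ]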

The NeXTstation was about 20-30% faster on this group of benchmarks
than the DECStation 3100.  This is surprising, since the DECStation is
rated at 13.9 Mips and the NeXTstation benches at 15.0 Mips; from the
Mips ratings alone you would expect only about an 8% edge
(15.0/13.9 = 1.08).  I wonder if it has anything to do with RISC vs.
CISC architectures?  (The NeXTstation is CISC --- the DECStation is
RISC.)

The Mac IIci was surprisingly slow.  A 68030 NeXT (same CPU as the
IIci) is only roughly 3 times slower than the NeXTstation.  I wonder
if the performance difference might be due to compiler optimization or
perhaps to hardware (caching?).

The complete tests and the Mathematica benchmark are available via
anonymous FTP from FENRIS.CLAREMONT.EDU.  I will try to benchmark a
Mac IIfx over the next few weeks (I think there is one on campus with
Mathematica).  If anyone has Mathematica running on a different
architecture, please feel free to grab the benchmark and run it.  If
you do so, please put the results back into /pub/submission on FENRIS.
I am really interested in seeing results for a SPARC, an IBM
PowerStation, and a DECStation 5000.

The Mathematica benchmark is by no means perfect.  It does not (nor is
it intended to) replace LINPACK, SPEC, Mips, or any other widely used
benchmark.  There is a fair amount of variation in the results
depending on the specific test.  But I do like the test because it is
fairly quick and portable (assuming the system under test has
Mathematica).  I also like the fact that it is a benchmark built on a
"real life" application; system vs. system performance tends to vary
with the application.  And, as far as I know, no manufacturer has
written a compiler to optimize for this benchmark.
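
To give a feel for the flavor of the suite, here are the kinds of
Timing[] calls it is built from.  (These particular expressions are my
own illustration, not necessarily tests from the actual suite -- grab
the real thing from FENRIS for those.)

    Timing[ Expand[(x + y)^30]; ]                      (* symbolic algebra *)
    Timing[ Integrate[1/(1 + x^4), x]; ]               (* symbolic calculus *)
    Timing[ N[Pi, 500]; ]                              (* bignum arithmetic *)
    Timing[ Inverse[ Table[Random[], {40}, {40}] ]; ]  (* numeric linear algebra *)

Each call returns {cpu time, Null}; the trailing semicolon suppresses
the (sometimes huge) result so only the time comes back.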

Comments?

---Jack

Jack Stewart        		Jack@Hmcvax 		  (Bitnet)
User Support Coordinator,       jack@hmcvax.claremont.edu (Internet)
Harvey Mudd College,            jack@fozzie.claremont.edu (NeXT-Mail)
Claremont, Ca. 91711            714-621-8006