[comp.parallel] SLALOM Benchmark

carter@iastate.edu (Carter Michael Brannon) (12/20/90)

In article <12330@hubcap.clemson.edu> Bernard Bauer writes:
> I want to build up a short, clear table with information about the fastest
> processors/machines (in comparison with parallel systems). I haven't found
> any comparable overview so far.

     I have just what you're looking for!  Here at the Ames Laboratory at
Iowa State University, we have developed a new benchmark: SLALOM (Scalable,
Language-independent, Ames Laboratory, One-minute Measurement).  This
benchmark runs for a fixed amount of time (one minute), not for a fixed-size
problem.  Consequently, it can fairly benchmark any computer, from a lowly
laptop portable to the most powerful supercomputer (and well beyond).
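
     To make the fixed-time idea concrete, here is a rough sketch in C (not
taken from SLALOM itself) of how a driver might hunt for the largest problem
size that finishes within the one-minute budget.  The solve_problem() routine
below is a made-up placeholder standing in for SLALOM's real setup, solve,
and I/O work.

#include <stdio.h>
#include <time.h>

#define TIME_BUDGET 60.0          /* seconds, as in SLALOM */

static volatile double sink;      /* keeps the compiler from removing the work */

/* Placeholder workload, roughly O(n^3) like a dense solve; NOT the real
   SLALOM problem, just something whose run time grows with n. */
static void solve_problem(int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                s += (double)i * j - k;
    sink = s;
}

static double timed_run(int n)
{
    clock_t start = clock();
    solve_problem(n);
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    int n = 32;

    /* Double n until a complete run no longer fits in the budget... */
    while (timed_run(n) <= TIME_BUDGET)
        n *= 2;

    /* ...then binary-search between the last size that fit and the first
       size that did not. */
    int lo = n / 2, hi = n;
    while (lo + 1 < hi) {
        int mid = lo + (hi - lo) / 2;
        if (timed_run(mid) <= TIME_BUDGET)
            lo = mid;
        else
            hi = mid;
    }

    printf("Largest problem size solved within %.0f s: %d\n", TIME_BUDGET, lo);
    return 0;
}

The largest size that fits is the kind of number SLALOM reports, rather than
a FLOPS rate.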

     The first public announcement of the benchmark appeared in the November
1990 issue of Supercomputing Review.  Since then, we have received a flood of
mail from people wishing to run the benchmark on their machines.  A
comprehensive, ever-changing list of results is available from our ftp site,
along with source code for many versions of the benchmark.  The ftp site is
"tantalus.al.iastate.edu" (IP address 129.186.200.15).

     The result of a SLALOM run is a single number expressing the "size" of
problem the machine was able to solve in one minute.  Machines are ranked by
this problem size rather than by FLOPS for a number of reasons.  First and
foremost, floating-point operations are hardly the only work a computer does
to solve a problem; one must fairly account for all the work a machine must
do to solve a given problem.  One way of doing this is to pose a standard
problem and allow people to solve it in whatever way is most appropriate to
the computer in question.  To quote from the article:

     Most "benchmarks" are simply excerpts from programs that
     have been run on multiple machines, or "synthetic" combin-
     ations of sample operations that correlate poorly with
     entire problems.  Building a good benchmark is a challenge,
     since it has to be unprejudiced toward any machine or
     programming environment, free from undiscovered "shortcuts,"
     and capable of self verification.  Most challenging of all
     is to create a single program that captures salient features
     of scientific computing generally.

     SLALOM can be run on scalar, vector and parallel machines of
     all kinds.

     SLALOM solves a radiosity problem, which amounts to solving a large
system of linear equations, just like LINPACK.  The similarity ends there,
though.  SLALOM also times the I/O required to read the problem parameters
and write the answer, as well as the matrix setup.  Furthermore, the problem
solved is as large as the machine can complete in one minute, which lets
every machine show off what it's made of.
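
     As a rough illustration of what gets charged to the clock, the sketch
below (generic C, not SLALOM's actual code) separates the timing into a setup
phase that builds a dense, diagonally dominant system, a solve phase using
Gaussian elimination with partial pivoting, and an output phase.  The matrix
here is an arbitrary stand-in, not the radiosity coupling matrix SLALOM
constructs.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

/* Solve a*x = b in place by Gaussian elimination with partial pivoting;
   a is n x n, row-major, and the solution overwrites b. */
static void solve_dense(int n, double *a, double *b)
{
    for (int k = 0; k < n; k++) {
        int p = k;                                   /* find the pivot row */
        for (int i = k + 1; i < n; i++)
            if (fabs(a[i*n + k]) > fabs(a[p*n + k]))
                p = i;
        for (int j = 0; j < n; j++) {                /* swap rows k and p */
            double t = a[k*n + j]; a[k*n + j] = a[p*n + j]; a[p*n + j] = t;
        }
        double t = b[k]; b[k] = b[p]; b[p] = t;

        for (int i = k + 1; i < n; i++) {            /* eliminate below */
            double m = a[i*n + k] / a[k*n + k];
            for (int j = k; j < n; j++)
                a[i*n + j] -= m * a[k*n + j];
            b[i] -= m * b[k];
        }
    }
    for (int i = n - 1; i >= 0; i--) {               /* back substitution */
        for (int j = i + 1; j < n; j++)
            b[i] -= a[i*n + j] * b[j];
        b[i] /= a[i*n + i];
    }
}

int main(void)
{
    int n = 500;                      /* stand-in for whatever size fits in 60 s */
    double *a = malloc((size_t)n * n * sizeof *a);
    double *b = malloc((size_t)n * sizeof *b);

    clock_t t0 = clock();
    for (int i = 0; i < n; i++) {     /* "setup": build an arbitrary system */
        b[i] = 1.0;
        for (int j = 0; j < n; j++)
            a[i*n + j] = (i == j) ? n : 1.0 / (1.0 + i + j);
    }
    clock_t t1 = clock();

    solve_dense(n, a, b);             /* "solve" */
    clock_t t2 = clock();

    printf("x[0] = %g, x[n-1] = %g\n", b[0], b[n-1]);   /* "output" */
    clock_t t3 = clock();

    printf("setup %.2f s, solve %.2f s, output %.2f s\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC,
           (double)(t3 - t2) / CLOCKS_PER_SEC);
    free(a);
    free(b);
    return 0;
}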

     We currently have a list of more than 50 machines that have run SLALOM,
spanning over six orders of magnitude in performance, with prices ranging
from $1,000 to $25M.  This list is also available via anonymous FTP.

     Specific comments and questions about SLALOM can be sent to
slalom@tantalus.al.iastate.edu.

Michael B. Carter
carter@iastate.edu

gc@sp12.csrd.uiuc.edu (George Cybenko) (12/21/90)

carter@iastate.edu (Carter Michael Brannon) writes:
>In article <12330@hubcap.clemson.edu> Bernard Bauer writes:
>> I want to build up a short, clear table with information about the fastest
>> processors/machines (in comparison with parallel systems). I haven't found
>> any comparable overview so far.

>     I have just what you're looking for!  ....

The PERFECT Benchmark data (obtainable by sending email to
Cathy Warmbier - warmbier@csrd.uiuc.edu) is another possible source
of numbers.  There is data on virtually all supercomputers of the
1980s and on a new Fujitsu/Siemens model (no NEC SX3 numbers yet, though).
There are performance figures for each of the 13 codes in the benchmark
suite (fluid dynamics, seismic migration, circuit simulation, etc.),
but you are on your own when trying to reduce the data to a single
number for each machine, because of anomalies: rankings of machines
based on different codes come out differently.  The SLALOM approach
is novel and meaningful if you are doing radiosity problems, but
what does it tell you about structural mechanics or molecular
dynamics applications?

That's the problem with trying to reduce performance down to one number.
It's more complicated than that.
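
To make that concrete, here is a toy sketch in C (invented numbers, not
PERFECT data) where a geometric mean over the per-code figures picks a single
winner even though each machine wins on a different code.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Hypothetical MFLOPS for two machines on three codes; machine A wins
       on code 1, machine B wins on code 2, so any single number hides a
       ranking reversal. */
    double machine_a[] = { 120.0,  40.0, 300.0 };
    double machine_b[] = {  60.0, 180.0, 250.0 };
    int ncodes = 3;

    double log_a = 0.0, log_b = 0.0;
    for (int i = 0; i < ncodes; i++) {
        log_a += log(machine_a[i]);
        log_b += log(machine_b[i]);
    }
    printf("geometric mean: A = %.1f, B = %.1f MFLOPS\n",
           exp(log_a / ncodes), exp(log_b / ncodes));
    return 0;
}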


George Cybenko
Center for Supercomputing Research and Development
University of Illinois at Urbana
gc@csrd.uiuc.edu   (217) 244-4145