[comp.benchmarks] Approximate MFLOPS

mccalpin@perelandra.cms.udel.edu (John D. McCalpin) (11/22/90)

> On 21 Nov 90 16:35:17 GMT, mfinegan@uceng.UC.EDU (michael k finegan) said:

michael> Does anyone have some reasonable way of measuring MFLOPS (Linpack
michael> or otherwise :-)) ? Could they email/post the code, or mix used ?
michael> 					mfinegan@uceng.uc.edu

I would normally reply by e-mail, but the word "reasonable" caught my
attention and I would like to know if my idea of "reasonable" matches
anyone else's.

First, the LINPACK 100x100 test is a fairly exact and repeatable
measure of MFLOPS.  It is only "fairly" exact since the vendor is
allowed to re-write the BLAS-1 (Basic Linear Algebra Subroutines)
however s/he wants in order to improve performance.  On the other
hand, the vendor is *not* allowed to modify the source at all -- not
even the comments!  Presumably this is intended to mimic the situation
of running large dusty decks with no time or expertise available for
detailed optimization.  On the down side, the test is awfully small.
The array has only 10,000 elements, which means it can be fully
contained in a 128kB cache for the 64-bit problem and in a 64kB cache
for the 32-bit problem.  For machines with vector units or pipelined
FPU's, the overhead of calling the BLAS routines can be large compared
with the time required to process the average vector length of 67
elements.   
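To make the arithmetic concrete, here is a C sketch (not the benchmark's actual Fortran) of the general-stride DAXPY kernel that dominates the LINPACK inner loop; the cache figures above follow from the array size:

```c
#include <stddef.h>

/* General-stride DAXPY (y = a*x + y), the BLAS-1 kernel at the
 * heart of the LINPACK inner loop.  For the 100x100 problem the
 * array has 10,000 elements: 10,000 * 8 = 80,000 bytes in 64-bit
 * precision (fits in 128kB), half that in 32-bit (fits in 64kB). */
void daxpy(ptrdiff_t n, double a, const double *x, ptrdiff_t incx,
           double *y, ptrdiff_t incy)
{
    for (ptrdiff_t i = 0; i < n; i++)
        y[i * incy] += a * x[i * incx];
}
```

The call overhead mentioned above comes from invoking this routine once per column update, each time on a vector averaging only 67 elements.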

(By the way, it has never been clear to me if producing an
"unsafe" BLAS is legal for this test.  By that I mean a BLAS which
assumes only strides of 1 and which removes all the silly "IF" tests
in the original source.  If the compilation system were smart enough
to use the "unsafe" BLAS only for the LINPACK library (probably by
inlining) and the "safe" BLAS for direct user calls, then this should
be workable.  I have seen significant performance improvements on
vector machines (ETA-10 and Cray Y/MP) by following this approach.)
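For the curious, the "unsafe" idea looks roughly like this in C (a sketch; the real routines are Fortran). The reference DAXPY opens with guard tests, while the stride-1-only variant is just the bare loop:

```c
/* The reference Fortran DAXPY begins with guard tests such as
 *   IF (N.LE.0) RETURN
 *   IF (DA.EQ.0.0D0) RETURN
 *   IF (INCX.EQ.1 .AND. INCY.EQ.1) GO TO 20
 * An "unsafe" version assumes stride 1 and drops them all,
 * leaving a branch-free loop that is easy to inline and
 * pipeline on a vector machine: */
void daxpy_unit(int n, double a, const double *x, double *y)
{
    for (int i = 0; i < n; i++)
        y[i] += a * x[i];
}
```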

There are two other LINPACK tests of interest.  The LINPACK 300x300
test was devised to use the level-2 BLAS (Matrix-Vector ops) and to
allow a slightly larger problem.  This produced huge speed
improvements on the Cray machines, but was a disaster for the
memory-to-memory Cyber 205/ETA-10 because of the non-unit strides
employed.   In any event, this test is the least popular, and has
subsequently been dropped from the report.
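A level-2 BLAS operation amortizes call overhead over a whole matrix rather than one vector. A C sketch of a column-oriented matrix-vector product (y = A*x) shows why access pattern matters so much here:

```c
/* Matrix-vector product y = A*x on a column-major n x n matrix,
 * the kind of level-2 operation the 300x300 test exercises.
 * Walking A one column at a time gives stride-1 memory access;
 * a row-oriented traversal would instead stride by lda, the
 * kind of non-unit-stride pattern that is punishing for a
 * memory-to-memory vector machine. */
void dgemv_col(int n, const double *A, int lda, const double *x, double *y)
{
    for (int i = 0; i < n; i++)
        y[i] = 0.0;
    for (int j = 0; j < n; j++)        /* one column of A at a time */
        for (int i = 0; i < n; i++)
            y[i] += A[i + j * lda] * x[j];
}
```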

The LINPACK 1000x1000 test is often called the "anything goes" test.
The vendor is allowed to solve the system of equations in any way s/he
sees fit, including hand-coding the entire solver in assembly
language.  The only requirements are that the original driver code be
used and that the MFLOPS calculation be based on the number of
operations required for the original LU-decomposition code.  Almost
all vendors of high-performance machines have been able to achieve
something close to their hardware peak performance on this test.  In
the interests of politeness, I will not mention any names here of
those who could *not* do so well -- they are all in the report (see
below).
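For reference, the operation count the report charges for an n-by-n solve is 2n^3/3 + 2n^2 (the LU factorization plus the two triangular solves), regardless of how the vendor actually computed the answer. A small C sketch of the bookkeeping:

```c
/* Operation count used in the LINPACK report for an n x n solve:
 * 2n^3/3 + 2n^2 floating-point operations.  The reported rate is
 * then simply ops / (time * 1.0e6), even if the vendor's solver
 * performed a different number of operations internally. */
double linpack_ops(double n)
{
    return 2.0 * n * n * n / 3.0 + 2.0 * n * n;
}

double linpack_mflops(double n, double seconds)
{
    return linpack_ops(n) / (seconds * 1.0e6);
}
```

So a 1000x1000 solve is charged about 669 million operations; finishing it in one second would score roughly 669 MFLOPS.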

My experience has been that the LINPACK 1000x1000 test (using the
vendor's best technique) is a good estimate of the *real* peak
performance of a computer.  *Very* seldom is it possible to write user
code that performs at a significantly higher MFLOPS rate.  I have also
noticed that for code with very simple vector constructs (simple dyads
and triads, with few library functions) the LINPACK 100x100 case gives
a surprisingly good estimate of the performance attainable by "real"
codes.   By "surprisingly good", I mean that my real codes almost
always run at speeds within a factor of 2 of the LINPACK 100x100
results.  The largest differences are with those machines whose cache
refills are slow (like my Silicon Graphics 4D/25) which run close to a
factor of two slower on large codes than on the LINPACK 100x100.

Enough rambling.  The LINPACK codes and the paper tabulating the
results are available from the netlib server.  Send an e-mail message
to netlib@ornl.gov, with the text:
	send index for benchmark
and the server will send an e-mail message back listing the names and
descriptions of the benchmark codes and other related material.  To
get the LINPACK 100x100 single-precision code, for example, send a
message with the text:
	send linpacks from benchmark

Have fun....
--
John D. McCalpin			mccalpin@perelandra.cms.udel.edu
Assistant Professor			mccalpin@brahms.udel.edu
College of Marine Studies, U. Del.	J.MCCALPIN/OMNET

schreiber@schreiber.asd.sgi.com (Olivier Schreiber) (11/27/90)

In <MCCALPIN.90Nov22090957@pereland.cms.udel.edu> mccalpin@perelandra.cms.udel.edu (John D. McCalpin) writes:
>First, the LINPACK 100x100 test is a fairly exact and repeatable
>measure of MFLOPS.  It is only "fairly" exact since the vendor is
>allowed to re-write the BLAS-1 (Basic Linear Algebra Subroutines)
>however s/he wants in order to improve performance.  On the other
>hand, the vendor is *not* allowed to modify the source at all -- not
>even the comments!  Presumably this is intended to mimic the situation

Reading the Dongarra CS-89-85 report, my understanding is that
for the LINPACK 100x100 test, one is not allowed to modify the
BLAS-1 (Basic Linear Algebra Subroutines) either, as they are
contained in the source code obtained from netlib.
The Dongarra CS-89-85 report may be obtained by
mail netlib@ornl.gov
send performance from benchmark

        which returns a PostScript copy of the paper by J. Dongarra,
        `Performance of Various Computers Using Standard Linear Algebra
        Software in a Fortran Environment'.
--
Olivier Schreiber      schreiber@schreiber.asd.sgi.com        Tel(415)335 7353
                       Advanced Systems Division              MS 7L580
Silicon Graphics Inc., 2011 North Shoreline Blvd. Mountain View, Ca 94039-7311

mccalpin@perelandra.cms.udel.edu (John D. McCalpin) (11/27/90)

>>>>> On 26 Nov 90 19:13:42 GMT, schreiber@schreiber.asd.sgi.com (Olivier Schreiber) said:

Olivier> In <MCCALPIN.90Nov22090957@pereland.cms.udel.edu> mccalpin@perelandra.cms.udel.edu (John D. McCalpin) writes:
>[...] It is only "fairly" exact since the vendor is
>allowed to re-write the BLAS-1 (Basic Linear Algebra Subroutines)
>however s/he wants in order to improve performance. [....]

Olivier> Reading the Dongarra CS-89-85 report, my understanding is that
Olivier> for the LINPACK 100x100 test, one is not allowed to modify the 
Olivier> BLAS-1 Basic Linear Algebra Subroutines either as they are
Olivier> contained in the source code obtained from netlib.

Dongarra certainly seemed to have intended that the vendors produce
optimized BLAS for their machines.  The only "requirement" seemed to
me to be that the source/type of the BLAS be disclosed.  

In the early days, the report contained separate sections for the "All
Fortran" cases and the "Coded BLAS" cases.  I do not believe that this
distinction is maintained in the current (and much abbreviated)
report.
--
John D. McCalpin			mccalpin@perelandra.cms.udel.edu
Assistant Professor			mccalpin@brahms.udel.edu
College of Marine Studies, U. Del.	J.MCCALPIN/OMNET