[comp.lang.postscript] benchmarking postscript processors

chaim@nsc.nsc.com (Chaim Bendelac) (11/26/89)

Is there a "standard" and reasonably repeatable way to measure the
real performance of a postscript processor? Are there any "standard"
(public-domain) benchmark used for this purpose? (I know about the
"blue book" and other standard graphic examples, but they do not make
any claim of being "representative" of real usage, as whetstones and 
dhrystones try to be, in the scientific world).
How is performance measured? Stopwatch including page-eject? "usertime"?
Are pages separated by "quit"s? How does the usertime encapsulation
ensures objectivity? How do you separate I/O from processing from imaging
from printing times? How are caching and other "system" parameters isolated?
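
By "usertime encapsulation" I mean something like the following sketch
(usertime returns elapsed execution time in milliseconds, so it counts
interpretation and imaging but not host transmission or the mechanical
page feed):

    %!PS
    % bench: run a procedure and report elapsed usertime (ms).
    /bench {                 % proc  ->  elapsed-ms
        usertime exch exec   % start the clock, run the procedure
        usertime exch sub    % end - start
    } bind def

    /testpage {              % something representative to draw
        /Times-Roman findfont 24 scalefont setfont
        72 700 moveto (ABCDEFGHIJKLMNOPQRSTUVWXYZ) show
    } bind def

    { testpage } bench       % elapsed ms is left on the stack
    /Courier findfont 10 scalefont setfont
    72 72 moveto 20 string cvs show ( ms) show
    showpage

Comparing that figure with a stopwatch reading for the same job would
give a rough split between interpretation/imaging and I/O plus paper
handling, and moving showpage inside the timed procedure would show how
much time is spent there. But is that considered objective?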

Anyone with some knowledge on postscript benchmarking care to expand?
Thanks!

woody@rpp386.cactus.org (Woodrow Baker) (11/28/89)

In article <13436@nsc.nsc.com>, chaim@nsc.nsc.com (Chaim Bendelac) writes:
> Is there a "standard" and reasonably repeatable way to measure the
> real performance of a postscript processor? Are there any "standard"
> (public-domain) benchmark used for this purpose? (I know about the
> "blue book" and other standard graphic examples, but they do not make
> any claim of being "representative" of real usage, as whetstones and 
> dhrystones try to be, in the scientific world).
> How is performance measured? Stopwatch including page-eject? "usertime"?
> Are pages separated by "quit"s? How does the usertime encapsulation
> ensures objectivity? How do you separate I/O from processing from imaging
> from printing times? How are caching and other "system" parameters isolated?
> 
> Anyone with some knowledge on postscript benchmarking care to expand?
> Thanks!

Well, I used to sell QMS-PS 800s and 800+s against Apple LaserWriters.
Let me share some of the things that I learned.  The amount of memory
in a PostScript printer is significant, as is the version number; the
change from version 38 to 41 brought about a 30% or so increase in
speed.  Memory has an acute impact on printing text.  My favorite
benchmark, if you will, involved formatting six paragraphs of four
lines each, consisting of the upper- and lowercase alphabet, with each
paragraph in a different size of font.  The poor old LaserWriter did
not have enough memory to cache its fonts.  The results: when freshly
turned on, the LaserWriter took something like 2 min 30 sec to produce
the page; when the QMS printer was freshly turned on, it took 2 min 45
sec.  BUT, since most printing is done using the same set of fonts over
and over, I would then rearrange the order of the paragraphs, without
disturbing the formatting, to prove that I was not just printing the
same page twice.  Sending the new page to the LaserWriter that HAD NOT
been turned off took 2 min 30 sec; sending the same page to the QMS
took 17 seconds.  The QMS had enough memory to cache its fonts.  (A
sketch of this test appears below.)

The point I am trying to make is that PostScript is an interpreter,
and as such its efficiency is directly related to the memory
constraints and to the techniques used to code the interpreter.
Benchmarking at the machine-language level is somewhat meaningful, but
not with an interpreter: even something as small as eliminating a
stack check, or choosing a different line of C (var = var + 1 vs.
var++) in the main loop of the interpreter's scanner, can cause
different code to be generated and have a dramatic impact on program
speed...
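
For the curious, the test looks roughly like this in raw PostScript (a
sketch from memory; the exact point sizes are assumed here):

    %!PS
    % Six 4-line paragraphs of the alphabet, each in a different
    % point size, to exercise (or overflow) the font cache.
    /alphabet (ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz) def
    /y 760 def
    [ 8 10 12 14 16 18 ] {          % one paragraph per size
        /sz exch def
        /Times-Roman findfont sz scalefont setfont
        4 {                         % four lines per paragraph
            72 y moveto alphabet show
            /y y sz 2 add sub def   % advance one line
        } repeat
        /y y 10 sub def             % gap between paragraphs
    } forall
    showpage

Reordering the sizes in the array changes the page without changing the
total work; on a printer with enough font-cache memory the second run
is fast, because every character at every size is already rasterized.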

Cheers

Woody

larry@csccat.UUCP (Larry Spence) (11/29/89)

In article <13436@nsc.nsc.com> chaim@nsc.nsc.com (Chaim Bendelac) writes:
>Is there a "standard" and reasonably repeatable way to measure the
>real performance of a postscript processor? Are there any "standard"
>(public-domain) benchmark used for this purpose?

There was a set of "job-level" benchmarks in the Seybold Report a few
months ago.  For example, one just had lots of body text, another was
heavy on halftones and clipping, etc.  They ran the tests on a whole
bunch of PostScript imagesetters, but they would be applicable to any
PS output device.  There were some really strange results, like an
Adobe interpreter that gave different times each time one of the
benchmarks was run (from scratch!).  Some of the interpreters choked
on very large images, others displayed memory-management bugs, ad
nauseam.
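
For reference, a toy page in that spirit (purely illustrative, not one
of the Seybold files) might combine body text with a clipped grayscale
image that the device has to halftone:

    %!PS
    /Times-Roman findfont 11 scalefont setfont
    72 720 moveto (Some representative body text...) show
    gsave
        72 360 translate 200 200 scale
        newpath 0.5 0.5 0.45 0 360 arc clip   % circular clip path
        /row 16 string def                    % one row of samples
        0 1 15 { row exch dup 16 mul put } for
        16 16 8 [16 0 0 16 0 0] { row } image % 16x16 gray ramp
    grestore
    showpage

The actual Seybold test pages are, of course, far larger and more
varied than this.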

Someone else asked this question a while back, and I suggested calling
Seybold to see if they'd send a copy of the benchmark files.  I don't 
know whether that happened or not.

-- 
Larry Spence
larry@csccat
...{texbell,texsun,attctc}!csccat!larry