[comp.windows.x] Benchmarking X "stuff"

percy@mips.COM (Percy Irani) (12/30/88)

(I may be opening a can of worms - but here goes..)

How does one "benchmark" (note the quotes please before you flame..)
X software/terminals?

I know this has been asked before (I'll add the article numbers at the
end). I will also quote my favourite line, "lies, damn lies, and then
there are benchmarks" - yet like statistics, people do quote them. Good
examples are VUPs, VAX-MIPS, etc... What about the X world? Is there
a yard-stick which would give someone an "idea" of how the X system
behaves?

Some possible scenarios:
  With so many X terminals/software vendors today, who has the 
  best system (to run the X software) and the best hardware (in terms
  of terminals)?

  One scenario would be to install X software on (say) Suns or MIPS or
  HP or Apollos, connect a few NCD or Visual (etc...) terminals to
  the Ethernet, and run some ---------???--------- command that would be
  a good indicator of:
     1) Good TCP implementation 
	   1-1) Number of packets transferred for the benchmark
	   1-2) Average size of packet
	   1-3) Actual data vs. overhead (TCP and X stuff bundled/
	        unbundled?)
	   1-4) Other..
     2) X code performance
	(This may sound silly, as the code is the same, yet when people
	 want to decide which system/terminal to buy to run X stuff,
	 it could make a difference.....)
	   2-1) Run some "demo" which checks out some (which?) functions
	        of X software (see the rough sketch after this list)
	   2-2) Others (maybe people from Purdue+ can contribute here..)
	   2-3) Known bugs/shortcomings tests (regression tests??)
     3) X terminal performance
	   3-1) How many terminals can one connect running ----????----
		before the system "dies" (please note the quotes again
	        before you flame..)
	   3-2) How many windows can you open on one terminal before the
	        terminal dies
	   3-3) Memory/processor specs...
     4) Other
	   4-1) Interconnectivity (X-Connectathon?)
	   4-2) Known problems/border case tests
	   4-3) Graphics - bitmaps, GKS, PHIGS, PHIGS+,.....
	   4-4) Other ideas...?
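
To give an idea of the sort of thing I mean by 2-1, here is a rough
sketch in plain Xlib.  This is NOT an official benchmark of any kind -
the request count, the window size, and the choice of XFillRectangle
are just arbitrary picks on my part - it simply times a batch of
drawing requests against whatever server $DISPLAY points at:

/* Crude throughput sketch: time NREQS XFillRectangle requests.
 * NREQS and the rectangle geometry are arbitrary choices. */
#include <stdio.h>
#include <sys/time.h>
#include <X11/Xlib.h>

#define NREQS 100000

int main()
{
    Display *dpy = XOpenDisplay(NULL);   /* uses $DISPLAY */
    Window win;
    GC gc;
    int scr, i;
    struct timeval t0, t1;
    double secs;

    if (dpy == NULL) {
        fprintf(stderr, "can't open display\n");
        return 1;
    }
    scr = DefaultScreen(dpy);
    win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 400, 400,
                              0, BlackPixel(dpy, scr), WhitePixel(dpy, scr));
    gc  = XCreateGC(dpy, win, 0, NULL);
    XMapWindow(dpy, win);
    XSync(dpy, False);                   /* window is up before timing starts */

    gettimeofday(&t0, NULL);
    for (i = 0; i < NREQS; i++)
        XFillRectangle(dpy, win, gc, i % 300, i % 300, 50, 50);
    XSync(dpy, False);                   /* wait until the server has done them all */
    gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%d XFillRectangle requests in %.2f s (%.0f req/s)\n",
           NREQS, secs, NREQS / secs);

    XCloseDisplay(dpy);
    return 0;
}

Run the same thing with DISPLAY pointed at each host/terminal
combination and you get at least a crude number to compare; the same
loop creating windows instead of filling rectangles would be a first
stab at 3-2.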

PLEASE, I'M SEARCHING FOR IDEAS -- COMMENTS ARE WELCOME.
It may be that it is not a good idea after all to do "benchmarks"
in the X world - but at least let's know WHY!!

Comments welcome...

I would prefer replies posted back to the net and *NOT* to my personal mailbox.
Thanks.

----------------- Some other articles related to this -----------
 I have copies of these in case you missed them.  I also do not
 want to increase the net traffic, so I'll only post:
    Article number
    Author
    Organization
 There may be some I missed - so for those, apologies in advance..
----------------- Some other articles related to this -----------

Article 7057 of comp.windows.x:
From: scott@applix.UUCP (Scott Evernden)
Organization: APPLiX Inc., Westboro MA

Article 7120 of comp.windows.x:
From: burzio@mmlai.UUCP (Tony Burzio)
Organization: Martin Marietta Labs, Baltimore, MD

Article 7135 of comp.windows.x:
From: dshr@SUN.COM (David Rosenthal)
Organization: The Internet

Article 7403 of comp.windows.x:
From: lewin@savax.UUCP (Stuart Lewin)
Organization: The Internet

Article 6937 of comp.windows.x:
From: tomc@dftsrv.gsfc.nasa.gov (Tom Corsetti)
Organization: Advanced Data Flow Technology Office

harry@hpcvlx.HP.COM (Harry Phinney) (12/31/88)

percy@mips.COM (Percy Irani) writes:

> (I may be opening a can of worms - but here goes..)
> How does one "benchmark" (note the quotes please before you flame..)
> X software/terminals?

With appropriate benchmark programs :-) More seriously, there are some
benchmarks used within the X testing working group of the X Consortium.
These benchmarks, while not comprehensive, do offer some gauge of
performance.  I do not know if (or when) these benchmarks may be made
public (Bob, could you comment?).

>     2) X code performance
>	(This may sound silly, as the code is the same, yet when people
>	 want to decide which system/terminal to buy to run X stuff,
>	 it could make a difference.....)

This doesn't sound at all silly.  If you think all server implementations
on the MIT distribution tape are equal in performance, look again.  Some
of the servers there have had a lot of work done to optimize them for the
hardware they run on, and their performance may surprise you.  In addition,
you would want to test the commercially available servers and libraries
which may differ from the MIT distribution.
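
One thing worth measuring separately when comparing servers (or the
path out to an X terminal) is raw request round-trip time, independent
of any drawing.  Something like the following throwaway sketch - not
one of the working group's tests, and the trip count is arbitrary -
just forces round trips with XGetInputFocus() and averages them:

/* Round-trip latency sketch: each XGetInputFocus() is a full
 * client -> server -> client round trip.  NTRIPS is arbitrary. */
#include <stdio.h>
#include <sys/time.h>
#include <X11/Xlib.h>

#define NTRIPS 1000

int main()
{
    Display *dpy = XOpenDisplay(NULL);
    Window focus;
    int revert, i;
    struct timeval t0, t1;
    double secs;

    if (dpy == NULL) {
        fprintf(stderr, "can't open display\n");
        return 1;
    }

    gettimeofday(&t0, NULL);
    for (i = 0; i < NTRIPS; i++)
        XGetInputFocus(dpy, &focus, &revert);    /* forces one round trip */
    gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%d round trips in %.3f s (%.2f ms each)\n",
           NTRIPS, secs, 1000.0 * secs / NTRIPS);

    XCloseDisplay(dpy);
    return 0;
}

The per-trip number over a local connection versus over the Ethernet to
a terminal also says something about the transport (Percy's item 1),
not just the server's dispatch overhead.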

>	   4-1) Interconnectivity (X-Connectathon?)

An X-Connectathon is a highly visible, but very imprecise test of
conformance to the X protocol.  There are test suites being developed
by the above-mentioned working group which test all Xlib calls, and
test servers for protocol conformance.  These test suites are not yet
complete.

> It may be that it is not a good idea after all to do "benchmarks"
> in the X world - but at least let's know WHY!!

I think it's a _great_ idea!

Harry Phinney

rws@EXPO.LCS.MIT.EDU (Bob Scheifler) (01/01/89)

    More seriously, there are some
    benchmarks used within the X testing working group of the X Consortium.

The X Testing Consortium is not (currently) part of the MIT X Consortium
(since it actually started before the X Consortium did) although it has
been agreed that they will "disband" and reformulate under the X Consortium
once their current work (on server and Xlib test suites) has been completed.
The benchmarks referred to are being developed by CalComp; an early version
was presented in a talk at last year's X Conference.

    I do not know if (or when) these benchmarks may be made
    public (Bob, could you comment?).

Good question, and one that I don't have complete control over.  The current
position of the X Testing Consortium, as I understand it, is that none of
the software will be made generally available until the entire effort is
complete, which will be sometime in 1989.  However, the X Consortium has a
policy that MIT will not itself distribute benchmarks until after they have
been reviewed within the Consortium (this of course does not place any
restrictions on others distributing benchmarks).  We've been doing some
benchmark development of our own (and giving opinions to CalComp on their
benchmarks), and we understand that some folks at DEC have also been
developing some (and that DEC may be interested in getting theirs reviewed).