[comp.windows.x] X benchmarking

bowbeer@APOLLO.COM (Joe Bowbeer) (05/05/89)

I have a question concerning the synchronization of input and
output that is motivated by the need to measure X graphics
performance.  Also a related question about the timing of output
requests versus input events.

How can a benchmark client tell when the graphics requests
have really finished? I saw one benchmark client that used
an XQueryPointer() call for this purpose. According to my
understanding, XQueryPointer() will flush Xlib's buffer
and cause the server to process the graphics requests, but
if, say, the graphics hardware is itself buffered, the
graphics requests may not have actually finished at the
time the server is responding to the XQueryPointer request.
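Roughly, the pattern looked like this (a sketch from memory, not the
actual benchmark code; the request count and drawing calls are just
placeholders):

#include <X11/Xlib.h>
#include <sys/time.h>
#include <stdio.h>

#define NREQS 1000   /* hypothetical batch size */

void time_lines(Display *dpy, Window win, GC gc)
{
    struct timeval start, stop;
    Window root_ret, child_ret;
    int rx, ry, wx, wy;
    unsigned int mask;
    int i;

    gettimeofday(&start, NULL);
    for (i = 0; i < NREQS; i++)
        XDrawLine(dpy, win, gc, 0, i % 500, 499, i % 500);
    /* Round trip: Xlib flushes its buffer and waits for the reply,
       so the server has at least dispatched every request... */
    XQueryPointer(dpy, win, &root_ret, &child_ret,
                  &rx, &ry, &wx, &wy, &mask);
    gettimeofday(&stop, NULL);
    /* ...but buffered graphics hardware may still be drawing. */
    printf("%ld usec\n",
           (stop.tv_sec - start.tv_sec) * 1000000L
           + (stop.tv_usec - start.tv_usec));
}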

Right? If so, how can a benchmark client tell when the
graphics requests are really finished?

Joe Bowbeer

  bowbeer@apollo.com
  {attunix,decvax,mit-eddie}!apollo!bowbeer
-------

jim@EXPO.LCS.MIT.EDU (05/05/89)

> How can a benchmark client tell when the graphics requests
> have really finished?

Only by doing a request that uses the pixels in question to generate a reply,
such as GetImage.

Of course, doing even a single pixel GetImage will have a major impact on the
timing statistics, so the benchmark needs to be very careful to identify
exactly what it is trying to test (server response time, actual graphics
rendering time, etc.). 
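A minimal sketch of that approach, assuming the benchmark just drew into
the window "win" (the coordinates are placeholders; fetch a pixel the
drawing actually touched):

#include <X11/Xlib.h>
#include <X11/Xutil.h>

void wait_for_rendering(Display *dpy, Window win)
{
    /* The reply cannot be generated until the pixel has actually been
       rendered, so returning from this call marks true completion. */
    XImage *img = XGetImage(dpy, win, 0, 0, 1, 1, AllPlanes, ZPixmap);
    if (img != NULL)
        XDestroyImage(img);
}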


> I saw one benchmark client that used
> an XQueryPointer() call for this purpose. According to my
> understanding, XQueryPointer() will flush Xlib's buffer
> and cause the server to process the graphics requests, but
> if, say, the graphics hardware is itself buffered, the
> graphics requests may not have actually finished at the
> time the server is responding to the XQueryPointer request.

Yup.  You'll see this in any multithreaded server (graphics coprocessors are
a "simple" form of multithreading).

This is one of many reasons why benchmarking X servers is really hard to do in
a general way.  A real benchmark will provide an analysis of every single
request series, describing:

	o  What situations is the test trying to model?

	o  Why is it relevant to various types of applications (in other words,
		when might real applications do this same operation)?

	o  What "environmental" factors (e.g. transport type and loading, 
		client loading, application load on the server,
		length of time since server was started, length of time since
		client and server were booted, etc.) might affect the results?

	o  What efforts does the test make to explain different results on 
		different platforms and to avoid special cases (e.g. hardware 
		vs. software cursors being in the way, window size, depth, and
		location, alignment of graphics operations, particular bit 
		patterns in pixel values, special clipping, etc.)?


Even testing a single operation is an amazing amount of work.  


						Jim Fulton
						MIT X Consortium

stroyan@hpfcdc.HP.COM (Mike Stroyan) (05/06/89)

> Right? If so, how can a benchmark client tell when the
> graphics requests are really finished?

> Joe Bowbeer

Firing from the hip, it seems that XGetImage of a pixel would have to
wait for rendering to complete.  The time required for the XGetImage
might distort measurements a bit.  A benchmark could compare a set of
operations followed by XGetImage with a lone XGetImage.  The difference
should be close to the true time for all but the most bizarre server
implementations.
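
A rough sketch of that comparison (the helper names and the drawing
loop are made up for illustration):

#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <sys/time.h>

static long elapsed_usec(struct timeval *a, struct timeval *b)
{
    return (b->tv_sec - a->tv_sec) * 1000000L + (b->tv_usec - a->tv_usec);
}

static void sync_pixel(Display *dpy, Window win)
{
    /* One-pixel XGetImage forces rendering of prior requests to finish. */
    XImage *img = XGetImage(dpy, win, 0, 0, 1, 1, AllPlanes, ZPixmap);
    if (img != NULL)
        XDestroyImage(img);
}

long benchmark(Display *dpy, Window win, GC gc)
{
    struct timeval t0, t1, t2;
    int i;

    gettimeofday(&t0, NULL);
    for (i = 0; i < 1000; i++)              /* operations under test */
        XDrawLine(dpy, win, gc, 0, 0, 499, 499);
    sync_pixel(dpy, win);                   /* ops followed by XGetImage */
    gettimeofday(&t1, NULL);
    sync_pixel(dpy, win);                   /* lone XGetImage */
    gettimeofday(&t2, NULL);

    /* rendering time ~= (ops + XGetImage) - (lone XGetImage) */
    return elapsed_usec(&t0, &t1) - elapsed_usec(&t1, &t2);
}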

The time based on XQueryPointer should be of interest as well.  If a
pipelined graphics processor can perform rendering in parallel, then the
shorter time required to put data in the pipe will reflect the actual
performance of some applications.

Mike Stroyan, stroyan@hpfcla.hp.com