davidh@dent.Berkeley.EDU (David S. Harrison) (12/18/88)
[X11]
A co-worker and I recently had an argument about the appropriate use of
graphics contexts in X11. I dimly recall a similar discussion in this
newsgroup, but I cannot recall the final recommendation.
Should an application maintain many graphics contexts and swap among
them as appropriate, or maintain a few graphics contexts and change
their fields as appropriate? My co-worker advocates the first approach,
for these reasons (a sketch of this style follows the list):
1. The application will run well on hardware that supports multiple
graphics contexts.
2. Even if the hardware does not support multiple graphics contexts,
the server could optimize the swap between contexts by comparing
the currently loaded context with the new context.
3. Traffic between client and server is reduced. The (rather large)
amount of information in the graphics context structure is shipped
only once for each context. From that point on, the contexts are
maintained on the server and no further modifications are required.
4. In some sense, the complexity of the application is reduced. No
code has to be written to change existing graphics contexts.
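To make the first approach concrete, here is a minimal sketch (the
fields and values are illustrative, not from our application):

    #include <X11/Xlib.h>

    /* Approach 1: create one GC per drawing style up front,
     * then merely select among them at drawing time. */
    void draw_with_two_gcs(Display *dpy, Window win, int scr)
    {
        XGCValues v;
        GC thin_gc, thick_gc;

        v.foreground = BlackPixel(dpy, scr);
        v.line_width = 0;     /* width 0 = server's fastest line */
        thin_gc = XCreateGC(dpy, win, GCForeground | GCLineWidth, &v);

        v.line_width = 5;
        thick_gc = XCreateGC(dpy, win, GCForeground | GCLineWidth, &v);

        /* No change requests here, just a different resource ID. */
        XDrawLine(dpy, win, thin_gc, 0, 0, 100, 100);
        XDrawLine(dpy, win, thick_gc, 0, 100, 100, 0);
    }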
However, I vaguely remember that Robert Scheifler posted a message
encouraging applications to maintain a small number of graphics
contexts and change them when necessary. The reasons I came up with
for that recommendation are (a contrasting sketch follows the list):
1. Currently, very few hardware platforms truly support multiple
graphics contexts. Those that do will often have very few. This
means updates to hardware contexts will occur anyway.
2. In a graphics intensive application (like the one we are porting
to X11), changes in graphics contexts are not frequent. Thus,
the increased client-server communication caused by the
occasional request to change a graphics context is insignificant
compared to the number of graphics requests issued by the
application.
3. It is possible for the server to optimize changes to the current
graphics context by noting the fields that change in the request
from the application. No comparison of all graphics context
fields is required.
4. Using many graphics contexts consumes resources in the server.
Many servers may have very strict limits on resources. The best
example is X terminals. These devices may have very little extra
memory for storing many graphics contexts.
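By way of contrast, the second approach keeps one GC and issues a
change request only when a field actually differs (again just a
sketch, with illustrative values):

    #include <X11/Xlib.h>

    /* Approach 2: one shared GC; send a change request only for
     * the fields that actually differ between uses. */
    void draw_with_one_gc(Display *dpy, Window win, int scr)
    {
        XGCValues v;
        GC gc;

        v.foreground = BlackPixel(dpy, scr);
        gc = XCreateGC(dpy, win, GCForeground, &v);
        XDrawLine(dpy, win, gc, 0, 0, 100, 100);

        /* Only the line width changes, so only that field is sent. */
        v.line_width = 5;
        XChangeGC(dpy, gc, GCLineWidth, &v);
        XDrawLine(dpy, win, gc, 0, 100, 100, 0);
    }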
Can those with greater knowledge of X11 design shed some light on this
issue?
David Harrison
UC Berkeley Electronics Research Lab
(davidh@ic.Berkeley.EDU, ...!ucbvax!davidh)
rws@EXPO.LCS.MIT.EDU (Bob Scheifler) (12/20/88)
You seem to have laid out the issues fairly well. I still believe that
using a small number of GCs is the right choice, and the CLX interface
is one attempt to make this reasonable on the client side. The
Intrinsics mechanism of read-only GCs can be viewed as in conflict with
this choice, and it would be interesting to give this area more
thought, but in practice it does result in a small number of GCs.
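(For readers who haven't met it, the Intrinsics mechanism referred to
is XtGetGC, which hands out shared read-only GCs keyed on their values.
A minimal sketch, with an illustrative mask:)

    #include <X11/Intrinsic.h>

    /* XtGetGC hands out shared, read-only GCs: widgets that ask
     * for the same values receive the same GC, so the total
     * number of GCs stays small. */
    void draw_setup(Widget w)
    {
        XGCValues v;
        GC gc;

        v.foreground = BlackPixel(XtDisplay(w),
                                  DefaultScreen(XtDisplay(w)));
        gc = XtGetGC(w, GCForeground, &v);
        /* ... draw with gc, but never modify its fields ... */
        XtReleaseGC(w, gc);     /* drop the shared reference */
    }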
grogers@m.cs.uiuc.edu (12/20/88)
Within our XGKS library we found two significant speedups related to
GCs. The first was to use more of them. At first we used one GC for
all five GKS output primitives, which meant we were changing the
color, pattern, etc. values in the GC with just about every GKS output
function call. We later changed to using one GC per output primitive
type (sketched below). Although we have no benchmark evidence, the
speedup was visually apparent in all programs.
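Schematically, the change amounts to something like the following (a
sketch only; the names and the drawing call are illustrative, not our
actual XGKS code):

    #include <X11/Xlib.h>

    /* One GC per GKS output primitive, created once at startup;
     * each primitive's attributes then persist in its own GC. */
    enum { PRIM_POLYLINE, PRIM_POLYMARKER, PRIM_TEXT,
           PRIM_FILL_AREA, PRIM_CELL_ARRAY, NPRIMS };
    static GC prim_gc[NPRIMS];

    void init_prim_gcs(Display *dpy, Window win)
    {
        int i;
        for (i = 0; i < NPRIMS; i++)
            prim_gc[i] = XCreateGC(dpy, win, 0, NULL);
    }

    /* Drawing a polyline no longer disturbs the text attributes. */
    void draw_polyline(Display *dpy, Window win, XPoint *pts, int npts)
    {
        XDrawLines(dpy, win, prim_gc[PRIM_POLYLINE], pts, npts,
                   CoordModeOrigin);
    }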
The second speedup was to peek at Xlib's cached GC to avoid making
Xlib calls that set GC values to what they already are. This again
made a noticeable difference. This is because Xlib does not filter out
redundant, no-op changes to the GC.
Based on our experience, I would recommend using more GCs to avoid GC
changes, and modifying the Xlib implementation to filter out the no-op
changes.
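For what it's worth, the same filtering can be had portably, without
looking inside Xlib's GC structure, by shadowing the values yourself.
A sketch (set_foreground is a hypothetical wrapper, and it is correct
only if all foreground changes for the GC go through it):

    #include <X11/Xlib.h>

    /* Hypothetical wrapper: shadow the last foreground we set and
     * skip redundant requests. Correct only if every foreground
     * change for this GC goes through this function. */
    static unsigned long cur_fg;
    static int fg_valid = 0;

    void set_foreground(Display *dpy, GC gc, unsigned long pixel)
    {
        if (!fg_valid || pixel != cur_fg) {
            XSetForeground(dpy, gc, pixel);
            cur_fg = pixel;
            fg_valid = 1;
        }
    }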
Greg Rogers
University of Illinois at Urbana-Champaign
Department of Computer Science
1304 W. Springfield Ave.
Urbana, IL 61801
(217) 333-6174
UUCP: {pur-ee,convex,ihnp4}!uiucdcs!grogers
ARPA: grogers@a.cs.uiuc.edu
CSNET: grogers%uiuc@csnet-relay
rws@EXPO.LCS.MIT.EDU (Bob Scheifler) (12/21/88)
> The second speedup was to peek at Xlib's cached GC to avoid making
> Xlib calls that set GC values to what they already are.
"Peeking" is not portable, the internal implementation of GCs is not part
of the Xlib spec.
> This is because Xlib does not filter out redundant, no-op changes to
> the GC.
Completely wrong. The main reason Xlib does client-side caching is
precisely to filter out such changes. Either you are using a very
broken Xlib, or you haven't really understood what's going on in
your application. Perhaps you could be more explicit about which
GC values and which function calls you think don't do filtering.
Perhaps you could tell us where you got your Xlib implementation.
> Based on our experience, I would recommend using more GCs to avoid GC
> changes, and modifying the Xlib implementation to filter out the
> no-op changes.
Of course, you failed to state anything about the range of hardware
platforms and server implementations you have run XGKS on, or whether
the speedups were uniform across that entire range. Based on my
experience with other people's experience, I would recommend people go
slow on this until there are more facts on the table.