Buehring@Intellection.COM (Walt Buehring) (05/05/91)
Question (in a nutshell):

How do *you* go about writing an X application that displays windows on
many X servers, is insensitive to remote server crashes, provides
reasonable interactive response time to all users, and does not consume
oceans of virtual memory?

Possible answers:

1. Within a single process open a separate connection to each display
   and use XtAppPending, XtAppProcessEvent and friends to dispatch
   events from each display.

   Advantages:
   - A single process space consumes the least memory.
   - UI in the same process as the application eases access to
     functions and data.
   - No context switching.

   Disadvantages:
   - If a single display crashes, Xlib is hosed (right???).
   - Any event that triggers a lengthy computation causes all displays
     to wait (perhaps a LONG time).

2. Split UI and application into separate processes and fork a UI
   process for each display.  Communicate with the application code
   thru an IPC channel via some home-grown command protocol.

   Advantages:
   - The application is insensitive to remote server crashes.
   - UI and application may run on separate machines if needed.
   - During lengthy calculations by the application code, the user can
     still refresh windows, pull down menus, etc.

   Disadvantages:
   - You may end up context switching your guts out.
   - Given the size of the Xt and Motif libraries, each UI process may
     become *huge* and impose severe paging penalties.
   - Access to application code and data is made cumbersome.

This question probably comes up often, but I've not been tuned in to
this group.  I'd appreciate any advice, past articles of interest,
pointers to examples, etc.  I'm going nuts looking for the "right"
answer to this problem -- it'd sure be nice to have some company! :-)

A few other details about our application in case it matters: it must
support 10-40 users (displays), may embark on calculations that require
10-15 minutes to complete, and must run on Unix and VMS (using
DECwindows/Motif).

Thanks,

\/\/ Walt Buehring
Intellection, Inc.
Internet: Buehring@Intellection.COM UUCP: uupsi!intell!buehring
mouse@lightning.mcrcim.mcgill.EDU (der Mouse) (05/07/91)
> Question (in a nutshell): How do *you* go about writing an X
> application that displays windows on many X servers, is insensitive
> to remote server crashes, provides reasonable interactive response
> time to all users, and does not consume oceans of virtual memory?

Well, to be blunt about it, I don't.  I have applications that display
on multiple servers, but any connection loss causes the whole
application to die.  If I had to, I would probably take the tack I talk
about under "Advantage" of possible answer 2.

> Possible answers:
> 1. Within a single process open a separate connection to each
>    display and use XtAppPending, XtAppProcessEvent and friends to
>    dispatch events from each display.
>    Disadvantage:
>    - If a single display crashes Xlib is hosed (right???).

Good question.  The Xlib documentation says that if you return from an
I/O error handler function, the process exits.  The MIT source code
does not appear to do this, as far as I can see, meaning that the I/O
error handler can close the display and return, and the process should
survive.  I don't know whether other Xlib implementations conform to
the documentation or to the code.

>    - Any event that triggers a lengthy computation causes all
>      displays to wait (perhaps a LONG time).

If your OS has threads (lightweight processes, whatever), they may
alleviate this somewhat.

> 2. Split UI and application into separate processes and fork a UI
>    process for each display.  Communicate with application code thru
>    IPC channel via some home-grown command protocol.
>    Advantage:
>    - UI and application may run on separate machines if needed.

If you split them that far apart, it amounts to the application being a
server of its own, with each user running a program which functions as
a client of both the application server and the X server.  This may
actually be the cleanest way to do it.

>    Disadvantage:
>    - You may end up context switching your guts out.

I think this will probably not turn out to be a problem.
As I type this, two machines are involved (well, three, but one is just
an IP packet router), with a total of four processes.  Every time I
type a key and see it appear on my screen, here's what happens.  (The
machines involved are xt3, a Sun-3/60 on my desk running an X-terminal
setup, and lightning, a Sun SPARCserver 470.)

- My keystroke generates an interrupt for xt3; the kernel queues
  something somewhere and awakens the X server.
- The X server reads the keystroke, prepares an event, and writes it to
  a network connection.
- xt3's kernel spits out an Ethernet packet.
- lightning's Ethernet hardware receives an Ethernet packet and
  interrupts the kernel.
- lightning's kernel reads the Ethernet packet and wakes up the process
  waiting for something on that connection, which is an xconns.
- xconns wakes up, reads the data, and stuffs it into a local pipe.
- The kernel wakes up mterm because data has arrived on the pipe.
- mterm reads the keystroke event and stuffs a character into the pty.
- The kernel wakes up emacs, which was blocked reading from the pty.
- emacs reads the keystroke, generates an echo (for plain text, this is
  just the typed character; for editor commands, it will be longer),
  and writes this to the pty.
- The kernel wakes up mterm because something is readable on the pty.
- mterm reads the stuff from the pty, decides what it wants to do on
  the display, makes Xlib calls to do so, and flushes the generated
  requests to its pipe to xconns.
- The kernel wakes up xconns because stuff has arrived on the pipe.
- xconns reads the data and stuffs it into the network connection back
  to xt3's X server.
- lightning's kernel spits out an Ethernet packet.
- xt3's Ethernet hardware receives a packet and interrupts the kernel.
- xt3's kernel reads the Ethernet packet and wakes up the X server.
- The X server wakes up and displays things on the screen.

All that, and not only is it usable, it's fast enough that I can't even
detect the delay, which means it's very short (perhaps .01 second?).
The above scenario includes 9 switches between kernel mode and user
mode on xt3, with only one user process involved.  On lightning, there
are 21 switches between kernel mode and user mode, with a minimum of 4
switches from one user-level context to another.

Yes, you may context-switch a lot.  But don't worry about it unless it
proves to be troublesome.

> A few other details about our application in case it matters.  It
> must support 10-40 users (displays), may embark on calculations that
> require 10-15 minutes to complete, and must run on Unix and VMS
> (using DECwindows/Motif).

Some kernels don't allow more than 32 file descriptors per process, so
your solution 1 is out if you have to be portable to such environments;
the application wouldn't be able to open more than some 24-27 displays
(it will need a few file descriptors of its own, plus one per
connection).  Such systems are becoming rarer, but there are still
plenty of 'em around.

I would expect that having everybody's interface lock up for 10-15
minutes is also unacceptable, but I could be wrong; if not, this would
be another reason you couldn't use solution 1.

If you overtly separate the application crunch server and the
front-ends (what I mentioned above), you should have no trouble.  If
not, you may find it difficult to fork() on VMS; I haven't hacked VMS
recently, but what I recall is that it's difficult to do a UNIX-style
fork().

					der Mouse

		old: mcgill-vision!mouse
		new: mouse@larry.mcrcim.mcgill.edu
klute@tommy.informatik.uni-dortmund.de (Rainer Klute) (05/08/91)
In article <WALT.91May4173316@arrisun3.utarl.edu>,
Buehring@Intellection.COM (Walt Buehring) writes:
|> 2. Split UI and application into separate processes and fork a UI
|>    process for each display.  Communicate with application code thru
|>    IPC channel via some home-grown command protocol.
|>
|>    Disadvantage:
|>    - You may end up context switching your guts out.
|>    - Given the size of Xt and Motif libraries, each UI process
|>      may become *huge* and impose severe paging penalties.
|>    - Access to application code and data is made cumbersome.

Another disadvantage: the application won't run under operating systems
without fork().

--
Dipl.-Inform. Rainer Klute   klute@irb.informatik.uni-dortmund.de
Univ. Dortmund, IRB          klute@unido.uucp, klute@unido.bitnet
Postfach 500500    |)|/      Tel.: +49 231 755-4663
D-4600 Dortmund 50 |\|\      Fax : +49 231 755-2386
mouse@lightning.mcrcim.mcgill.EDU (der Mouse) (05/09/91)
>> 2. Split UI and application into separate processes and fork a UI
>>    process for each display.  Communicate with application code thru
>>    IPC channel via some home-grown command protocol.

> Another disadvantage: The application won't run under operating
> systems without fork().

Under "advantages", the poster listed the possibility of having the
application code and the UI code on separate machines.  This brings up
the possibility of splitting the application into two: an "application
server" and a "user interface".  The UI program can then be considered
a client of both the X server and the application server.  If you do
this, you no longer need to depend on fork()....

					der Mouse

		old: mcgill-vision!mouse
		new: mouse@larry.mcrcim.mcgill.edu
thp@westhawk.UUCP ("Timothy H Panton.") (05/12/91)
> der Mouse <mouse@lightning.mcrcim.mcgill.edu>
>> Buehring@intellection.com
>> Disadvantage:
>> - You may end up context switching your guts out.
>
> I think this will probably not turn out to be a problem.

... EMACS example elided ...

> The above scenario includes 9 switches between kernel mode and user
> mode on xt3, with only one user process involved.  On lightning,
> there are 21 switches between kernel mode and user mode, with a
> minimum of 4 switches from one user-level context to another.

And all that is for one keystroke!  If the events are coming from
another program, like a simulation engine producing a few hundred new
values every 1/10th of a second, you have to be a little more careful.
I found that in a particular case we lost about 20% of the CPU to
context switching.  I had to rework the frontend-backend link so that
it was polled every 1/2 second rather than using a signal-based fast
turnaround method; this got us back below 10% overhead on most
machines.  It seemed to cripple an early MIPS machine (I'm still not
sure why; neither were MIPS UK).

It sounds like your situation is nearer mine than Mouse's, in which
case I advise you to be very careful when you design the IPC stuff.
Separate processes _are_ the way to go, but they aren't problem free.

Tim.
+----------------------------------------------------------------------------+
|Tim Panton, Westhawk Ltd.            "Do not meddle in the affairs of        |
|Phone: +44 928722574                  Wizards, for they are subtle and       |
|Email: thp%westhawk.uucp@ukc.ac.uk    quick to anger."                       |
|Paper: Westhawk Ltd. 26 Rydal Grove,             The Lord of the Rings.      |
|       Helsby, Cheshire, WA6 OET. UK.                                        |
+----------------------------------------------------------------------------+
aba@crione.UUCP (Katie Boyle's Problem Page) (05/15/91)
In article <WALT.91May4173316@arrisun3.utarl.edu>
Buehring@Intellection.COM (Walt Buehring) writes:
>Question (in a nutshell):
>
>How do *you* go about writing an X application that displays windows on
>many X servers, is insensitive to remote server crashes, provides
>reasonable interactive response time to all users, and does not
>consume oceans of virtual memory?

Could you try splitting the application into separate processes and
then communicating via the Motif Clipboard?  Or perhaps use "raw"
property communication without the Clipboard.  I've used properties for
Inter Client Communication successfully, but only within the same
display.

If the system has shared libraries, or will have them in the future,
then the problems with the sizes of Xm, Xt and X11 are not (will not
be) so bad.  If the application is distributed across many displays,
then the memory usage is also distributed.

ABA