kenm@sci.UUCP (04/05/87)
I have a problem with the X server deciding that my program has died just because it has temporarily become compute intensive and stopped processing input events. The server reacts by disconnecting it. Is there some way I can configure things so that any events after the first 16 or so without response by the client are redirected to the bit bucket? Failing that, is there some way I can asynchronously disable input events without flushing the output queue? I have tried this with a timer interrupt, but everything that plays with the input queue also flushes the output queue, with disastrous consequences if the program was in the middle of XLine or somesuch. I can't tell in advance when the program is going to become compute intensive, because the cause might just be a heavy machine load.

Thanks,
Ken McElvain
decwrl!sci!kenm
tsf@theory.cs.cmu.edu.UUCP (04/07/87)
In article <3645@sci.UUCP> kenm@sci.UUCP (Ken McElvain) writes:
>Is there some way I can configure things so that any events
>after the first 16 or so without response by the client are
>redirected to the bit bucket?

I have a hack that does exactly this. It involves inserting a process between the X server and the application. The middleman routes messages from the server to the application, and from the application to the server. When the buffer associated with the pipe from the middleman to the application becomes full, the middleman starts discarding output from the server. This is a somewhat heuristic hack, but I don't know of an instance where it has failed us yet. I will post the source if anyone is interested in something this ugly; the rough shape of the thing is sketched below.

Tim Freeman
Arpanet: tsf@theory.cs.cmu.edu
Uucp:    ...!seismo!theory.cs.cmu.edu!tsf
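What follows is not Tim's posted source, just a minimal sketch of the relay loop he describes. The descriptor names, the fixed 32-byte EVENT_SIZE, and the assumption that server-to-client traffic can be read and dropped in whole packets are all illustrative; the socket plumbing that accepts the application's connection on a decoy display and dials the real server is omitted.

/*
 * Sketch of the middleman relay loop (illustrative, not the posted
 * source).  Assumes srv_fd is already connected to the real X server
 * and app_fd to the application, and that server-to-client traffic
 * arrives in fixed-size packets that may be dropped whole.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <sys/select.h>
#include <unistd.h>

#define EVENT_SIZE 32           /* assumed fixed event packet size */

static void relay(int app_fd, int srv_fd)
{
    char buf[EVENT_SIZE];
    fd_set rfds;
    ssize_t n, w;
    int maxfd = (app_fd > srv_fd ? app_fd : srv_fd) + 1;

    /* Non-blocking writes toward the application let us detect a
     * full pipe instead of stalling the whole relay. */
    fcntl(app_fd, F_SETFL, fcntl(app_fd, F_GETFL) | O_NONBLOCK);

    for (;;) {
        FD_ZERO(&rfds);
        FD_SET(app_fd, &rfds);
        FD_SET(srv_fd, &rfds);
        if (select(maxfd, &rfds, NULL, NULL, NULL) < 0)
            break;

        /* Requests from the application always go through. */
        if (FD_ISSET(app_fd, &rfds)) {
            n = read(app_fd, buf, sizeof buf);
            if (n <= 0 || write(srv_fd, buf, n) < 0)
                break;
        }

        /* Output from the server: forward it if the application's
         * pipe has room, otherwise send it to the bit bucket.  A
         * real version would reassemble partial reads so that drops
         * land on packet boundaries. */
        if (FD_ISSET(srv_fd, &rfds)) {
            n = read(srv_fd, buf, sizeof buf);
            if (n <= 0)
                break;
            w = write(app_fd, buf, n);
            if (w < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
                break;          /* real error; EAGAIN means discarded */
        }
    }
}

int main(int argc, char **argv)
{
    /* Illustration only: expects two already-connected descriptor
     * numbers, e.g. inherited from a wrapper that did the sockets. */
    if (argc == 3)
        relay(atoi(argv[1]), atoi(argv[2]));
    return 0;
}

Note that discarding is only safe on whole-packet boundaries, and that replies the application is blocked waiting for must never be dropped; distinguishing those from expendable events is presumably part of what makes the real hack "somewhat heuristic".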