[comp.windows.x] 128k request limit

garfinke@hplabsc.UUCP (04/10/87)

>	Currently, the X 10 server will close a connection to a client if
>the client sends a request with more than 128 kilobytes of data.  This
>seems unnecessarily restrictive, e.g. for an 8 bit color display, the
>largest pixmap that can be displayed (sent) is only 128x128.  Will this
>restriction be removed in X 11?

Why wait for X11?  Just go to the Read_segment routine in main.c and
change it. 


        if (size + sizeof (XReq) > bufsize[client]) {
            /* must increase the buffer to accommodate what's coming */
#ifndef NOLIMIT
            if (size <= MAXBUFSIZE) {
#endif
                ptr = Xalloc (bufsize[client] = size + sizeof (XReq));
                bcopy (bufptr[client], ptr, bufcnt[client]);
                free (sockbuf[client]);
                sockbuf[client] = bufptr[client] = ptr;
#ifndef NOLIMIT
            } else
                Close_down (client);
#endif
        }

BTW, those of you out there who run the HP server don't have this limit.
Also removed is the silly restriction that disallows the use of ZPixmaps
on monochrome displays.

------
Dan Garfinkel (hplabs!garfinkel)

mlandau@diamond.bbn.com.UUCP (04/11/87)

In comp.windows.x (<69500005@hplabsc.UUCP>), garfinke@hplabsc.UUCP (Dan 
Garfinkel) writes:
>>	Currently, the X 10 server will close a connection to a client if
>>the client sends a request with more than 128 kilobytes of data.  
>>...Will this restriction be removed in X 11?
>
>Why wait for X11?  Just go to the Read_segment routine in main.c and
>change it. 

I've heard this answer ("just go in and change it") a lot where restrictions
in the current version of X are concerned.  Please note that it is *not* a
solution.  Your local copies of X may then allow whatever feature you need,
but how do you propose that other people with unmodified X servers (and maybe
without X sources at all, if they have a vendor-supplied X) run applications
that depend on these hacks?  (Extensible window servers?  Hmmm....)
--
 Matt Landau				    mlandau@diamond.bbn.com

serge@ucbarpa.Berkeley.EDU.UUCP (04/13/87)

In article <870410093512.1.RWS@KILLINGTON.LCS.MIT.EDU> RWS@ZERMATT.LCS.MIT.EDU (Robert Scheifler) writes:
>
>	    Currently, the X 10 server will close a connection to a client if
>    the client sends a request with more than 128 kilobytes of data.
>
>Yes and no.  At connection setup the server states the maximum request size
>that it will accept; the maximum value it can state is 256Kb.

	Why is there a limit at all, e.g. why not allow up to whatever the
server's (operating system/environment/hardware imposed) limit is?

						Serge
						serge@ucbvax.berkeley.edu
						...!ucbvax!serge

RWS@ZERMATT.LCS.MIT.EDU.UUCP (04/13/87)

    >		  At connection setup the server states the maximum request size
    >that it will accept; the maximum value it can state is 256Kb.

	    Why is there a limit at all, e.g. why not allow up to whatever the
    server's (operating system/environment/hardware imposed) limit is?

In the V11 encoding, each request/reply includes a uniform length field
giving the total length of the request/reply.  This makes it much easier
to handle I/O in the normal case (and also makes it possible to write
certain kinds of IPC filters).  The request length field is 16 bits,
expressed in units of 4 bytes.  This is perfectly adequate for
essentially everything but sending huge images; making the length field
32 bits would mean wasted space in nearly all requests.  (The reply
length field is 32 bits, since replies are rare, and since it is useful
to be able to get back a large image in one piece.)  As I have said, I
believe sending large images in chunks is acceptable (and more efficient
in most server implementations, which must buffer an entire request
before executing it).  If one is serious about sending huge images, the
client will probably be running on the same machine as the server, and
one would do well for performance to define a protocol extension to pass
the image directly using a pointer to shared memory, in which case the
length restriction again doesn't matter.

RWS@ZERMATT.LCS.MIT.EDU (Robert Scheifler) (04/14/87)

	    At the risk of invoking your wrath, I would like to ask one
    final question about the maximum request size.  You mentioned that

No wrath from me, I'm happy to answer questions as time permits.

    > Libraries (and perhaps applications) will have to be designed to deal
    > with a run-time controlled maximum size.  Large images will have to be
    > sent in chunks.

    Does that mean that X11's equivalent of XBitmapBitsPut and similar
    functions will automatically break up the data into chunks that are
    smaller than the (server's) maximum request size, or will the users
    that call X11's equivalent of XBitmapBitsPut's (and similar functions)
    have to do that, and if so, will a function be provided that will
    return the maximum request size?

There will certainly be a function that returns the maximum.  As to the
rest, I'll leave the definitive word to the Xlib designer (and prod him
for a response), but I certainly believe there should be routines for
the image requests that break data into chunks, so that callers don't
have to keep reinventing this wheel.   It's unclear whether the
"standard" interface should generate an error on oversize data or
decompose, since the latter does change the semantics with respect to
indivisibility.  Note that the same sorts of issues can also arise in
other requests (e.g., most of the Poly graphic requests).

jg@jumbo.dec.com (Jim Gettys) (04/14/87)

In article <870414083107.2.RWS@KILLINGTON.LCS.MIT.EDU> RWS@ZERMATT.LCS.MIT.EDU (Robert Scheifler) writes:

>There will certainly be a function that returns the maximum.  As to the
>rest, I'll leave the definitive word to the Xlib designer (and prod him
>for a response), but I certainly believe there should be routines for
>the image requests that break data into chunks, so that callers don't
>have to keep reinventing this wheel.   It's unclear whether the
>"standard" interface should generate an error on oversize data or
>decompose, since the latter does change the semantics with respect to
>indivisibility.  Note that the same sorts of issues can also arise in
>other requests (e.g., most of the Poly graphic requests).

My current thinking on the topic is to have XPutImage break image requests
up into chunks behind the caller's back (not yet implemented as of this
instant, but on the agenda for the next month).

There are at least three strategies for the graphics requests.  (BTW, it is
very unfriendly to other clients to make single huge requests unless
unavoidable, as you will be locking other clients out for the duration; one
might consider an application sending tens of thousands of vectors as a
single poly request to be broken in the first place.)

	1) return an error while suppressing the request, making the client
fully responsible for breaking up the request into more bite-sized chunks.
	2) split the poly request up into smaller requests. This has
semantic difficulties at the break point for some of the graphics
operations (in particular join semantics, or some fill operations).
	3) truncate the request silently.

The problem with 1) is that I suspect most programs won't bother to
check for the error and handle it, 2) sometimes gets somewhat wrong answers
on large requests, and 3) wimps out entirely, giving you very wrong
answers.  I currently lean toward 2), but am very open to suggestions
at this point.

RWS@ZERMATT.LCS.MIT.EDU.UUCP (04/14/87)

    From: jumbo!jg@decwrl.dec.com  (Jim Gettys)

								   BTW, it is
    very unfriendly to other clients to make single huge requests unless
    unavoidable, as you will be locking other clients out for the duration;

"will" should be "might"; not all servers will have single-threaded
execute-to-completion implementations.