SRA@XX.LCS.MIT.EDU (Rob Austein) (09/30/87)
I was going to say something about that, but decided to wait to see if anybody had in fact read Dave's message. You did, so here goes. Yes, SUPDUP as specified in the RFC is character at a time. However, I believe that a very minor enhancement to the protocol would handle that problem. The big advantages of SUPDUP (sales pitch) are that (1) it works in a heterogeneous environment (thus better than rlogin), and (2) it has a wider view of the terminal than just the print head of a hardcopy TTY (thus better than TELNET).

In particular, there is a whole set of useful options under the heading of "The Intelligent Terminal Protocol". Not all of these are documented in the SUPDUP RFCs; for a full explanation of the ITS terminal system see the file "MC: INFO; ITSTTY >" on MC.LCS.MIT.EDU. It's a bit long, so if you're not up for a lot of reading, you want the parts on "Control of the TTY" and "The Intelligent Terminal Protocol".

The model I'm using for the local/remote echo and wakeup problem is the TOPS-20 TEXTI% JSYS, which was mentioned obliquely a few messages ago when somebody referred to TOPS-20 EMACS enhancements. For those who aren't familiar with TOPS-20, one of the arguments to TEXTI% is a break mask, a 128-bit vector indicating which characters should cause the TEXTI% call to return. I believe that the EMACS extensions that were mentioned were based on an extension to TEXTI% which would cause any character with the meta bit (octal 200) turned on to act as a wakeup. I may be wrong about this; I've never seen the code.

Presumably the entity that decides what the break mask should be is the server (where application programs are running), while the entity that implements the break mask is the client (where the physical display terminal is). So presumably the "change the break mask" sequence would begin with a %TDxxx code. I can't think of any reason why the client would want to tell the server about break masks, but if so the process would be identical except for the escape character (a 30x code, presumably). Henceforth I'll refer to the entity sending the break mask as the "sender" and the entity receiving the break mask as the "recipient".

For the 12-bit character set SUPDUP permits, a complete break mask would be rather cumbersome, but there's a natural way to compress it. Make the first data byte a flag byte, with one flag per bucky bit, one flag for characters with no bucky bits, and two unused bits. The flag bits indicate which classes of characters the sender wants the recipient to try to optimize: if a flag bit is set, a break mask is supplied; if a flag bit is cleared, no break mask is supplied and the recipient should fall back to the default behavior (wake on every character). The most common message would presumably be one with the no-bucky-bit and control-bucky-bit flags set and all others cleared, indicating that any meta, super, or hyper characters are wakeups. In general, if a program doesn't expect to see a class of characters, it should probably wake up on them so that it can tell the user about typing errors ASAP.

The flag byte is followed by a series of break masks (128 bits in 16 bytes, presumably). For completeness, there would have to be a separate break mask for each case that the sender has indicated in the flag byte; i.e., just because the sender wants to break on CONTROL-A and META-A doesn't mean it wants to break on CONTROL-META-A. This is part of the reason for the flag byte, so that the sender needn't send a lot of masks that are all ones.
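To make this concrete, here is a rough C sketch of what I have in mind, one function for each end of the connection. Everything in it (the flag bit assignments, the %TD code value, the structure and function names) is made up for illustration and is not part of any SUPDUP RFC:

/*
 * Sketch of the proposed "set break mask" message.  The flag bit
 * assignments, the %TD code value, and all names below are made up
 * for illustration; none of this appears in the SUPDUP RFCs.
 */
#include <string.h>

#define TD_SETBRK 0237          /* placeholder %TDxxx escape; value made up */

/* Flag byte: one bit per class of characters that has a mask following. */
#define BRK_PLAIN 001           /* characters with no bucky bits */
#define BRK_CTL   002           /* Control */
#define BRK_META  004           /* Meta */
#define BRK_SUPER 010           /* Super */
#define BRK_HYPER 020           /* Hyper */
#define NCLASSES  5

struct break_masks {
    unsigned char flags;                /* which classes have masks */
    unsigned char mask[NCLASSES][16];   /* 128-bit mask per class,
                                           indexed by the 7-bit base char */
};

/* Sender side: build the wire form -- escape code, flag byte, then one
 * 16-byte mask for each flag bit that is set, in ascending bit order.
 * Returns the number of bytes written to `out' (at most 2 + 5*16). */
int
encode_break_masks(struct break_masks *bm, unsigned char *out)
{
    int i, n = 0;

    out[n++] = TD_SETBRK;
    out[n++] = bm->flags;
    for (i = 0; i < NCLASSES; i++)
        if (bm->flags & (1 << i)) {
            memcpy(out + n, bm->mask[i], 16);
            n += 16;
        }
    return n;
}

/* Recipient side: should a typed character wake the other end?  `cls' is
 * the character's class index (0 = plain, 1 = Control, ...; what to do
 * with multi-bucky combinations is left open here) and `base' is the low
 * 7 bits of the character.  A class with no mask gets the default
 * behavior: every character is a wakeup.  Bit order within the mask
 * bytes is arbitrary in this sketch. */
int
is_wakeup(struct break_masks *bm, int cls, int base)
{
    if (!(bm->flags & (1 << cls)))
        return 1;
    return (bm->mask[cls][base >> 3] >> (base & 7)) & 1;
}

In this sketch, the common-case message mentioned above (plain and Control flags set, all others clear) would be 2 + 2*16 = 34 bytes on the wire.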
All SUPDUP connections would still start out in character-at-a-time remote-echo mode. Setting the break mask requests local echo of any characters that are not breaks; break characters are still handled remotely. Setting the break mask with a zero flag byte (and thus no following masks) would put the connection back in the default character-at-a-time mode. One extension of this idea would be incremental changes to the break mask; if anybody cares enough to do it, there are always the two unused bits in the flag byte. But the above covers the basic scheme.

Yes, a similar mechanism could be used in TELNET, without having to think about 12-bit characters and bucky bits, but TELNET is really not a very good model for a display terminal. SUPDUP (and the abstract model of terminals and capabilities that underlies it) is a much better model. I think the existing software speaks for itself.

--Rob