MRC@PANDA.COM (Mark Crispin) (06/12/87)
All Telnet servers should have a command that a network user can issue that puts the network terminal in binary mode. PANDA TOPS-20's have the command TERMINAL [NO] NETWORK-BINARY [INPUT | OUTPUT | BOTH]. All Telnet clients should have a command that a Telnet user can issue that puts the Telnet connection in binary mode. TOPS-20 TELNET has an escape sequence and a command to put the connection into so-called "transparent mode."

Binary mode should never, ever, be entered automatically by either a host or a client unless it is damned sure that 8-bit I/O is what is wanted. It must never, ever, be a side effect of some data that a user outputs; e.g. an FFh character output by a user program must be doubled by the operating system and not allowed to be interpreted as a Telnet protocol command (a sketch of the doubling follows below). Binary mode should never, ever, be tied to the host's concept of a terminal in binary mode (Unix calls this "raw" as opposed to "cooked" mode). Certain inferior losing versions of Unix and TOPS-20 both do this -- most of the turkey TOPS-20 implementations have been exterminated, but there are still many turkey Unix implementations out there. Being a turkey leads to the infamous "new Telnet rubout performance problem" of the late 1970's, plus Unix newline confusion.

Implementation details of the TOPS-20 code are available on request; I wrote it. The only TOPS-20's with correct servers are SIMTEL20, STL-HOST1, and DREA-XX; all other TOPS-20's have either completely broken servers or servers with half-assed fixes (e.g. Stanford TOPS-20's).

-------
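A minimal sketch in C of the output-side doubling (IAC stuffing) described above, assuming an 8-bit-clean buffer API; the function name and buffer convention are illustrative, not taken from any of the implementations discussed:

    #include <stddef.h>

    #define IAC 0xFF

    /* Copies src[0..len) into dst, doubling any literal FFh (IAC)
     * byte so the peer cannot mistake it for a protocol command.
     * dst must have room for 2*len bytes; returns bytes written. */
    size_t telnet_stuff_iac(const unsigned char *src, size_t len,
                            unsigned char *dst)
    {
        size_t out = 0;
        for (size_t i = 0; i < len; i++) {
            dst[out++] = src[i];
            if (src[i] == IAC)
                dst[out++] = IAC;    /* IAC IAC = one literal FFh */
        }
        return out;
    }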
JBVB@MX.LCS.MIT.EDU.UUCP (06/17/87)
All Telnet servers should have a command that a network user can issue
that puts the network terminal in binary mode....
All Telnet clients should have a command that a Telnet user can issue
that puts the Telnet connection in binary mode....
I do not think this is the right approach. In particular, given the
current state of the TN3270 world, there are a number of implementations
in the field that would automatically respond by switching to EBCDIC.
This is not sacred, but it is no more sacred than the tradition of
various clients and servers generating parity over Telnet.
What I want is RFC 854, with a clarification of the sentence:
"All remaining codes do not cause the NVT printer to take any action."
I would prefer a reading which guarantees that all 8-bit codes can be
transmitted over a Telnet connection without any negotiation (with
appropriate IAC stuffing and removal), but does not guarantee that
the receiver (client or server) will do anything in particular with
them. As I see it, this allows a simple TERMINAL-TYPE negotiation (or manual
terminal-type control) to open up 8-bit extended functionality on a
case-by-case basis. I feel this is appropriate, since there isn't any
universal interpretation of the codes above 127.
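Concretely, the receive path under this reading removes IAC stuffing but delivers data bytes unmasked. A minimal sketch, where handle_command() is a hypothetical parser for WILL/WONT/DO/DONT and the like:

    #include <stddef.h>

    #define IAC 0xFF

    /* Hypothetical parser for IAC commands; given the bytes after an
     * IAC, returns how many it consumed. */
    size_t handle_command(const unsigned char *cmd, size_t avail);

    /* Unstuffs src[0..len) into dst.  Note the absence of "& 0x7F":
     * data bytes are delivered with the 8th bit intact. */
    size_t telnet_receive(const unsigned char *src, size_t len,
                          unsigned char *dst)
    {
        size_t out = 0;
        for (size_t i = 0; i < len; i++) {
            if (src[i] != IAC) {
                dst[out++] = src[i];          /* 8 bits, unmasked */
            } else if (i + 1 < len && src[i + 1] == IAC) {
                dst[out++] = IAC;             /* doubled IAC = data FFh */
                i++;
            } else {
                i += handle_command(src + i + 1, len - i - 1);
            }
        }
        return out;
    }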
In most of the operating environments I know, a character can have 8
bits. Here, implementing my recommendation is trivial. The host software
has some mechanism for determining what sort of device to generate output
for, and if the user doesn't want to care what kind of terminal he is
using (or emulating), he should be using supdup anyway. Automatically,
or manually, the terminal type is agreed upon, and if that implies 8-bit
I/O, so be it. IAC stuffing continues, but no masking occurs, and the
8th bit is never set unintentionally.
In cases where the OS treats the sign bit as out-of-band data, masking is
required by default. I don't know the history of the "rubout performance
problem", but I can see that a special call is needed for programs that
want only the masking disabled, without the other side effects of 'raw'.
Requiring that 'binary' be negotiated before 8-bit data can be passed over the
connection smacks of the 'telnet-randomly-lose' option: it uses 'binary'
to indicate "I am a conforming implementation, and I won't send you any
parity or the like". This ought to be dealt with by fixing the offenders,
and leaving the user something like a "4BSD/RFC-959 switch" in an FTP
client, so he can request masking during the transition period.
The one widely-implemented use of 'binary' as of this date is as an
indication that "we are in EBCDIC now". I submit that there is a
worthwhile distinction between "NVT ascii with possible extensions",
and "something completely different" (16-bit chars?), and that this
distinction ought to be preserved, whether by my proposal, or at least by
using 'option a' to differentiate between "NVT" and "extended NVT", and
'option b' for larger distinctions (TN3270).
jbvb
James B. VanBokkelen
FTP Software Inc.

MRC@PANDA.COM.UUCP (06/17/87)
James -
Your message leaves me quite confused, and I am the author of several
different Telnet client and server programs as well as part of the Telnet
protocol (Logout and Supdup).
I would assume that any user who issues a command to put the
connection into binary mode would know what the impact of that command will
be, and I would be quite irritated at a Telnet implementation that didn't
leave the choice up to me. Perhaps in the IBM world Telnet binary mode is
used for EBCDIC, but binary mode sure as hell is used for a helluva lot of
other things than that!
Telnet is a 7-bit protocol. Binary is the only general means of
transmitting 8-bit data. There are other means of transmitting specific
8-bit data (e.g. Supdup). Binary should not be interpreted as having any
other semantics (e.g. Unix "raw" mode, TOPS-20 terminal binary mode,
EBCDIC).
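For reference, entering binary mode is an ordinary option negotiation (TRANSMIT-BINARY, option 0, RFC 856), negotiated independently for each direction. A minimal sketch, with send_bytes() standing in for the actual network write:

    #define IAC  255
    #define WILL 251
    #define DO   253
    #define TELOPT_BINARY 0      /* TRANSMIT-BINARY, RFC 856 */

    /* Hypothetical transport hook. */
    void send_bytes(const unsigned char *buf, int len);

    /* Ask to transmit binary; the peer accepts with IAC DO BINARY. */
    void request_binary_output(void)
    {
        unsigned char req[3] = { IAC, WILL, TELOPT_BINARY };
        send_bytes(req, 3);
    }

    /* Ask the peer to transmit binary; it accepts with IAC WILL. */
    void request_binary_input(void)
    {
        unsigned char req[3] = { IAC, DO, TELOPT_BINARY };
        send_bytes(req, 3);
    }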
It is absolutely not trivial to implement your "recommendation"; there
are a whole slew of issues involved in 8-bit transmission, particularly
with regard to terminals which expect or use parity as opposed to
terminals which do not (or have user control of the 8th bit as in an "EDIT"
or "META" key).
I for one will firmly resist any attempt to change the semantics of
the Telnet protocol. EBCDIC is NOT, repeat, NOT the only user of Telnet
binary mode. If EBCDIC is really getting extensive usage, then it should
be set up as its own option. Binary mode should be used for EBCDIC only
between consenting hosts!! I never heard of the TN3270 developers claiming
the right to take over existing protocol!
There is an ISO recommendation for large character sets; it involves
14-bit characters built from pairs of bytes limited to the range 21h
through 7Eh (that is, the printing ASCII characters). All control
characters and delete are always interpreted as single 7-bit
characters. I'm most familiar with their usage in Japanese kanji
terminals; in those, an escape code is used to shift between ASCII and
the 14-bit set. This could be a Telnet protocol option, but all
terminals use the same escape code so there doesn't seem to be much of
a point to doing so.
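To make the encoding concrete (assuming the ISO 2022-style two-byte sets, such as the JIS kanji set, are what is meant here): each half of a character is a printing ASCII byte, so the set holds 94 x 94 = 8836 code points, addressable in 14 bits. A sketch:

    /* Each half of a two-byte character is a printing ASCII byte. */
    int is_twobyte_half(unsigned char c)
    {
        return c >= 0x21 && c <= 0x7E;
    }

    /* Packs a byte pair into a single 14-bit code point (0..8835). */
    int twobyte_code(unsigned char hi, unsigned char lo)
    {
        return (hi - 0x21) * 94 + (lo - 0x21);
    }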
You refer to the "Telnet Randomly-Lose Option", but do you know what
it was? I know because I wrote it; it was an April Fool's Joke ten years
ago having to do with user control over system crashes. It has nothing to
do with your discussion that I can tell.
-- Mark --
-------

JBVB@AI.AI.MIT.EDU.UUCP (06/18/87)
Mark: There is obviously some history I am not familiar with here. I look at the Telnet specification, and I see a perfectly good 8-bit transport. I see global definitions for slightly more than half of the possible characters. I see a 'binary' option defined as 'escape to mutually agreed-upon 8-bit datastream'. I know of one fairly well-defined use of 'binary' (TN3270, with 5 different server implementations and 6 or 7 clients). You indicate there are other uses, but I have no details. You cite your TOPS-20 implementations as examples of a manually-controllable 'binary' option.

My interest in this is to improve the real-world usability of the Telnet protocol, and I realize I am proposing a change. I respect the interest of the authors of the original documents (I was referring to them last night, although I admit I hadn't been checking the authors' names).

The problem: there are a growing number of applications which wish to exchange extended-ASCII data with appropriate display terminals. In every case I am aware of, the application, the server OS, the client OS and the display all understand 8-bit data, and the terminal type information is available to all components.

The obstacle: Telnet clients and servers universally mask off the 8th bit of a received data stream, in spite of the sentence in the RFC which says that codes above 127 will have no effect. Why? Because of a number of clients and servers which take it upon themselves to generate parity to send over the network (which appears to be a total waste of time, but maybe I'm missing something?). 'Binary' mode was designed to solve the problem, but few implementations handle it, and yours are the only ones I am aware of where it is possible to get 'binary' without side effects.

Side issue: serial-line parity. Are tty drivers that pass received parity to application programs common? Are there many OSs where writing 8-bit data to the tty driver confuses its output parity generation? As long as the answers to those questions are 'no', the presence of serial-line parity seems to be reduced to another kind of terminal-type mismatch peculiar to 8-bit datastreams.

Solution 1: Require that all clients and servers negotiate 'binary' before passing an 8-bit datastream.

Advantages: The feature need only be dealt with on specific systems that understand it. Of course, the rest of the world must continue to mask the parity off, anyway...

Disadvantages: The user and/or the 8-bit application must request 8-bit mode (presumably the user, manually), and the client and server programs both need hooks, and some more code. Any useful automatic operation (which you don't seem to like) requires 'binary' and 'xx', so extended ASCII can be differentiated from 3270, etc. TN3270 has this, although not all implementations conform completely. TN3270 is already fully automatic in most implementations. Do the other 'binary' dialects have similar behavior?

Solution 2: Require the elimination of implementations that generate meaningless 8-bit data, and allow clients and servers to pass all 8 bits between the application and the tty driver by default.

Advantages: In the best case (possibly a PC client), implementation consists of the removal of 1 line of code. This is the case wherever the applications and the tty driver can take care of themselves (no more bothered by 8-bit data than by random control characters). If the OS can't hack it, then leave the existing masking code in, and write a hook to bypass it when requested by the application or user (a sketch of this arrangement follows the message). Amenable to integration with automatic or semi-automatic terminal type selection, which can take place in the client and server OS, without hooks into the Telnet (which is likely to be a layered product, not always available from the OS vendor). The 8-bit application is less likely to have to understand about networks, too.

Disadvantages: Everybody needs the mask operation, and the hook to bypass it, until the parity-senders get upgraded. Of course, by my reading, they're already in violation of the RFC. Requires issuance of a new RFC and Mil-Std. Tinkers with something that's already working (except for people like the original poster).

Axe-to-grind: I want to provide better functionality so more people will use TCP/IP and give me more money. Better functionality is nothing without interoperability, so I am motivated to push for what looks to me like the solution that requires less work from the average developer or vendor. I recognize that you've already done your part of your solution, but your posting implied that few others have done it (or done it right, anyway).

I wish to become better informed, in part because I am participating in the effort to standardize 3270-mode. I would like to know details of other uses of 'binary' mode, and options used in conjunction with 'binary' mode. I would also like to hear other implementors' approaches to solving the original poster's problem, and if I have misunderstood yours (request binary manually, whereupon client and server do some checking and agree upon it, and then the user can start the 8-bit application), please clarify.

jbvb
James B. VanBokkelen
FTP Software Inc.
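The sketch promised above: the "1 line of code" is presumably the parity mask itself, so Solution 2 amounts to making it conditional. The pass8 flag here is hypothetical, set by an application or user command:

    /* Per-connection receive filter: mask by default, with a hook to
     * bypass the mask.  "pass8" is a hypothetical flag. */
    unsigned char deliver_byte(unsigned char c, int pass8)
    {
        if (!pass8)
            c &= 0x7F;   /* the one line Solution 2 makes conditional */
        return c;
    }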
MRC@PANDA.COM.UUCP (06/18/87)
James -
There are several uses of Telnet binary mode, all of which
predate TN3270. Here are some I can think of off the top of my
head, at half past midnight after a long long work day:
1) many terminals have an EDIT key which allows applying the high
order bit to a character, and several display editors, most
notably various implementations of EMACS, use this facility to
have commands (e.g. CTRL/F means move forward character, but
EDIT/F means move forward word) way up there. At least two
display editors, TVEDIT and E, are severely crippled without
this facility.
2) some terminals have an 8-bit character set, most notably DEC
VT220's, and there are a helluva lot of people who use the
characters up there!
3) from time to time, many users want to transmit 8-bit data as
part of a download or upload sequence, e.g. using the Kermit
or Modem protocols running the Kermit/Modem server on a remote
host.
4) from time to time, users want to run other 8-bit protocols
such as UUCP over a Telnet link (don't laugh, in a US/Japan
link I am quite familiar with two Unix systems mail to each
other through a DEC-20, and two DEC-20's mail to each other
through a Unix, for various ugly technical reasons I won't go
into here).
5) from time to time, users want to experiment with enhanced
protocols at a level above Telnet, and use Telnet as a base to
support these protocols. TN3270 probably had its start in
this fashion.
However!! There are many terminals which generate (or
require) parity when dealing with a host. A user who wants to
send 8-bit data over a Telnet link generally knows what he's
doing and can give a command (whether it's CTRL/^ T, TRANSPARENT,
@ B I S, or whatever is unimportant). A user who has problems
with parity on his terminal generally doesn't know WHAT the hell
is going on!
I don't know what you are talking about when you say "a
number of clients and servers which take it upon themselves to
generate parity to send over the network." Parity is NOT sent
over the network. That is what the whole point of requiring
7-bit ASCII in non-binary mode is all about! If you put your
Telnet connection into binary mode, you're essentially saying
that you are NOT using parity in that direction (or directions) of the
connection, and that the high order bit is a meaningful data bit.
Telnet binary mode has nothing whatsoever to do with any
host OS software concept of "binary." Implementations that have
such a binding without the consent of the user are in error.
Telnet binary mode only impacts the transmission of 8-bit data.
Let me also re-emphasize that you can do everything you want
in Telnet WITHOUT making incompatible changes to the existing
protocol or implementations. All you have to do is define a new
option. For example, you can add a TN3270 option. Nothing in
the protocol says that binary mode is the *only* way to transmit
8-bit data. Already there are two options, Supdup and Supdup
Output, which cause 8-bit I/O independently of binary mode.
Think of binary mode as "untyped 8-bit I/O" as opposed to
whatever other options you define as being "typed 8-bit I/O."
This also has the advantage of making damn sure both sides agree
to the interpretation of the I/O mode you are proposing.
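A minimal sketch of what defining such a new option looks like on the wire; the option number is invented for illustration (a real number would be assigned in a new RFC), and send_bytes() is the same hypothetical transport hook as in the earlier sketch:

    #define IAC  255
    #define WILL 251
    #define TELOPT_TYPED_8BIT 99    /* NOT a real assignment */

    void send_bytes(const unsigned char *buf, int len);

    void offer_typed_8bit(void)
    {
        unsigned char req[3] = { IAC, WILL, TELOPT_TYPED_8BIT };
        send_bytes(req, 3);
        /* Interpretation changes only after the peer answers IAC DO:
         * the "consenting hosts" property described above. */
    }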
I suspect this conversation is boring a lot of people, so I
suggest it be taken off-line from TCP-IP unless people want to
hear it.
-- Mark --
-------

braden@BRADEN.ISI.EDU (Bob Braden) (06/18/87)
The one widely-implemented use of 'binary' as of this date is as an
indication that "we are in EBCDIC now".

Sorry, but I don't think this is true. The one widely-implemented use of binary is tn3270, which is NOT (I recall saying this, I think on this same mailing list within the last 6 months!), repeat, NOT, EBCDIC. It is a structured glop containing bit fields and (yes) some character strings encoded in EBCDIC. But the only reasonable representation is BINARY.

Bob Braden
Rudy.Nedved@H.CS.CMU.EDU.UUCP (06/20/87)
James,

There are enough popular programs that assume the top bit is off that assuming otherwise will put you in trouble. On the other hand, if you can write off those people or can get the software "fixed", then you win. CMU CS/RI cheats and uses the 8th bit. We have been yelled at several times, but because the people involved are using wimpy IBM PCs, we win, since we do not care about PCs (and the software people use is locally distributed with CMU mods). We have lost in situations where the specification did not define who was right and who was wrong... and making the world continue to work was a higher priority than cleaner and faster local functionality.

However, there are lots of PCs out there, and they do not hang off of a network like CMU's, Stanford's or MIT's. Therefore, if you go the way you are going... you will find lots of unhappy people... and these are the people that recommend your software (and they hate changing software that seems to work, so changing the spec in retrospect is not going to get the software fixed).

The result is still the same as what Mark Crispin says. You should have a command that gives them the option of a full 8-bit connection, and then you allow them to have a work-around. People will complain that it is not automatic... but they can still get their work done, and your customer service people can actually still give them an answer. If you force binary on the sly... well, you will hit implementations that also do weird binary action on the sly... different from yours... and you can not complain, since you are wrong too (and they will undoubtedly have the same type of arguments as you, since the specification in the long term was a design judgement).

All in all, the end is still the same: the TOPS-20 implementation is the best one around (in terms of correctness; I will argue about performance) and you can not change the specification... too many installed bases of software obey it closely enough. You should use the TOPS-20 telnet server as part of your certification process... and you should provide work-arounds for the clients who are talking to systems that are not quite on the ball.

-Rudy