[comp.protocols.tcp-ip] TELNET and 8-bit ASCII codes

grn@stl.stc.co.UK (Gary Nebbett) (02/16/89)

Should a computer and terminal communicating via TELNET and capable of
exchanging 8-bit ASCII codes (say, a VT220 and a VMS system) negotiate BINARY
TRANSMISSION mode?

The Draft Host Requirements RFC states that the high-order bit should not be
set in NVT mode.

Regards,
        Gary Nebbett           STL, London Road, Harlow, Essex  CM17 9NA, UK
        Voice +44-279-29531  Email grn@stl.stc.co.uk | PSI%234237100122::GRN

mrc@SUMEX-AIM.STANFORD.EDU (Mark Crispin) (02/19/89)

In a word, yes: if you want to transmit 8-bit ASCII codes, you should
use network binary mode.  Note that network binary mode also turns off
the special handling of CR; the concept of the "Telnet newline" only
exists when binary mode is off.  Also, remember that binary mode is
unidirectional, so if you want a bidirectional binary stream (e.g. for
using the international character set on a VT220) you must negotiate
binary mode in both directions (IAC DO BINARY + IAC WILL BINARY).
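
For concreteness, here is a minimal sketch in C of the two requests just
described.  The option constants come from the TELNET spec (RFC 854/856);
the function name and the already-connected socket descriptor fd are only
illustrative, and a real client must still parse the peer's WILL/WONT and
DO/DONT replies before treating the stream as binary.

    #include <unistd.h>

    #define IAC    255   /* "interpret as command" escape           */
    #define WILL   251   /* sender offers to enable an option       */
    #define DO     253   /* sender asks the receiver to enable it   */
    #define BINARY   0   /* TRANSMIT-BINARY option (RFC 856)        */

    static int request_binary_both_ways(int fd)
    {
        /* IAC DO BINARY:   "please send me 8-bit binary"           */
        /* IAC WILL BINARY: "I would like to send you 8-bit binary" */
        unsigned char req[] = { IAC, DO, BINARY, IAC, WILL, BINARY };

        if (write(fd, req, sizeof req) != (ssize_t) sizeof req)
            return -1;
        return 0;
    }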

Examples of operating systems which support this are WAITS (the
system at SAIL.STANFORD.EDU), ITS, and TOPS-20.  The most important
use is to support 8-bit keyboard controls (the so-called "EDIT" key)
or to turn off local controls for people coming in from a TAC.  The
PANDA version of TOPS-20 also supports VT220 characters as one of
its extensions.

Beware!!  Many Unix Telnet servers tie network binary mode to the
internal Unix concept of "raw" vs. "cooked" terminal modes.  Arguably,
this is a bug or at least a misfeature, but it's much too late to hope
for a fix.  At least some Unix Telnet servers will process 8-bit
characters even if the stream is not in binary mode.  Also, many
terminals send parity in the high-order bit.  For these reasons, it is
not safe for a Telnet client to enter binary mode on its own without
explicit direction from the human user or from the Telnet server,
since there is no "right" setting that is guaranteed to work on all systems.
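
As a rough illustration of why the high bit matters, here is a hedged
sketch of the sort of input filtering a client might apply; the function
name and the binary_mode flag are invented for this example, and real
implementations differ in how they track the negotiated state.

    #include <stddef.h>

    static void filter_incoming(unsigned char *buf, size_t len,
                                int binary_mode)
    {
        size_t i;

        if (binary_mode)
            return;             /* 8-bit clean: pass bytes through  */

        for (i = 0; i < len; i++)
            buf[i] &= 0x7F;     /* NVT: strip the parity/high bit   */
    }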

-------

dnwcv@dcatla.UUCP (William C. VerSteeg) (02/21/89)

One important thing to note about this discussion is its impact on user
transparency. Ideally, a user should not have to know what underlying
mechanism connects him to his host; during data transfer it should
appear to be a direct cable link. Telnet, as implemented under many
Un*x systems, causes problems in this respect. Take, for instance, a
user who is using his PC as an ASCII terminal (YECH) and connecting to
several hosts through a multiplexer that speaks several protocols.
This user is going to expect to be able to do file transfers over his
async connection. If this user gets a real async hookup to his UN*X
box, this works fine. If, however, he happens to be sent to a Telnet
session, it doesn't work.

In the days when a user knew that his connection was going to be via
Telnet, this may have been acceptable. But, in these days of 
heterogeneous networks, a non-technical user doesn't want to hear
about the details of implementations. This is a call to end the days
of "Well, UN*X is supposed to have these quirks. It is a programmers'
environment." I think we need to re-think our attitudes and implement 
our systems with the non-technical end user in mind. The days of 
saying that UN*X bugs are part of the deal should be over.

Let's face it: UN*X should not say that it will send 8-bit data when asked
to do so, and then send 7-bit data. (NOTE: this is what SUN 3.5 does.)
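
A hedged sketch of the kind of check being asked for: after the server
has agreed to binary mode, send a probe pattern with high bits set,
capture what comes back (say, from a remote cat echoing its input), and
verify that nothing was flattened to 7 bits. The buffer names and the
function are invented here purely for illustration.

    #include <stddef.h>

    /* Returns 1 if every probe byte came back with its high bit
       intact, 0 if the path stripped the data down to 7 bits. */
    static int path_is_8bit_clean(const unsigned char *sent,
                                  const unsigned char *echoed,
                                  size_t len)
    {
        size_t i;

        for (i = 0; i < len; i++)
            if ((sent[i] & 0x80) && !(echoed[i] & 0x80))
                return 0;       /* high bit lost somewhere en route */
        return 1;
    }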


Wishfully speaking
Bill VerSteeg