[comp.protocols.tcp-ip] telnet SUPRESS-TELNET option...

BILLW@MATHOM.CISCO.COM (William Westfield) (07/13/88)

Well, the conversation has died down somewhat, so it's time to
respond to several people's comments.

braden@VENERA.ISI.EDU complains:

    There are Telnet controls (eg IP, AO, AYT...) that can appear at any
    time in the stream, and a correct implementation must check for them.

Yes, Yes, I understand all that.  What I am proposing is that a telnet
server/client be able to say "I promise that I will never send any more
option negotiations OR telnet controls.  In effect, I am turning off the
telnet protocol, and from now on will be sending you just data."  Most
servers certainly never send IP, AYT, etc, and my experience says that
AO is not very effective.  Most clients do not trap signals locally
and send the equivalent telnet control (though this may change ala the
Cray local edit option).  The actual options would look something like:

    DO SUPPRESS-TELNET requests that the receiver no longer use the telnet
	protocol.

    WILL SUPPRESS-TELNET offers to no longer use the telnet protocol.

    DON'T SUPPRESS-TELNET demands that the receiver continue to use telnet.

    WONT SUPPRESS-TELNET demands to continue using the telnet protocol.

The only strangeness is that once the option is successfully negotiated,
there is no way for it to be turned off.


mcc@ETN-WLV.EATON.COM (Merton Campbell Crockett) asks:

    Are you suggesting that one eliminate line mode and only use binary
    mode?  Or are you suggesting that there be an integration of TELNET
    and the local systems terminal driver?

No, line mode would still be there.  If line mode was in effect before
the SUPPRESS-TELNET option was negotiated, it would remain in effect.
I was suggesting that SUPPRESS-TELNET would provide the biggest gain
in binary mode, since in addition to no longer scanning the datastream
for IACs, you would also no longer have to handle special end-of-line
conditions (eg CR-NUL --> CR).

The suggestion has nothing to do with integrating telnet into the
local terminal driver.  However, one of the spinoff features of the
option would be that this would become much easier.  (A program could
do the initial option negotiation, and then just "attach" the tcp
connection to the terminal driver.  The terminal driver would no
longer have to know anything about the telnet protocol.)


    From: Doug Nelson <08071TCP%MSU.BITNET@cunyvm.cuny.edu>

    Why use Telnet if you don't want it?  It certainly isn't that difficult
    to keep scanning for IAC, and Telnet is certainly not a very complex
    protocol.  And you always have the option of rejecting any changes in
    options if you don't want to deal with them.

The reason for using telnet is that it is a standard protocol, and has the
capability of negotiating things (eg local echo) that I need to
negotiate.  The reason for getting rid of the telnet protocol after I am
done with negotiating these options is that scanning for IACs is more
difficult than you think.  The telnet protocol requires that the telnet
process look at every character.  While this may not be much to a telnet
server whose operating system eventually has to look at every character
anyway, it can make a big difference to something like a terminal server.
One of the motivations for my original message was that we have recently
been working on improving performance in the cisco Terminal Server.  After
this work, I noticed that our TTY-daemon process (this is the process that
would feed a printer from a tcp connection, for example) used a factor of
50 (fifty!) less CPU time on a TCP stream than it did on a telnet connection
(using straight-forward implementations of each - the stream dealt with
things a segment at a time, and the telnet dealt with them a character at
a time).  Admittedly, there are obvious improvements we can make to the
telnet process, but the fact that the obvious implementation is so bad
points strongly to a place in the protocol where improvements can be made.

    In what circumstances would you want to use this feature?  In a general
    interactive environment, it would seem like you'd want to keep your
    options open.

At cisco, we have half a dozen or so 38.4kbps graphics terminals that spend
their time connected to a CAD package running on a DEC20.  Once the connection
has been set up, no telnet options are negotiated, and no telnet escapes are
sent.  We also have two laserwriters running at 38.4kbps.  Although one
usually talks tcp streams to these, telnet can be used for interactive
purposes.  Many of our customers use our terminal servers in "milking
machine" mode, where telnet sequences are never sent after initial
negotiations.

    What concerns me, though, is that some Telnet implementations apparently
    assume that no more options will be negotiated after the startup, and then
    stop working when they encounter software that sends them, such as echo
    toggles for password suppression.

Really?  I didn't know of any vendors that did this.  Care to name names?
The SUPPRESS-TELNET option is clearly more valuable to systems that operate
in remote-echo, character-at-a-time mode.  This is most systems, however.
It would be an OPTION that could be refused, and it would operate
independently in each direction.


    From: Mark Crispin <MRC@PANDA.PANDA.COM>

    The performance problem you refer to (2 process switches/character) is
    an artifact of the design of the Telnet server and operating system and
    not a problem in the Telnet protocol itself.

    In WAITS, Tenex, and TOPS-20, the Telnet server is in the same context
    as the terminal driver (that is, it is part of the operating system).

Which is not to say that tops20 virtual terminals could not be made much
more efficient if they didn't have to look for IACs and such.  The tops20
terminal driver is not exactly an example of efficiency - every character
is carefully examined and massaged several times on both input and output.
If it had to do extra process wakeups in addition to this, it would be
even worse.

Bill Westfield
cisco Systems
-------

mcc@ETN-WLV.EATON.COM (Merton Campbell Crockett) (07/15/88)

While I can see some justification for using TELNET for your CAD/CAM
terminals, I don't see why you would want to use TELNET for a print server.  I
would think a preferable procedure would be to use FTP to transfer files to
the print server and let the print server print the files at its leisure
using any algorithms it wishes to set priorities on the jobs submitted--unless
it is *not* a print server but just a common printer attached to a terminal
port.

Willett@SCIENCE.UTAH.EDU (Lon Willett) (07/16/88)

William Westfield <BILLW@MATHOM.CISCO.COM> proposes a SUPPRESS-TELNET
option as follows:

>    DO SUPPRESS-TELNET requests that the receiver no longer use the telnet
>	protocol.
>
>    WILL SUPPRESS-TELNET offers to no longer use the telnet protocol.
>
>    DON'T SUPPRESS-TELNET demands that the receiver continue to use telnet.
>
>    WONT SUPPRESS-TELNET demands to continue using the telnet protocol.
>
>The only strangeness is that once the option is successfully negotiated,
>there is no way for it to be turned off.

There is another strangeness here: it is (almost) impossible to turn off
TELNET protocol in both directions, since once it has been negotiated in
one direction, negotiation can't be completed in the other (the
exception to this isn't worth mentioning).  The correct strategy is to
turn it off in both directions once it has been negotiated in a single
direction.

He then makes a case for greatly improved efficiency, which I don't
quite buy.  He writes:

>Admittedly, there are obvious improvements we can make to the
>telnet process, but the fact that the obvious implementation is so bad
>points strongly to a place in the protocol where improvements can be made.

Does the necessity of checking for an IAC really account for much of the
overhead?  I was under the impression that the primary overhead involved
in TELNET is the amount of network servicing that must be done.  For
example, on this machine, the telnet server will receive the data,
usually a single character, wrapped in a TCP packet wrapped in an IP
packet wrapped in an ethernet packet.  So the steps in processing are:

	1 -- receive the ethernet packet from the ethernet interface,
	and determine that it is meant to go to the IP driver, and turn
	it over to the IP driver.

	2 -- the IP driver determines the packet is valid (destination
	host is this one, header checksum is OK, etc), and that the
	packet is meant for the TCP driver, and turns it over to the
	TCP driver.

	3 -- the TCP driver does all its processing (sequencing,
	checksum, etc.), then turns it over to the TELNET driver.

	4 -- the TELNET driver checks for IACs, etc, and (assuming it is
	not an option negotiation) passes it to the terminal driver.

	5 -- the terminal driver does its stuff.

Obviously, it is preferable to do all this with no process context
switches.  But as long as the TELNET user process sends data one (or a
few) character(s) at a time, it seems like it will be very inefficient.
All the TCP sequencing, IP processing, and calculations of 2 checksums
would seem to dwarf the time spent scanning for IACs.

So yes, the protocol has flaws.  The main one is that TCP/IP (or any
protocol which is based on sending fairly large, complex packets) is not
a good way to handle terminal I/O.  The *right* way to deal with this is
to do the editing locally, so that the data can be sent a block at a
time, instead of a character at a time.  Look at the plethora of TELNET
options designed to do just that.

Finally:
>The suggestion has nothing to do with integrating telnet into the
>local terminal driver.  However, one of the spinoff features of the
>option would be that this would become much easier.  (A program could
>do the initial option negotiation, and then just "attach" the tcp
>connection to the terminal driver.  The terminal driver would no
>longer have to know anything about the telnet protocol.)

True, this option would make it easier to integrate the TELNET
processing into the terminal driver.  This strikes me as being
the major justification which could be given for such an option.  But as
long as your terminal driver is being modified to handle a TCP
connection at all, it seems that it wouldn't be difficult to also
include a simple minded TELNET implementation.

--Lon Willett (Willett@Science.Utah.Edu)
-------

hedrick@athos.rutgers.edu (Charles Hedrick) (07/17/88)

The reason searching for IACs is a performance issue is that without
it one need not look at individual characters.  Billw works for cisco.
They make terminal servers.  Their terminal servers are not bound by
performance limitations in the Berkeley implementation.  I don't know
how far Billw has gone in optimization, but in principle, they could
get the Ethernet interface to DMA a packet into memory, do a bit of
header processing, and then hand the packet to a DMA serial output
device, without ever looking at the characters at all.  Obviously this
isn't an issue for echoing single characters.  But for screen
refreshes, output on graphics terminals, output to printers, etc., a
reasonable TCP should be able to produce full packets.  Even for the
Berkeley code, processing characters individually could matter.  It is
true that with a straight-forward implementation, there are a number
of context swaps.  But for large amounts of output data, you should be
able to get the whole screen refresh, or at least a substantial
portion of it, in one activation of telnetd.  In that case, the
efficiency with which it can process the characters may matter.  We
did see noticeable differences in CPU time used by telnetd vs rlogind
before we put all of that stuff in the kernel.  In principle, rlogind
can simply do a read from the pty into a buffer and a write from the
same buffer to the network, where telnetd must look at the characters.

I'm not saying whether I think it's a good idea to have an option that
disables IAC checking.  But I can certainly see why Bill believes
there are performance implications.  My guess is that a carefully
tuned implementation can minimize those implications.  (e.g. one could
use something like index(buffer,IAC) to see whether there is an IAC
present, and then work at tuning index in assembly language.)  It's
always a matter of judgement as to whether it is a better idea for the
protocol design to encourage that kind of trickiness or not.  The
tradeoff of course is that to prevent it we complicate the protocol,
and make it likely that implementors won't bother to tune the case
where IAC's are left enabled.

karn@thumper.bellcore.com (Phil R. Karn) (07/20/88)

In my PC implementation, the Telnet receive code first scans a receive
buffer for the IAC character using the memchr() C library routine. This
is a fast binary search routine implemented in assembler; it is
analogous to the strchr() (aka index) routine for ascii strings. If the
search fails, the entire buffer is written directly to the screen
driver. If the search succeeds, then the usual character-by-character
processing is done, and stdio output buffering keeps the number of
output driver calls to a reasonable minimum.

This buys a little, but not all that much since the PC's screen output
routine probably accounts for most of the CPU cycles anyway.

Phil

gc@EWOK.AMD.BNL.GOV (Graham Campbell) (07/20/88)

	From: thumper!karn@faline.bellcore.com  (Phil R. Karn)
	Organization: Bell Communications Research, Inc
	Subject: Re: telnet SUPRESS-TELNET option...
	To: tcp-ip@sri-nic.arpa
	
	In my PC implementation, the Telnet receive code first scans a receive
	buffer for the IAC character using the memchr() C library routine. This
	is a fast binary search routine implemented in assembler; it is
	          ^^^^^^^^^^^^^
Nooo, binary search works on ordered lists, you don't sort the buffer first
do you? :-)
	analogous to the strchr() (aka index) routine for ascii strings. 
	 ...............
Graham

karn@THUMPER.BELLCORE.COM (Phil R. Karn) (07/20/88)

Sorry, my wording was misleading. I meant to say that memchr() is a
linear search function that operates on arbitrary binary data, as
opposed to strchr() or index(), which operate only on C-style ascii
character strings.

Phil