[comp.protocols.tcp-ip] TCP/IP terminalservers and BREAK

whna@cgcha.uucp (Heinz Naef) (10/25/88)

Assume a VT-100-type terminal accessing a UNIX host in the following ways:
(1) via a TCP/IP terminalserver networked with the target host
(2) via an asynchronous line interface integrated in the target host

There is a significant difference in how a BREAK (CTRL-C) condition is
handled:
In case (1) the terminalserver (3Com/Bridge LS/1, Cisco xSM) continues to
empty its buffer towards the terminal.
In case (2) the output to the terminal stops immediately.

On a UNIX system, try to cat /usr/dict/words with the two attachments
described above. In case (1), tens or hundreds of pages will be displayed
after hitting BREAK (or ^C), which users find hard to accept.

What is the reason for this different behavior? Is there no way to
"roll back" the current buffer's worth of packets upon receiving a BREAK
and simply flush the buffer?

Thanks in advance for any comments.

Regards,
Heinz Naef, c/o CIBA-GEIGY AG, R-1032.5.58, P.O.Box, CH-4002 Basel, Switzerland
UUCP: cgch!whna - Internet: whna%cgch.uucp@uunet.uu.net
BITNET: whna%cgch.uucp@cernvax.bitnet

hedrick@athos.rutgers.edu (Charles Hedrick) (10/27/88)

whna@cgcha.uucp (Heinz Naef) asks why ^C doesn't stop output when you
are logged into a system via a terminal server.  The most likely
answer is that your host, your terminal server, or both do not
implement telnet sync.  Output keeps coming because it has been
buffered.  To get good performance, data is aggregated into
1.5K segments.  Several KBytes of such segments may be sent at a time.
Meanwhile the terminal server is parcelling data out at a measly 9600
baud or whatever.  If the host stops sending stuff, the terminal
server may not get anything new, but there's this 10K or so of data
already in the pipeline (both on the terminal server and in the host).
There is a way to stop this.  What is supposed to happen is the
following: When a host wants to flush output, it sets the "urgent" bit
in the TCP headers.  This bit will be set in the next packet sent from
the host.  It takes effect on the terminal server as soon as such a
packet arrives.  It is not necessary to wait until the terminal server
gets to that packet in the course of dumping data to your terminal.
Its effect occurs "out of band."  As soon as the terminal server sees
a packet with this bit, it is supposed to stop output, throw away all
output in its buffers, and start ignoring new packets.  This continues
until it sees a special "sync" signal.  The sync is put into the data
stream where new data starts and output is supposed to resume.  If
both ends implement this properly, you will still get a bit of overrun
when you type ^C, because it does take some time for the ^C to get to
the host and the response to get back. But it will no longer go on for
pages.  I am reasonably sure that both the Bridge CS/1 and cisco ASM
implement this, so the problem is most likely in your host TCP/IP
implementation.  4.2 didn't do sync at all.  The initial 4.3 did it at
only one end (I think user telnet but not server, though I may have it
reversed).  I believe the latest 4.3 gets things right, but probably
most vendor implementations haven't updated to that release yet.
/usr/dict/words is a worst-case test, because it consists of very short
lines.  So a delay of a couple of seconds can result in 10 pages going by.
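(An editorial sketch, not part of the original post: the flush-until-Data-Mark
behavior described above can be modeled in a few lines of Python.  IAC and DM
are the RFC 854 telnet constants; the single `stream`/`urgent_at` model is a
stand-in for a real socket with out-of-band urgent notification.)

```python
# Model of the terminal-server side of telnet sync: once the TCP urgent
# notification arrives, everything already buffered is thrown away and
# incoming data is skipped until the Data Mark, where output resumes.
IAC = 0xFF  # telnet "Interpret As Command" escape (RFC 854)
DM  = 0xF2  # Data Mark: marks the end of the synch stream

def deliver(stream, urgent_at):
    """Return the bytes the terminal actually sees.

    stream    -- concatenated TCP data from the host
    urgent_at -- offset at which the urgent notification takes
                 effect, or None if no urgent data was signalled
    """
    if urgent_at is None:
        return stream                 # normal case: deliver everything
    # Flush buffered output and skip ahead to the IAC DM marker.
    mark = stream.find(bytes([IAC, DM]), urgent_at)
    if mark < 0:
        return b""                    # no DM seen yet: still discarding
    return stream[mark + 2:]          # output resumes after the Data Mark
```

With this model, typing ^C during a long cat costs at most a round trip's
worth of output rather than the whole buffered backlog.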

jam@RADC-LONEX.ARPA (10/27/88)

> From: mcvax!cernvax!cgch!cgcha!whna@uunet.uu.net  (Heinz Naef)
> 
> Assume a VT-100-type terminal accessing a UNIX host in the following ways:
> (1) via a TCP/IP terminalserver networked with the target host
> (2) via an asynchronous line interface integrated in the target host
> 
> There is a significant difference in how a BREAK (CTRL-C) condition is
> handled:
> In case (1) the terminalserver (3Com/Bridge LS/1, Cisco xSM) continues to
> empty its buffer towards the terminal.
> In case (2) the output to the terminal stops immediately.
> 
> On a UNIX system, try to cat /usr/dict/words with the two attachments
> described above. In case (1) tens, hundreds of pages will be displayed
> after hitting BREAK (or ^C), which is considered a problem of acceptance.
> 
> What is the reason of this different behavior? Would there be no way to
> "rollback" the current buffer's worth of packets upon receiving a BREAK
> and just flush the buffer?

	This is something you will probably never see fixed.
	On an async line the ^C gets back to the host almost at
	once.  On slow lines you can sometimes see a few extra
	characters pumped out before everything stops.

	Terminal servers use telnet (in most cases) to handle the
	connection to the host computer.  Since the data is flowing
	over the network, a lot more of it can be queued up in packets
	while your ^C is trying to get back to the host.  The telnet
	protocol does not explicitly recognize ^C, and has no
	mechanism for flushing subsequent incoming packets after
	it is typed.

	But think about this:

	What if the user changed the interrupt character?  The protocol
	would have to be informed of the change.  Also, special
	characters (like ^C) often depend on the operating system.
	Keeping track of which special characters exist on different
	systems might make telnet a nightmare!

	But more important: if telnet started flushing packets after
	it saw a ^C, what would tell it to stop?  The telnetd on
	the host would have to notice the interrupt and send some kind
	of trailing sequence to the remote telnet to stop the flushing.

	I'm not saying it can't be done.  In fact, it might be a good
	idea (re: the rlogin vs. telnet discussion).  But it probably
	won't happen anytime in the near future.


Sorry to be discouraging!
Joel

dab@opus.CRAY.COM (Dave Borman) (10/28/88)

To fix a telnet client so that you can flush output when you type ^C
is really not very hard, and it can be done without modifying the telnet
server.  When the client sends the IAC IP (sent when the user types ^C
or whatever the interrupt character is; if your telnet doesn't send
IAC IP, then since you're changing it anyway, add a user option to say
which character should be translated to IAC IP), also send an IAC DO
TIMING-MARK.  Then throw away all data until you see an
IAC WILL/WONT TIMING-MARK, at which point you resume output.  All telnet
servers should respond to the DO TIMING-MARK, regardless of whether they
support it (if they don't, you have a truly brain-dead telnetd
implementation; even 4.2 responds).

The only problem with this fix is that, depending on how long the pipe
is, you may sit there for several seconds while the output is being
discarded, but at least it won't scroll off the screen.  A good telnet
implementation should support both TIMING-MARK and SYNC, so
that you can choose whichever works best.
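(A minimal Python sketch of this client-side scheme, not from the original
post: option number 6 is TIMING-MARK per RFC 860, and the real socket I/O
is elided.)

```python
# Client-side fix: on interrupt, send IAC IP plus IAC DO TIMING-MARK,
# then discard server output until the IAC WILL/WONT TIMING-MARK reply.
IAC, IP, WILL, WONT, DO = 255, 244, 251, 252, 253
TIMING_MARK = 6   # telnet TIMING-MARK option (RFC 860)

def interrupt_request():
    """Bytes the client sends when the user types the interrupt char."""
    return bytes([IAC, IP, IAC, DO, TIMING_MARK])

def filter_until_reply(incoming):
    """Discard everything up to the server's WILL/WONT TIMING-MARK
    reply; return the output the user should actually see."""
    hits = [incoming.find(bytes([IAC, verb, TIMING_MARK]))
            for verb in (WILL, WONT)]
    hits = [h for h in hits if h >= 0]
    if not hits:
        return b""                    # no reply yet: keep discarding
    return incoming[min(hits) + 3:]   # resume after the reply
```

Note that it does not matter whether the server answers WILL or WONT;
either reply marks the point in the stream after which output is fresh.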

			-Dave Borman, Cray Research, Inc.

BILLW@MATHOM.CISCO.COM (William Westfield) (10/28/88)

The primary problem is that a terminal server has no way of knowing
which characters are magic.  For example, pretend you come from a DEC
background and think that ^O should start flushing output.  If you type
^O to the terminal server, and it blindly starts discarding output and
sends a telnet abort-output to the host, this might be fine.  On the other
hand, you might be in EMACS, where you expect ^O to create a new line.  Oops.
(I actually used a telnet that handled ^O locally.  It was a pain.)

There is an RFC being written that provides for "local signal handling"
and negotiation of signal characters.  When this RFC is published
and implemented by vendors (both hosts and terminal servers), things
should get a lot better...

Bill Westfield
cisco Systems.
-------

mcc@ETN-WLV.EATON.COM (Merton Campbell Crockett) (10/28/88)

I think the original discussion was more concerned with the continued receipt
of data after a ^C (IAC IP).  Few operating systems will flush data from the
transmit queue on receipt of a ^C (IAC IP), although many will flush data
from the receive queue.  If the intent is to stop the display of data on the
local terminal, then ^O (IAC AO) should be used.  The TELNET server should
then toss the received data into the bit bucket until a SYNC is received
from the remote system.
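(A sketch of the server-side Abort Output handling described above, not from
the original post; the queue and the urgent-send callback are stand-ins for
the real tty output buffer and socket.)

```python
# On IAC AO the server drops its pending output and sends a Data Mark
# as TCP urgent data so the other side can resynchronize (RFC 854).
IAC, AO, DM = 255, 245, 242

def handle_abort_output(transmit_queue, send_urgent):
    """Empty the transmit queue and emit the synch marker."""
    transmit_queue.clear()            # toss everything queued for the terminal
    send_urgent(bytes([IAC, DM]))     # Data Mark, sent with URG set
```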

mcc@ETN-WLV.EATON.COM (Merton Campbell Crockett) (10/28/88)

Just a curiosity: why does EMACS use ^O to create a new line?  It would
seem that the existing ENTER and RETURN keys should work fine.  However, it
does point out a need for hosts and terminal servers to have some mechanism
for establishing which keys are significant and what their implications are.

map@GAAK.LCS.MIT.EDU (Michael A. Patton) (10/28/88)

   Date: Thu, 27 Oct 88 15:51:29 CDT
   From: dab%opus.CRAY.COM@uc.msc.umn.edu (Dave Borman)

   To fix a telnet client so that you can flush output when you type ^C
   is really not very hard, and it can be done without modifying the telnet
   server.  When the client sends the IAC IP (sent when the user types ^C
   or whatever the interrupt character is; if your telnet doesn't send
   IAC IP, then since you're changing it anyway, add a user option to say
   which character should be translated to IAC IP), also send an IAC DO
   TIMING-MARK.  Then throw away all data until you see an
   IAC WILL/WONT TIMING-MARK, at which point you resume output.  All telnet
   servers should respond to the DO TIMING-MARK, regardless of whether they
   support it (if they don't, you have a truly brain-dead telnetd
   implementation; even 4.2 responds).

Except that the client end can't possibly have enough information
about what is and isn't going to cause output to be flushed; only the
server end (and maybe the user, if they know the insides of the server)
can know that.  You can't expect a user to type magic commands
frequently, so for this scheme to work you need new TELNET options
that let the server tell you every time this changes.  These
will have the same implementation problem that GA had (see the parallel
discussion): the server process can't get at the information.
All in all, I think this suggestion only makes the problem harder to
solve right.  Given the underlying TCP functionality, I think the
original Abort Output (AO) is the best we're going to do, except for
the problem of implementing it in some cases.

	Mike Patton, Network Manager
	Laboratory for Computer Science
	Massachusetts Institute of Technology

Disclaimer: The opinions expressed above are a figment of the phosphor
on your screen and do not represent the views of MIT, LCS, or MAP. :-)

SOL@SRI-NIC.ARPA (Sol Lederman) (10/28/88)

In emacs, ^O puts the end-of-line character(s) into your editing buffer and
leaves the cursor where it was before the ^O was pressed.
Hitting Return puts the same end-of-line character into your buffer, but
the cursor is repositioned after the end-of-line character.  The
difference might appear subtle, but it's handy to have a choice of
where the cursor ends up.

Sol

Disclaimer: This is the way ^O seems to work on TOPS-20 emacs. I haven't
checked other versions.
-------

bzs@BU-CS.BU.EDU (Barry Shein) (10/30/88)

From: mcc@ETN-WLV.EATON.COM (Merton Campbell Crockett)
>Just a curiosity?  Why does EMACS use a ^O to create a new line?  It would
>seem that the existing ENTER and RETURN keys should work fine; however, it
>does point out a need for the host and terminal servers to have some mechanism
>for establishing what are significant keys and their implications.

Well, I won't go into defending the choice of key.  (I believe the
mnemonic was "Open line"; perhaps that's satisfying?  Its action is
usually not identical to ENTER/RETURN: it typically moves the cursor to
the beginning of the line and opens a line above or below, whereas ENTER
would just break the line in two wherever the cursor happened to be,
without opening a "fresh" line.)  Hmm, guess that was a defense :-)

As far as "significant keys", one could adapt something like
'termination masks' from other OS's (TOPS-20 and VMS both have
facilities sort of like this I believe but less ambitious.)

Say you list all significant functions as values:

define	TEXT		00	# Echo locally (or might be an or'd in bit)
define	TERMINATE	01	# Send everything buffered or just this char
define	INTERRUPT	02	# like ^C
define	STOPFLOW	03	# like ^S
define	STARTFLOW	04	# like ^Q
define	TOGGLEOUTPUT	05	# like ^O toggle
	..etc..

and so on, and decide there are about 16 or fewer of them (whatever), so
they can be encoded in a 4-bit nibble.  Then you create a table of 256
of these (128 bytes), with an entry from the list above in each position.

Then the client side can use this table to drive how to handle each
character and the server side can pass updated tables whenever
something changes.

Passing 128 bytes is more or less like passing any packet: given
per-packet overhead, making the table much smaller doesn't save much
bandwidth.  (With a few of the following efficiencies you might as well
make it 256 bytes and keep management and lookup easy on byte-oriented
machines.)  One can also imagine a few commands like "just change
position 27 to a TERMINATE" or "TERMINATE+NOECHO ON ALL".  Only a few
of those would be needed to cut down drastically on how often the masks
have to be exchanged, I would guess, since most O/S's have similar
global commands (go into RAW mode).

Now the main challenge left would be to define a set of mask values
everyone can live with.  (It should be do-able, since at worst you send a
TERMINATE+NOECHO ON ALL and have what you have today.)
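(The packing proposed above can be sketched as follows; this is an
illustration, not the poster's code, with one 4-bit class per character
code packed two to a byte, low nibble first.)

```python
# A "termination mask": a 4-bit class for each of 256 character codes,
# packed into a 128-byte table the server could ship to the client.
TEXT, TERMINATE, INTERRUPT, STOPFLOW, STARTFLOW, TOGGLEOUTPUT = range(6)

def pack(classes):
    """Pack 256 4-bit class values into 128 bytes (low nibble first)."""
    assert len(classes) == 256 and all(c < 16 for c in classes)
    return bytes(classes[i] | (classes[i + 1] << 4)
                 for i in range(0, 256, 2))

def lookup(table, ch):
    """Class of character code ch in a packed table."""
    b = table[ch >> 1]
    return b & 0x0F if ch % 2 == 0 else b >> 4

# Example mask: everything is plain TEXT except a few magic characters.
classes = [TEXT] * 256
classes[0x03] = INTERRUPT      # ^C
classes[0x0D] = TERMINATE      # CR
classes[0x0F] = TOGGLEOUTPUT   # ^O
classes[0x11] = STARTFLOW      # ^Q
classes[0x13] = STOPFLOW       # ^S
table = pack(classes)
```

The client drives its per-character handling off `lookup`, and the server
re-sends (or patches) the table whenever the application changes tty modes.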

	-Barry Shein, ||Encore||

slevy@UC.MSC.UMN.EDU ("Stuart Levy") (10/31/88)

Michael Patton <map@gaak.lcs.mit.edu> writes...
> Except that the client end can't possibly have enough information
> about what is and isn't going to cause output to be flushed, only the
> server end (and maybe the user if they know the insides of the server)
> can know that.  You can't expect a user to type magic commands
> frequently so for this scheme to work you need to add new options in
> TELNET to allow the server to tell you every time this changes.

I surely agree with this...

> These will have the same implementation problem that GA had (see parallel
> discussion) where the server process can't get at the information.

But not with this.  Yes, with many present implementations a telnet server
can't get the necessary information from the user process or terminal driver
or whatever.  But once there's a mechanism for using the information,
mechanisms will be developed (operating systems will be modified) to make it
available where necessary.  Cray and Encore at least, and probably others,
have already done this in different ways.  4.3BSD "almost" does it (by
providing a way to switch off the terminal driver in a pseudo-tty), but I
don't think it has any way to notify the server of application-initiated
changes; I haven't seen 4.3-tahoe, though.

> All in all, I think this suggestion only makes the problem harder to
> solve right.  Given the underlying TCP functionality, I think the
> original Abort Output (AO) is the best we're going to do, except for
> the problem of implementing it in some cases.

Aw, give it a chance!  This is a classic chicken vs. egg problem, might as
well allow a few species of birds to evolve before predicting we'll never
taste an omelet.  Though I'll agree to the extent that we need to leave
room in our diet for those reptilian operating systems that lack paths
for the necessary information... but that's what negotiations are about.

	Stuart Levy, Minnesota Supercomputer Center
	slevy@uc.msc.umn.edu

chris@GYRE.UMD.EDU (Chris Torek) (10/31/88)

In regard to telnet `virtual terminal driver' mode:

Michael Patton <map@gaak.lcs.mit.edu>:
>... implementation problem ... where the server process can't
>get at the information.
	
Stuart Levy <slevy@uc.msc.umn.edu>:
>... once there's a mechanism for using the information, mechanisms will
>be developed (operating systems will be modified) to make it available
>where necessary.

Agreed; I think that a `virtual terminal driver' would work well, provided
(alas!) it matches well with existing terminal drivers (and there are so
many that this is likely to be difficult).

>4.3BSD "almost" does it ... don't think they have any way to notify the
>server of application-initiated changes; I haven't seen 4.3-tahoe though.

The only way for a server to find out about changes would be for it to
poll the pty driver continuously, which is not workable for the obvious
(I hope it is obvious!) reason.  But fixing this would take only a few
lines of code.

>This is a classic chicken vs. egg problem, might as well allow a few
>species of birds to evolve before predicting we'll never taste an omelet.

But will we end up with the electronic equivalent of hardening of the
arteries?  :-)

Chris

hedrick@geneva.rutgers.edu (Charles Hedrick) (11/02/88)

Unless I'm misreading him, Chris is saying that telnetd can't tell
when you've cleared the output buffer on the pty.  This can't be true.
Rlogin can tell, by using the funny packet-mode pty with select.  Our
telnetd does the same thing.  As far as I can tell, we do find out
when the output buffer has been cleared, and we do issue the
urgent/sync at that time.  At least I see a SYNC whenever I type ^C.
On the Pyramid, I do the "inner loop" of telnetd and rlogind inside the
kernel.  There, of course, there's no trouble getting access to the
information, so things may work a bit better.  (I understand that
Pyramid is going to be distributing that code sometime after the
release of OSx 4.4.)

slevy@UC.MSC.UMN.EDU ("Stuart Levy") (11/05/88)

No, it's not that the daemon can't detect when the pipe is empty.
What it can't detect is when the user program does an ioctl to set
the mode on the tty/pty.  To be a player in a smart telnet (or similar)
protocol, it needs to recognize when the application switches tty modes.

chris@GYRE.UMD.EDU (Chris Torek) (11/07/88)

	From: hedrick@rutgers.edu  (Charles Hedrick)

	Unless I'm misreading him, Chris is saying that telnetd can't tell
	when you've cleared the output buffer on the pty.

You misread me.  You can flush *output*; you cannot flush *user typeahead*.
You should be able to, but some critical information---namely, just how much
to flush---is missing.

Chris