[comp.protocols.tcp-ip] TELNET Buffering Woes

rnicovic@polyslo.CalPoly.EDU (Ralph Nicovich) (05/02/89)

I have a buffering problem that I hope someone can help me with,
or perhaps point me in the right direction.

We have several systems running BSD 4.3 Unix. A Pyramid is the one
I will use to illustrate the problem; however, it occurs on others
as well. Also, for purposes of example I will use the program
"cat", although the problem is not exclusive to "cat"; it is
just a simple example.

A user with a dumb terminal connected to a terminal server (in this
case a U.B. NIU-180), or with a PC connected directly to the Ethernet,
TELNETs into the Pyramid, which is also connected to the Ethernet.
The user then "cat"s a large file to the screen and immediately tries to
abort it with a control-C. The output keeps coming for a long time.

I have narrowed down the problem with a LAN analyzer. The Pyramid
simply does not stop sending packets. My feeling is that the "cat"
process has already handed all this data to TELNET, and although
the control-C stops "cat" immediately, there is no way to flush the
buffers.

This is very disconcerting to users who are used to an RS-232
connection. They don't wish to use "more", as it would not be
applicable in some other scenarios.

Does anyone out there have any ideas or solutions, or is this perhaps
a "feature" of layered protocols?
Perhaps there is a way of limiting the TCP segment size.

BTW, I do not administer the Unix machine, only the network.

Ralph Nicovich
Network Engineering
Cal Poly State University, SLO

clynn@BBN.COM (Charles Lynn) (05/02/89)

There are suggestions in the soon-to-be Host Requirements RFC which
deal with this problem. Another technique which was implemented in
a telnet server was to provide a hook so that TCP knew the baud
rate of the user's terminal.  I.e., when the user said something
like "terminal speed 9600", the telnet server was notified by the
OS, and in turn notified TCP.  TCP then generated packets (er..
"segments") at the "right" rate. Maybe such techniques are now
covered by "slow start".

dab@opus.cray.com (Dave Borman) (05/02/89)

> A user with a dumb terminal connected to a terminal server (in this
> case a U.B. NIU-180), or with a PC connected directly to the Ethernet,
> TELNETs into the Pyramid, which is also connected to the Ethernet.
> The user then "cat"s a large file to the screen and immediately tries to
> abort it with a control-C. The output keeps coming for a long time.

The problem here is not with the remote machine, but with the local
telnet implementation.  The way that things should work is:
	User types ^C
	Local telnet translates that to, and sends, IAC IP,
		and then sends IAC DO TIMING-MARK and begins to
		flush all output.
	Local telnet receives IAC WILL/WONT TIMING-MARK, and
		resumes terminal output.

The problem is that many telnet implementations are very dumb.  They
are not doing local recognition of the interrupt character, and thus
they don't know when to send the DO TIMING-MARK and start output flushing.

The 4.3 telnet has an option, "localchars", which when enabled causes
telnet to follow the procedure stated above.
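
By way of illustration, here is a minimal sketch in C of roughly what
that amounts to on the client side (hypothetical names such as
local_interrupt, net, and flushing; the real 4.3 client is organized
differently):

/* Sketch only: a hypothetical client-side "localchars" path.  When the
 * locally configured interrupt character is typed, send IAC IP and
 * IAC DO TIMING-MARK, and start discarding server output until the
 * TIMING-MARK reply comes back. */

#include <sys/socket.h>

#define IAC 255
#define DO  253
#define IP  244
#define TM    6                 /* TIMING-MARK option code */

int flushing = 0;               /* receive side drops output while set */

void
local_interrupt(int net)
{
    static const unsigned char cmd[] = { IAC, IP, IAC, DO, TM };

    if (send(net, cmd, sizeof cmd, 0) < 0)
        return;
    flushing = 1;               /* cleared on IAC WILL/WONT TIMING-MARK */
}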

			Dave Borman, dab@cray.com

rnicovic@POLYSLO.CALPOLY.EDU (Ralph Nicovich) (05/02/89)

Dave,
I appreciate your response about my buffering problems.
You seem to have the best insight of those who responded.

My terminal server (an Ungermann-Bass NIU card) is configurable
to understand special Telnet commands, i.e., I can set it up
so that when the user types a ^C the server will send an Interrupt
Process with a "PUSH" or urgent flag. That is not to say that
the terminal server knows to stop output until it receives another
flag back from the host.

I observe with a packet monitor that the INTERRUPT PROCESS is sent
and data keeps coming (for minutes). Would this be considered
normal? Does the terminal client have to handle all this, or
should the server function on the host at least flush a few characters?

Ralph Nicovich
Cal Poly State University, SLO

dab@opus.cray.com (Dave Borman) (05/03/89)

> My terminal server (an Ungermann-Bass NIU card) is configurable
> to understand special Telnet commands, i.e., I can set it up
> so that when the user types a ^C the server will send an Interrupt
> Process with a "PUSH" or urgent flag. That is not to say that
> the terminal server knows to stop output until it receives another
> flag back from the host.
>
> I observe with a packet monitor that the INTERRUPT PROCESS is sent
> and data keeps coming (for minutes). Would this be considered
> normal? Does the terminal client have to handle all this, or
> should the server function on the host at least flush a few characters?
>
> Ralph Nicovich
> Cal Poly State University, SLO


Let's take the simple case, a terminal connected to a machine, running
telnet to another machine.  When data is being sent from an application
through a telnet connection, there are several places where output can
be buffered:
	1) In the terminal driver on the server machine, waiting
	   for the telnet daemon to read it.
	2) In the telnet daemon itself
	3) In the kernel, on the socket queue for the telnet connection,
	   waiting to go out.
	4) On the client, in the kernel, on the socket queue, waiting
	   for the client telnet to read the data
	5) In the terminal output queue on the client.

Probably the most complete and quickest method of flushing data in a
TELNET stream would be:
	1. Client sends IAC IP, IAC DM, IAC AO, IAC DO TIMING-MARK.
	   The IAC DM is the Telnet "Synch", and is sent in urgent mode.
	   The IAC AO is sent just for good measure.
	2. Client begins to throw away output.  The terminal output queue
	   is flushed (#5 above), and we then continue reading from the
	   TCP connection and throwing away data (#4 above) scanning the
	   data looking for TELNET commands.
	3. Server gets put into urgent mode, and starts flushing input,
	   looking for interesting commands, or end of urgent data.
	4. Server gets IAC IP, and generates an interrupt.  The interrupt
	   should cause the operating system to flush the data in the
	   output queue of the terminal (#1 above).  The daemon should
	   toss any data it has read from the terminal but not processed
	   yet (#2 above), and toss any data that it has buffered up to
	   write to the socket, but has not written yet. (Also #2 above.
	   Flushing the output side could be tricky, because there may
	   be TELNET commands already embedded in the output buffer that
	   we still need to send...)  Possibly the telnet daemon could
	   tell the kernel to toss any data that it has buffered up but
	   has not sent yet over the telnet connection (#3 above), but
	   this would not be wise because you would lose any TELNET
	   commands that happened to be in that section of data.
	5. Server gets IAC DM, and goes out of urgent mode.
	6. Server gets IAC AO, and does most of the stuff already
	   mentioned in step 4.
	7. Server gets IAC DO TIMING-MARK, and sends IAC WILL/WONT
	   TIMING-MARK.
	8. Client goes into urgent mode, scanning for TELNET special
	   characters.  Since we are already throwing away data, this
	   doesn't really change anything.
	9. Client gets IAC DM, and goes out of urgent mode.  Client
	   still throws away output.
	10. Client gets IAC WILL/WONT TIMING-MARK and resumes normal
	   terminal output.

So, the bottom line is that the server side should flush any data
buffered in #1 and part of #2 before it ever gets sent.  You
can't do anything about #3; that data has to go across the telnet
connection.  Items #4 and #5 should be cleaned out by the client
side.
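
A rough sketch of the client's receive path during such a flush (steps
2 and 8-10 above), with hypothetical names; data bytes are dropped
while flushing, but IAC sequences are still parsed so that the DM and
the WILL/WONT TIMING-MARK that end the flush are not lost.  DO/DONT,
IAC IAC escaping, and subnegotiation are omitted from the sketch.

/* Sketch of the client's receive path while flushing (steps 2, 8-10).
 * Hypothetical helpers: tty_output() writes to the user's terminal;
 * "flushing" was set when IAC IP / IAC DO TIMING-MARK went out. */

#define IAC  255
#define WILL 251
#define WONT 252
#define DM   242
#define TM     6            /* TIMING-MARK option code */

extern int  flushing;
extern void tty_output(unsigned char c);

void
from_server(const unsigned char *buf, int len)
{
    static int state;       /* 0 = data, 1 = saw IAC, 2 = saw WILL/WONT */
    int i;

    for (i = 0; i < len; i++) {
        unsigned char c = buf[i];

        switch (state) {
        case 0:
            if (c == IAC)
                state = 1;
            else if (!flushing)
                tty_output(c);      /* while flushing, data is dropped */
            break;
        case 1:
            if (c == WILL || c == WONT) {
                state = 2;          /* option reply: next byte names it */
            } else {
                /* IAC DM here is the end of the urgent-mode Synch; we
                 * keep discarding until the TIMING-MARK reply (step 9) */
                state = 0;
            }
            break;
        case 2:
            if (c == TM)
                flushing = 0;       /* step 10: resume normal output */
            state = 0;
            break;
        }
    }
}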

Hopefully this answers the questions about flushing the output
stream on a Telnet connection.

		-Dave Borman, dab@cray.com

BILLW@MATHOM.CISCO.COM (William Westfield) (05/03/89)

Dave Borman says:

    The problem here is not with the remote machine, but with the local
    telnet implementation.  The way that things should work is:
	    User types ^C
	    Local telnet translates that to, and sends, IAC IP,
		    and then sends IAC DO TIMING-MARK and begins to
		    flush all output.
	    Local telnet receives IAC WILL/WONT TIMING-MARK, and
		    resumes terminal output.

This would work just great except when you are talking to an ITS system
and running EMACS, in which case you want ^C to be passed transparently
through the connection, so that it means "control-meta".  Or when you
are trying to upload a file using the MODEM2 protocol, etc., etc.
This is why the "Telnet Line Mode" specification (soon to be an RFC) is
so important - it lets you negotiate which characters are special, and
exactly how they behave, and it lets you turn things on and off...

On the brighter side, I have heard that the "skid" on terminals connected
to properly behaving, well-implemented terminal servers/Unix hosts is
typically less than half a screenful.  Unfortunately, properly operating
terminal servers are rare, and properly operating hosts are even
rarer.  (The case that seems to work well is a cisco terminal server
talking to the Rutgers kernel-based telnet on a Pyramid...)

Bill Westfield
cisco Systems.
-------

dab@VAX.FTP.COM (05/03/89)

> From: dab@opus.cray.com (Dave Borman)  [The other dab.]
> 
> The problem here is not with the remote machine, but with the local
> telnet implementation.  The way that things should work is:
> 	User types ^C
> 	Local telnet translates that to, and sends, IAC IP,
> 		and then sends IAC DO TIMING-MARK and begins to
> 		flush all output.
> 	Local telnet receives IAC WILL/WONT TIMING-MARK, and
> 		resumes terminal output.
>

I disagree; that's not the way things should work.  Typing ^C is likely to
mean any of a number of things depending on what program I'm running at the
time.  The classic example is, of course, Emacs.  To see a much better
solution to this problem, see the SUPDUP Local Editing Protocol.  Its
problem is that it's more complicated than anyone wants to implement (and
I'm not sure it's doable at all under UNIX as PTY's work now).  But
everything less that I've heard about is pretty unsuitable.  The protocol
at least needs the ability for the server to set the client's interrupt
characters, and to turn them off when the program switches out of
COOKED mode into RAW or CBREAK mode (to use UNIX terminology).

If someone knows how to pull enough information to do this out of the back
side of a PTY on UNIX then let me know.  I'd like to put the code in my
SUPDUP server (and client too but that's easy, or at least straightforward)
to make it effectively run in line-at-a-time mode for programs running in
COOKED mode.  Then, if I can find someone to hack up GNU Emacs to
understand too...

						David Bridgham

hedrick@geneva.rutgers.edu (Charles Hedrick) (05/03/89)

We've all noticed the problem where you type ^C and output keeps
coming for pages.  You suggest that the local telnet should interpret
the ^C and use timing mark to resynchronize.  That sounds plausible,
but it has a problem that makes many of us unenthusiastic about using
it: the local telnet has to know what interrupt character to use and
when to disable it.  If you run a program on the host system that uses
"raw I/O", you don't want ^C translated into IAC IP.  You want the ^C
interpreted as a normal character.  The proposed linemode telnet will
implement the right handshaking between the host and user telnets to
make this kind of thing work.  But at the moment I don't consider it
safe.  We do what the telnet spec suggests, which is to use the telnet
sync mechanism.  I'll describe this for Unix, but it should work
similarly on other OS's.

  you:            type ^C
  local telnet:   pass ^C to host like any other character
  host telnet:    pass ^C as input to the job like any other character
  OS:		  notice that ^C is an interrupt character. (1) issue
		   interrupt to job (2) notify host telnet that output
		   should be flushed.  (This version of telnetd runs
		   the pty in packet mode, so such notification is
		   actually done.)
  host telnet:    flush all output that has been buffered but not yet
		   sent.  Put telnet sync into the output.  Telnet
		   sync involves an urgent notification, which moves
		   out of band.  The next packet sent from the host
		   to the local telnet should have that bit on.
  local telnet:   when it sees urgent, stop output and flush any
		   pending output.  This should happen out of band.
		   That is, as soon as a packet with urgent arrives
		   from the network, output queued to the tty driver
		   should be flushed, and new output coming from
		   the network should be ignored.  Ignoring continues
		   until the in-band portion of the telnet sync
		   arrives.
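
As a rough sketch (hypothetical names like pty_to_net, ptym, net, and
netobuf; not the actual telnetd code), the host-side piece might look
like this in C, using the pty packet mode mentioned above: when the
control byte shows TIOCPKT_FLUSHWRITE, the daemon drops its own pending
output and sends the telnet sync, an IAC DM carried as TCP urgent data.

/* Sketch of a telnetd reacting to a tty output flush reported via pty
 * packet mode.  Assumes ioctl(ptym, TIOCPKT, &on) was done at startup,
 * so every read(ptym, ...) yields a control byte followed by data.
 * Error handling and IAC doubling are omitted. */

#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

#define IAC 255
#define DM  242

static unsigned char netobuf[8192];   /* output queued for the client */
static int netocount;

void
pty_to_net(int ptym, int net)
{
    static const unsigned char synch[2] = { IAC, DM };
    unsigned char buf[1024];
    int n, i;

    n = read(ptym, buf, sizeof buf);
    if (n <= 0)
        return;

    if (buf[0] == TIOCPKT_DATA) {
        /* ordinary tty output: queue it for the client */
        for (i = 1; i < n && netocount < (int)sizeof netobuf; i++)
            netobuf[netocount++] = buf[i];
    } else if (buf[0] & TIOCPKT_FLUSHWRITE) {
        /* the tty output queue was flushed (interrupt, ^O, ...):
         * drop what we have buffered but not yet written ... */
        netocount = 0;
        /* ... and tell the client to do the same: IAC DM sent as TCP
         * urgent data is the telnet "Synch" of RFC 854. */
        send(net, synch, sizeof synch, MSG_OOB);
    }
}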

The disadvantage compared to your proposal is that you will get a bit
more extra output after you type ^C.  The amount of delay obviously
depends upon your network speed.  If you are working over a satellite
link, you could see *lots* of extra output.  For campus environments
it's not too bad.  The advantage is that the local telnet doesn't have
to know how your OS interprets characters.  On many OS's there are
several characters that should flush output, e.g. ^C, ^O, and ^Y.  And
programs can disable one or more of them.  (Also, it is possible for
programs to invoke the clear output function themselves, without any
particular character being typed.)  Those of us who use Emacs would
certainly not want to see the local telnet turn all three of those
characters into an interrupt!  My guess is that in a campus
environment, your method would work best for half duplex, and this
method for full duplex.  All of the terminal servers I've seen do
implement the local side of telnet sync, so you have to get the host
telnet daemon to handle things right.  I think we've convinced the
right people to get this in 4.4.  (In fact it's possible that it is in
4.3 Tahoe.)

barmar@think.COM (Barry Margolin) (05/04/89)

In article <8905021526.AA03195@oliver.cray.com> dab@opus.cray.com (Dave Borman) writes:
>The problem here is not with the remote machine, but with the local
>telnet implementation.  The way that things should work is:
>	User types ^C
>	Local telnet translates that to, and sends, IAC IP,
>		and then sends IAC DO TIMING-MARK and begins to
>		flush all output.

How is the local telnet supposed to know that ^C is the appropriate
interrupt character for the remote machine?  Many systems permit the
interrupt character to be set by the user or a program.  I don't think
the TELNET protocol specifies a way to transmit this change to the
client.

Also, the above assumes there's only one kind of interrupt.  On Unix,
there's ^C (interrupt), ^Z (suspend), and ^\ (quit).  They all need
output flushed, but they can't all send IAC IP.

What's really needed is a way to send out-of-band data to the telnet
client, telling it to ignore output until it reads a mark.  Is it
possible to use the URGent flag to implement this?

Barry Margolin
Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar

chris@GYRE.UMD.EDU (Chris Torek) (05/04/89)

What Chuck Hedrick described (flush when the OS sees a flushing
character) is what rlogin does now (and has done since 4.2, although
it was buggy then) in BSD.  It does work and does not cause trouble
for Emacs users, nor for those who like the Rand editor, or run
various modem protocols over the network login.  But it is not
as efficient as it might be.

One should be able, with a (relative) minimum of hassle, to emulate
the terminal driver state at the far end of a network link.  This
has been possible in other systems, but not in BSD, because the
PTY driver refuses to cooperate by telling user processes about
state changes.  This is not too hard to fix, and one of these days
it will be fixed.  (As of now it is possible to see state changes,
but only by asking continually---not the nicest thing to do on a
multiprocessing system.)

Chris

jqj@HOGG.CC.UOREGON.EDU (05/04/89)

The usual objection to having ^C send an IAC and cause output flushing
via timing marks is that ^C, like any ascii character, has multiple
context-dependent meanings on the server end.  What I think this
discussion is missing is the observation that much telnet traffic these
days originates not from terminals and RS232-connected devices but from
PCs and workstations running user telnet.  In such environments, it is
fairly easy to have a local key sequence (Hyper-ctrl-C, ^]s iac^M,
PF34, mouse on the interrupt button in your window, or whatever) that
is assigned to generate a telnet interrupt character.  In such an
environment there is no particular need for the new telnet options,
since a user can generate either ^C or IAC; presumably the USER knows
what state her program on the server is in, and so knows what she wants
to send!

My question:  how many server telnets "correctly" (as defined by DAB)
handle receipt of IAC with timing marks?  How many client telnets correctly 
handle generation of timing marks when the user sends IAC?

dcrocker@AHWAHNEE.STANFORD.EDU (Dave Crocker) (05/05/89)

It's always nice to do some homework, so I finally did some of mine.

The April 17 draft of the Host Requirements document essentially places
the burden onto the server, since different operating systems need
different details.  The client cannot know which choice is correct.

The HR doc specifies that client Telnet SHOULD have the option of
flushing output, after an IP.  (Personally, I believe the SHOULD should
be a MUST; the impact of not having this feature is enormous.)

Unfortunately, the client is the one that must choose what sequence to
send, so the HR recommends that the sequence be user-configurable.

The Choices listed are:

1.  Urgent(IP, AO, DM); that is, Interrupt Process, Abort Output, and
Synchronize the output, via the Data Mark;

2.  Urgent (IP, DM), DO TM; the Abort Output is not sent from the server
telnet to its own operating system, but the Data Mark does a degree of
buffer flushing and the Timing Mark request synchronizes.

3.  Both
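
For concreteness, a sketch of those choices as raw telnet bytes (a
reading of the HR text, not a quotation from it; in each case the DM is
the byte that would be sent in urgent mode):

/* Telnet command and option codes, per RFC 854 and RFC 860. */
#define IAC 255     /* Interpret As Command */
#define DO  253
#define AO  245     /* Abort Output         */
#define IP  244     /* Interrupt Process    */
#define DM  242     /* Data Mark            */
#define TM    6     /* TIMING-MARK option   */

/* 1.  Urgent(IP, AO, DM) */
static const unsigned char choice1[] = { IAC, IP, IAC, AO, IAC, DM };

/* 2.  Urgent(IP, DM), DO TM */
static const unsigned char choice2[] = { IAC, IP, IAC, DM, IAC, DO, TM };

/* 3.  Both: presumably choice 1 followed by IAC DO TM. */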

Seems to me that this is too important an area to leave this fuzzy.  The
pain of having output continue is significant and greatly reduces Telnet's
credibility.  On the other hand, I don't have any suggestion for how
to improve the spec.

Dave

dab@opus.cray.com (Dave Borman) (05/05/89)

> How is the local telnet supposed to know that ^C is the appropriate
> interrupt character for the remote machine?  Many systems permit the
> interrupt character to be set by the user or a program.  I don't think
> the TELNET protocol specifies a way to transmit this change to the
> client.

The whole idea of the IAC IP and the NVT is that you don't need to know
what the interrupt character on the remote side is.  You type the
interrupt character that is most natural to use on the client side of
the connection, it gets turned into IAC IP, and the server translates
the IAC IP into whatever is appropriate on the server side to cause an
interrupt.  Hence, if you are on your Unix machine, and you telnet to
some machine running foobarOS, you can type ^C and know that you will
interrupt the process on the remote side, regardless of what the
interrupt character on the remote side is.

> Also, the above assumes there's only one kind of interrupt.  On Unix,
> there's ^C (interrupt), ^Z (suspend), and ^\ (quit).  They all need
> output flushed, but they can't all send IAC IP.

Bingo.  You win the prize.  The 4.3BSD telnet uses BRK to send ^\, and
IP to send ^C.  It also has an option on the client side so that you
can tell the client what each character is.

There is a new option, LINEMODE, which adds the capability for both sides
to stay in sync as to which characters should map to which telnet codes.  It
also adds three new codes, ABORT, SUSP, and EOF, which on a Unix
machine would map to ^\ (quit), ^Z (suspend) and ^D (end-of-file).

You can probably expect to see the new RFC out sometime later this
month, a BSD implementation will also be made available (which I am
currently working on finishing up).  A draft version is available
for anonymous ftp from uc.msc.umn.edu, in pub/linemode.draft  (This
is NOT the final version.  The SLC area is changing).  Watch for
future announcements.

> What's really needed is a way to send out-of-band data to the telnet
> client, telling it to ignore output until it reads a mark.  Is it
> possible to use the URGent flag to implement this?
>
> Barry Margolin
> Thinking Machines Corp.
> 
> barmar@think.com
> {uunet,harvard}!think!barmar

It already exists.  It's called the "Synch" signal in RFC854, see
page 8.  A "Synch" consists of an IAC DM sent in urgent mode, and
causes the reader (client) to discard input (server output) until
it reads the IAC DM.
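
A sketch of the plumbing a BSD client needs in order to notice the
Synch at all (hypothetical names; error handling omitted): the socket
owner is set with fcntl(F_SETOWN) so that urgent data delivers SIGURG,
and the handler flushes local terminal output and starts discarding
until the in-band IAC DM is read.

/* Sketch: how a BSD client might arrange to notice the Synch.
 * The inline scan for the IAC DM that ends the discard happens
 * elsewhere in the receive path. */

#include <fcntl.h>
#include <signal.h>
#include <unistd.h>

extern int  net;              /* TCP connection to the server       */
extern int  flushing;         /* receive path drops data while set  */
extern void ttyflush(void);   /* discard pending local tty output   */

static void
urgent(int sig)
{
    (void)sig;
    flushing = 1;             /* discard until the IAC DM is read */
    ttyflush();
}

void
setup_urgent(void)
{
    signal(SIGURG, urgent);
    /* deliver SIGURG for urgent data on this socket to this process */
    fcntl(net, F_SETOWN, getpid());
}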


			-Dave Borman	dab@cray.com

barmar@THINK.COM (Barry Margolin) (05/05/89)

    Date: Fri, 5 May 89 09:22:00 CDT
    From: dab@opus.cray.com (Dave Borman)

    > How is the local telnet supposed to know that ^C is the appropriate
    > interrupt character for the remote machine?  Many systems permit the
    > interrupt character to be set by the user or a program.  I don't think
    > the TELNET protocol specifies a way to transmit this change to the
    > client.

    The whole idea of the IAC IP and the NVT is that you don't need to know
    what the interrupt character on the remote side is.  You type the
    interrupt character that is most natural to use on the client side of
    the connection, it gets turned into IAC IP, and the server translates
    the IAC IP into whatever is appropriate on the server side to cause an
    interrupt.  Hence, if you are on your Unix machine, and you telnet to
    some machine running foobarOS, you can type ^C and know that you will
    interrupt the process on the remote side, regardless of what the
    interrupt character on the remote side is.

That is a long-obsolete view of TELNET.  In most cases the user wants a
transparent connection to the remote machine; the local system's
keyboard conventions should be ignored and the user should have the
illusion of being connected directly to the server.  This is the
generally-accepted interpretation of BINARY option when the client is a
terminal emulation application.  If the TELNET client application wants
to be able to permit the user to invoke these IAC operations, that can
be done using some local escape (e.g. ~<char> on Unix, <Network> on
Symbolics Lispms, menus on windowing systems).

    > Also, the above assumes there's only one kind of interrupt.  On Unix,
    > there's ^C (interrupt), ^Z (suspend), and ^\ (quit).  They all need
    > output flushed, but they can't all send IAC IP.

    Bingo.  You win the prize.  The 4.3BSD telnet uses BRK to send ^\, and
    IP to send ^C.  It also has an option on the client side so that you
    can tell the client what each character is.

I thought BRK was obsolete (i.e. "old TELNET").

    > What's really needed is a way to send out-of-band data to the telnet
    > client, telling it to ignore output until it reads a mark.  Is it
    > possible to use the URGent flag to implement this?

    It already exists.  It's called the "Synch" signal in RFC854, see
    page 8.  A "Synch" consists of an IAC DM sent in urgent mode, and
    causes the reader(client) to discard input (server output) until
    it reads the IAC DM.

I thought I remembered it from somewhere.  But since
seemingly-knowledgeable people were proposing more complex things, I
thought I might have been wrong.

                                                barmar

braden@VENERA.ISI.EDU (05/06/89)

	The Choices listed are:

	1.  Urgent(IP, AO, DM); that is, Interrupt Process, Abort Output, and
	Synchronize the output, via the Data Mark;

	2.  Urgent (IP, DM), DO TM; the Abort Output is not sent from the server
	telnet to its own operating system, but the Data Mark does a degree of
	buffer flushing and the Timing Mark request synchronizes.

	3.  Both

	Seems to me that this is too important an area to leave this fuzzy.  The
	pain of having output continue is significant and greatly reduces Telnet's
	credibility.  On the other hand, I don't have any suggestion for how
	to improve the spec.

	Dave

Dave,

Unfortunately, there seemed to be cogent arguments for each of the 3 
choices!

   Bob Braden

hedrick@geneva.rutgers.edu (Charles Hedrick) (05/09/89)

It certainly seems reasonable to use a special function key or a
two-key sequence to generate telnet IP and do the timing mark dance,
as you have suggested.  That way, you haven't preempted ^C, and it can
be used in Emacs and other programs as normal.  However at least in
BSD Unix, the telnet interrupt option may not always work.  It simply
stuffs a ^C into the terminal input.  If you're in a raw-mode program,
this may well not have the desired effect.  However it's still a
sufficiently useful thing that it makes sense to implement.
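
For what it's worth, that stuffing step looks roughly like this on the
server side (a sketch in POSIX termios terms; 4.3BSD itself used the
older TIOCGETC/struct tchars interface, and the function name here is
made up).  The daemon asks the pty for the slave's current interrupt
character and writes it back into the pty as input, so it works even if
the user has remapped the character; as noted above, in a raw-mode
program there is nothing useful to stuff.

/* Sketch: server-side handling of IAC IP by stuffing the terminal's
 * current interrupt character into the pty.  "ptym" is the pty master. */

#include <termios.h>
#include <unistd.h>

void
recv_interrupt(int ptym)
{
    struct termios t;
    unsigned char c = 0177;           /* fallback: DEL */

    if (tcgetattr(ptym, &t) == 0) {
        if ((t.c_lflag & ISIG) == 0)
            return;                   /* raw-ish mode: no interrupt char */
        c = t.c_cc[VINTR];
    }
    (void)write(ptym, &c, 1);
}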

braden@VENERA.ISI.EDU (05/09/89)

	    The whole idea of the IAC IP and the NVT is that you don't need to know
	    what the interrupt character on the remote side is.  You type the
	    interrupt character that is most natural to use on the client side of
	    the connection, it gets turned into IAC IP, and the server translates
	    the IAC IP into whatever is appropriate on the server side to cause an
	    interrupt.  Hence, if you are on your Unix machine, and you telnet to
	    some machine running foobarOS, you can type ^C and know that you will
	    interrupt the process on the remote side, regardless of what the
	    interrupt character on the remote side is.

	That is a long-obsolete view of TELNET. 

barry,
	
Oh, really?  That comes as a surprise to me, at least.

   Bob Braden 
   

mcc@ETN-WLV.EATON.COM (Merton Campbell Crockett) (05/09/89)

Gentlemen:

I believe the initial query was how to stop output to the terminal given that
the terminal server interpreted a ^C as an IAC IP.  The fact of the matter is
that a request to interrupt a process at the remote station has little to do
with stopping or terminating the display of data at the local station.

The appropriate action is to enter the sequence, perhaps a ^O, that would be
interpreted by the terminal server as an IAC AO.  The IAC AO is a command to
the TELNET client and server to abort output, which is the desired operation.

mcc

milne@ics.uci.edu (Alastair Milne) (05/17/89)

dab@opus.cray.com (Dave Borman) writes
>The problem here is not with the remote machine, but with the local
>telnet implementation.  The way that things should work is:
>	User types ^C
>	Local telnet translates that to, and sends, IAC IP,
>		and then sends IAC DO TIMING-MARK and begins to
>		flush all output.
>	Local telnet receives IAC WILL/WONT TIMING-MARK, and
>		resumes terminal output.
>
>The problem is that many telnet implementations are very dumb.  They
>are not doing local recognition of the interrupt character, and thus
>they don't know when to send the DO TIMING-MARK and start output flushing.

    Sorry if this has already been covered, but: Does anybody know if Sun's
    version of telnet for PC-NFS 3.0 on the IBM PC/PS-2 has such problems?
    While I haven't tried breaking a long unwanted stream with ^C (haven't
    had to yet, fortunately), this telnet is often subject to sudden delays
    of anywhere from 1 to maybe 8 seconds, during which there is no activity
    at all, and then a rush to catch up.  In our whole department, we seem
    to be the only ones suffering this delay, and I think we're the only
    ones using PC-NFS.

    Any insights or prior experience with this problem would be welcome.


    Thanks,
    Alastair Milne,
    Educational Technology Center,
    Dept of ICS
    UCalif. Irvine