[comp.protocols.tcp-ip] telnet and Nagle's algorithm

ljm@FTP.COM (leo j mclaughlin iii) (05/18/91)

>
>...Based on the responses I received, I would suggest the following
>additions to RFC1122/1123:
>  
>    1. Where a terminal emulator is running on a workstation directly
>       on the Internet, the system SHOULD ensure that escape sequences are
>       transmitted in a single TCP packet....
>

The basic problem (and the reason folk warned you away from the algorithm
you are now using) is that there is no such thing as a TCP packet.  A TCP
connection just transfers streams of bytes with no record boundaries.  Every
so often this list gets a question along the lines of 'I do a write of N bytes
but sometimes my read doesn't get all N bytes'.  The answer is that any
assumption made about how TCP carves the byte stream into segments is
eventually doomed to failure.
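
To illustrate the stream semantics, here is a minimal sketch (in C, assuming
the BSD sockets interface; readn() is just an illustrative helper, not a
system call) of the loop a receiver needs to collect exactly N bytes, since
a single read() may legally return fewer bytes than were written:

	#include <unistd.h>	/* read() */
	#include <sys/types.h>

	/*
	 * Read exactly "len" bytes from a stream socket.  Returns the
	 * number of bytes actually read (short only on end-of-file),
	 * or -1 on error.  One read() may return any piece of the
	 * byte stream, so we must loop.
	 */
	ssize_t
	readn(int fd, char *buf, size_t len)
	{
		size_t nleft = len;

		while (nleft > 0) {
			ssize_t n = read(fd, buf, nleft);
			if (n < 0)
				return (-1);	/* error */
			if (n == 0)
				break;		/* EOF: other side closed */
			buf += n;
			nleft -= n;
		}
		return (len - nleft);
	}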

>
>The writers of RFC 1122 (Requirements for Internet Hosts) were
>apparently aware of this kind of problems, because they gave SHOULD
>status to the Nagle Algorithm, and MUST status to letting an application
>turn it off....
>
>...I would remind implementors that RFC1122 says they MUST allow an
>application to turn off Nagle.  Does anyone know how this is done in
>practice?  I don't think a site configuration option really meets this
>requirement, but perhaps I misinterpret the spec.
>

Yet more bad news.  The primary reason for allowing Nagle's algorithm to be
turned off was X Windows.  Nagle's algorithm is very bad for any application
(such as X Windows) with short bursts of high-priority traffic.  For these
sorts of applications, most (all?) vendors who do Nagle allow 'no Nagle' as
an option when opening a connection.

Thus, it is the *application* which can turn off Nagle, not the user.  Even
worse from your point of view, the one application you would have a very
tough time convincing people should be un-Nagled, the one application which
generates the overwhelming majority of 'wasteful' small packets in the
Internet, the one application for which Nagle was designed, is telnet.
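
On sockets-based systems the per-connection 'no Nagle' option is the
TCP_NODELAY socket option.  A minimal sketch (in C, assuming a 4.3BSD-style
sockets interface) of how an application, and only the application, turns
Nagle off on its own connection:

	#include <sys/types.h>
	#include <sys/socket.h>		/* setsockopt() */
	#include <netinet/in.h>		/* IPPROTO_TCP */
	#include <netinet/tcp.h>	/* TCP_NODELAY */

	/*
	 * Disable the Nagle algorithm on one connection.  This is a
	 * per-connection option set from inside the program; there is
	 * no standard way for a user to set it from outside.
	 */
	int
	disable_nagle(int s)
	{
		int on = 1;

		return (setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
		    (char *)&on, sizeof(on)));
	}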

enjoy (or more accurately, sorry),
leo j mclaughlin iii
ljm@ftp.com

mcdonald@aries.scs.uiuc.edu (Doug McDonald) (05/19/91)

In article <9105172007.AA28847@ftp.com> ljm@FTP.COM (leo j mclaughlin iii) writes:
>Yet more bad news.  The primary reason for allowing Nagle's algorithm to be
>turned off was X Windows.  Nagle's algorithm is very bad for any application
>(such as X Windows) with short bursts of high-priority traffic.  For these
>sorts of applications, most (all?) vendors who do Nagle allow 'no Nagle' as
>an option when opening a connection.
>
>Thus, it is the *application* which can turn off Nagle, not the user.  Even
>worse from your point of view, the one application you would have a very
>tough time convincing people should be un-Nagled, the one application which
>generates the overwhelming majority of 'wasteful' small packets in the
>Internet, the one application for which Nagle was designed, is telnet.
>

I simply don't understand this. If there is ONE application for which
instantaneous sending of individual bytes is important, it is telnet.
Otherwise user interaction could simply go to hell. Consider what
would happen if, for instance, you tried to do something like an
arcade game over Telnet. Or cursor control in an editor.

Besides, if you want a Telnet without Nagle, can't you simply have the
Telnet program ask for no Nagle???


Doug McDonald

dab@BERSERKLY.CRAY.COM (David Borman) (05/21/91)

mcdonald@aries.scs.uiuc.edu (Doug McDonald) writes:
> I simply don't understand this. If there is ONE application for which
> instantaneous sending of individual bytes is important, it is telnet.
> Otherwise user interaction could simply go to hell. Consider what
> would happen if, for instance, you tried to do something like an
> arcade game over Telnet. Or cursor control in an editor.
> 
> Besides, if you want a Telnet without Nagle, can't you simply have the
> Telnet program ask for no Nagle???
> 
> 
> Doug McDonald

Actually, the lack of the Nagle algorithm is what can cause user interaction
to be degraded.  The whole point of the Nagle algorithm is that it reduces
the number of tiny packets being sent/received, thus reducing both system
and network overhead.

From RFC 1122, section 4.2.3.4:

	A TCP SHOULD implement the Nagle Algorithm [TCP:9] to
	coalesce short segments.  However, there MUST be a way for
	an application to disable the Nagle algorithm on an
	individual connection.
	...
	The Nagle algorithm is generally as follows:
		If there is unacknowledged data (i.e., SND.NXT >
		SND.UNA), then the sending TCP buffers all user
		data (regardless of the PSH bit), until the
		outstanding data has been acknowledged or until
		the TCP can send a full-sized segment (Eff.snd.MSS
		bytes; see Section 4.2.2.6).

In BSD networking code, it is more or less the following (a rough sketch
in C appears after the list):
	1) On the first write, send the data, even if it is
	   less than a full segment (MSS bytes).
	2) Queue up successive writes until:
		a) we have MSS bytes of data to send,
		b) an ACK comes back from the previous data, or
		c) a timer goes off.
	3) Send as much data as we can in one packet, and go
	   back to step 2.
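
In rough C, the send-side test looks something like this (variable names
such as snd_nxt, snd_una, and t_maxseg loosely follow the BSD tcp_output()
code, and send_now() and defer() are placeholders; this is an illustrative
sketch, not the actual source):

	/*
	 * Sketch of the Nagle decision in a TCP output routine.
	 * "len" is the amount of unsent data queued by the application.
	 */
	if (len >= t_maxseg)
		send_now();	/* a full-sized segment: always send */
	else if (snd_nxt == snd_una)
		send_now();	/* nothing unacknowledged: send even
				   a tiny segment */
	else if (flags & TF_NODELAY)
		send_now();	/* application disabled Nagle
				   (e.g., via TCP_NODELAY) */
	else
		defer();	/* buffer until an ACK arrives, a full
				   segment accumulates, or a timer
				   goes off */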

The Nagle algorithm exploits the fact that when tiny packets are going out,
the other side will be ACKing the data almost immediately, because there
will be data coming back from the remote side and the ACK can piggyback
on that data.  In rlogin/telnet, when doing single-character, remote
echo, you know that the ACK will be coming back fairly quickly, because
the other side will be sending back a packet with the data to echo to the
terminal.  So, rather than dribbling out packets with one byte of data
in each, the Nagle algorithm allows the bytes to be clumped together.
When you are logged in across a long-delay network, and you see your
typed characters being echoed in clumps of three or four characters,
that is usually the Nagle algorithm at work.

If you have an application that sends out small packets, and the remote
side can't do anything when it gets the first packet because it needs the
data in the next packet, then the Nagle algorithm will hurt you.  (This is
a situation where user-level stdio buffering is helpful, or where using
one writev() call instead of several write() calls to write non-contiguous
data (e.g., headers separate from the data) will avoid the problems that
the Nagle algorithm can cause.)
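
For example, a minimal sketch (in C, assuming the BSD sockets interface;
send_request() is just an illustrative helper) of handing a header and a
body to TCP in one writev() call, so the small header never goes out alone
with the body Nagled behind it:

	#include <sys/types.h>
	#include <sys/uio.h>	/* struct iovec, writev() */

	/*
	 * Send a header and body with one system call.  With two
	 * separate write() calls, the small header can go out in its
	 * own packet and the body is then held back by Nagle until
	 * the header is acknowledged; since the remote side needs
	 * both pieces before it can reply, each exchange costs an
	 * extra round trip.
	 */
	ssize_t
	send_request(int s, char *hdr, size_t hlen, char *body, size_t blen)
	{
		struct iovec iov[2];

		iov[0].iov_base = hdr;
		iov[0].iov_len = hlen;
		iov[1].iov_base = body;
		iov[1].iov_len = blen;
		return (writev(s, iov, 2));
	}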

			-David Borman, dab@cray.com