[fa.tcp-ip] Floods of tinygrams from telnet hosts

tcp-ip@ucbvax.ARPA (06/04/85)

From: imagen!geof@BERKELEY


The worst offenders in the tinygram explosions seen by Dave Mills are
probably Unix systems running server telnet.  Every Unix TCP
implementation I've seen (save one) has the characteristic that a
packet is sent whenever a Unix WRITE(I) call is made.  In many
applications, this happens once per character.  Unix also makes it
very difficult to heuristically combine this flood of characters into
larger packets (except on retransmission), through a combination of
factors including interfaces that were designed for blocking system
calls and incredibly poor resolution of application-level timers.
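
As a rough illustration (not code from any actual server telnet; the
descriptors and function name below are made up), the offending
pattern is simply a loop that pushes each character onto the
connection with its own write(), so each one-byte write leaves the
machine as its own 41-byte packet (20 bytes IP, 20 bytes TCP, 1 byte
of data):

    #include <unistd.h>

    /* Hypothetical relay loop, not taken from any real server telnet.
     * It copies pseudo-tty output to the network one character at a
     * time, and nothing coalesces the characters into larger segments. */
    void
    relay_chars(int pty_fd, int net_fd)
    {
        char c;

        while (read(pty_fd, &c, 1) == 1)
            write(net_fd, &c, 1);   /* one packet per character */
    }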

The one implementation I've seen that solves this problem uses the
algorithm that John Nagle of Ford Aerospace developed.  My
understanding of it is (John, please correct any inaccuracies) that a
TCP implementation will emit a new packet only in one of the following
situations:
	- Its internal buffers are full (i.e., it is ready to send a
	  full-sized packet)
	- All outstanding data it has sent has been acknowledged by the
	  remote TCP.
When communicating over a local net, this algorithm works fine for
telnet, since the connection round trip time is typically much less
than the intercharacter time, so each character still goes out at
once.  Whenever the intercharacter time of the TCP client drops below
the round trip time, the algorithm naturally batches the data to be
sent into roughly equally sized packets, based on the ratio of round
trip time to intercharacter time.
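
In rough C terms, here is a sketch of that rule; the structure and
names are invented for illustration and are not an excerpt from
John's implementation (the usual window checks are omitted):

    /* Hypothetical per-connection send state. */
    struct conn {
        int unsent;     /* bytes queued but not yet sent */
        int unacked;    /* bytes sent but not yet acknowledged */
        int mss;        /* maximum segment size for the connection */
    };

    /* Return nonzero when a new packet may be emitted. */
    int
    may_send(struct conn *cp)
    {
        if (cp->unsent >= cp->mss)
            return 1;       /* a full-sized packet is ready */
        if (cp->unacked == 0 && cp->unsent > 0)
            return 1;       /* nothing outstanding, send what we have */
        return 0;           /* otherwise hold the data until the ack */
    }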

In the case of FTP-like connections, the algorithm degenerates into the
current behavior, since the internal buffers are always full.

Will someone please implement this on a 4.2 unix?  Then maybe I'll be
able to get decent response from APL when I telnet into the 4.2 machine
on the local ethernet!

- Geof

tcp-ip@ucbvax.ARPA (06/05/85)

From: Mike Muuss <mike@BRL.ARPA>

Berkeley's latest TCP (which will eventually be released as 4.3 BSD)
is improved in this regard;  it also listens to ICMP source quenches.
	-M

tcp-ip@ucbvax.ARPA (06/06/85)

From: MILLS@USC-ISID.ARPA

In response to the message sent  Monday,  3 Jun 1985 11:23-PDT from  imagen!geof@shasta

Geof,

What implementation are you referring to? John and I discussed this a long
time ago. Subsequently, a send policy similar to this was introduced
along with a companion ack policy in the fuzzball TCP, but so far as I
know, not in any other implementation, although Berkeley claims to be
doing this with 4.3bsd.

The mechanism works as follows:

1. Arriving data from the user is queued temporarily at the transmitter. If
   the size of this queue is at least the MSS for the connection, it is
   packetized and sent immediately (subject to the usual window controls,
   of course). If not, the data are held until all previously sent data
   have been acked. Note that data arriving while a previously sent wadge
   is in flight simply pile up in the queue.

2. The receiver acks incoming data immediately if the amount of data passed on
   to the user (i.e. removed from reassembly buffers) since the last ack
   is at least the MSS for the connection. In addition, the receiver acks
   if some arbitrary time (here, about 500 milliseconds) has elapsed since
   new data arrived and no ack was sent.
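
The send side of point 1 is essentially the rule sketched earlier in
this thread; the receiver side of point 2, again with invented names
rather than actual fuzzball code, amounts to:

    #define ACK_DELAY_MS 500    /* the arbitrary delay from point 2 */

    /* Hypothetical per-connection receive state. */
    struct rcv_conn {
        int delivered;      /* bytes passed to the user since the last ack */
        int unacked_data;   /* nonzero if data arrived and no ack was sent */
        long ms_waiting;    /* milliseconds since that data arrived */
        int mss;            /* maximum segment size for the connection */
    };

    /* Return nonzero when an ack should go out now. */
    int
    want_ack(struct rcv_conn *cp)
    {
        if (cp->delivered >= cp->mss)
            return 1;       /* a full MSS has been passed to the user */
        if (cp->unacked_data && cp->ms_waiting >= ACK_DELAY_MS)
            return 1;       /* the ack delay timer has expired */
        return 0;
    }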

As reported previously, these policies dramatically improved the performance
of TELNET over mismatched paths, while sustaining good performance of FTP.
We have been using it for about six months now in the fuzzballs over
the raunchiest of paths (would you believe Amateur packet-radio, which uses
CSMA at 1200 bps at 145 MHz?).

A major caution in using these policies is the interaction between the
send and ack policies with respect to the receiver ack delay, which
increases the apparent delay for TELNET tinygrams.  The delay is
nevertheless a major factor in improving performance with remote echo,
since the acks normally piggyback on the echo segment, improving the
apparent response time.  However, in cases where no end-to-end traffic
is wandering back over the path and segments less than the maximum are
involved (e.g. many SMTP connections), a lot of dead air results.  The
issues need to be studied a bit more.

Dave
-------