[fa.tcp-ip] Retransmission policies

tcp-ip@ucbvax.ARPA (06/13/85)

From: MILLS@USC-ISID.ARPA

Date: 13 Jun 1985 10:21:51 EDT
From: MILLS@USC-ISID.ARPA
Subject: Re: TCP Timer
To:   imagen!geof@SU-SHASTA.ARPA, shasta!tcp-ip@SRI-NIC.ARPA
cc:   MILLS@USC-ISID.ARPA

In response to the message sent  Wednesday, 12 Jun 1985 16:19-PDT from  imagen!geof@shasta

Geoff,

Having as much experience as anybody with noisy TCP paths, I can certainly
confirm that the best strategy is to retransmit just the head end of the
retransmission queue. However, I gather from the tone of your note that
you are considering only the first packet on that queue, which is what
the TOPS-20s do. We have found much better performance by allowing several
segments to be combined, up to the MSS in total length, when a retransmission
is necessary. This greatly
reduces the gateway loading when TELNET traffic is involved and also helps to
control damage when the retransmission is due to congestive losses in the net.
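
Very roughly, the repacketization looks something like the C sketch below.
The names and data structures are invented for illustration and are not the
fuzzball code; the point is only that the rebuilt segment is filled from the
head of the queue until the next piece would push it past the MSS.

/*
 * Illustration only: combine queued pieces from the head of the
 * retransmission queue into one segment of at most MSS bytes.
 */
#include <stddef.h>
#include <string.h>

struct rtx_seg {
    struct rtx_seg *next;
    size_t          len;        /* bytes of TCP data in this queued piece */
    const char     *data;
};

/* Fill buf (capacity mss) from the head of the queue; return bytes copied. */
size_t build_retransmit(const struct rtx_seg *head, char *buf, size_t mss)
{
    size_t total = 0;
    const struct rtx_seg *s;

    for (s = head; s != NULL; s = s->next) {
        if (total + s->len > mss)       /* next piece would not fit */
            break;
        memcpy(buf + total, s->data, s->len);
        total += s->len;
    }
    return total;                       /* one combined segment, at most MSS */
}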

Another thing we have done is to carefully count outstanding packets and block
further transmission if the total is greater than a magic number (currently
eight). The magic number is reduced if an ICMP Source Quench arrives and
returns to its original value in a controlled way. The bookkeeping to accomplish
this is fairly complicated, since segments can be combined, retransmitted, etc.
The fuzzballs used to use this mechanism exclusively to control packet fluxes,
but recently switched to the send/ack policy I described earlier on this list.
However, that send/ack policy leads to severely suboptimal performance in many
cases.
We are planning to integrate both the old and new policies to see if performance
can be maintained even in these cases.
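
As a rough C sketch of the counting mechanism (again with invented names; the
halving and recovery policy shown is just one plausible reading of "reduced"
and "returns in a controlled way", and the real bookkeeping is messier because
segments get combined and retransmitted):

/* Illustration only: cap on outstanding packets, cut back on Source Quench,
 * restored gradually as acknowledgments come in. */
#define MAX_OUTSTANDING 8              /* the "magic number" */

struct flow_ctl {
    int limit;                         /* current cap on unacked packets  */
    int outstanding;                   /* packets sent but not yet acked  */
    int acks_since_cut;                /* progress toward restoring limit */
};

/* May another packet be transmitted right now? */
int may_send(const struct flow_ctl *fc)
{
    return fc->outstanding < fc->limit;
}

/* An ICMP Source Quench arrived: reduce the cap (here, halve it, floor 1). */
void on_source_quench(struct flow_ctl *fc)
{
    fc->limit = fc->limit > 1 ? fc->limit / 2 : 1;
    fc->acks_since_cut = 0;
}

/* Each ACK retires a packet and slowly raises the cap back toward its
 * original value: one extra packet per full window acknowledged. */
void on_ack(struct flow_ctl *fc)
{
    if (fc->outstanding > 0)
        fc->outstanding--;
    if (fc->limit < MAX_OUTSTANDING && ++fc->acks_since_cut >= fc->limit) {
        fc->limit++;
        fc->acks_since_cut = 0;
    }
}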

Dave
-------

tcp-ip@ucbvax.ARPA (06/17/85)

From: Robert Cole <robert@ucl-cs.arpa>

Dave,
We do something similar here, except that when we retransmit the head of the
queue (or any subsequent part) we limit the packet size to 200 bytes of TCP
data, since a large part of our troubles arise from missing IP fragments.
The point to note is that each site may have to think about its own situation
and problems, then apply its own (unique) solution.
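
In rough C terms (an invented helper, just to show the 200-byte cap on
retransmitted data):

/* Illustration only: retransmissions carry at most 200 bytes of TCP data,
 * so the IP datagram stays small enough not to be fragmented on our path
 * and the loss of one fragment cannot invalidate a large segment. */
#include <stddef.h>

#define RETX_DATA_LIMIT 200        /* bytes of TCP data per retransmission */

size_t retransmit_data_len(size_t bytes_queued_at_head)
{
    return bytes_queued_at_head < RETX_DATA_LIMIT
               ? bytes_queued_at_head
               : RETX_DATA_LIMIT;
}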

Robert
(from across the pond).

tcp-ip@ucbvax.ARPA (06/18/85)

From: MILLS@USC-ISID.ARPA

In response to your message sent      Mon, 17 Jun 85 9:27:20 BST

Robert,

Yes, you do have a point. It's interesting that the issue of how big to
make a glob of data impacts performance so critically in both the
initial packetization (viz. my earlier comments) and repacketization
policies. It would be instructive to test your TP-4 implementation with
respect to these policies, in view of the constraints of the protocol.

Dave
-------