[comp.protocols.tcp-ip] TCP rate control?

karn@KA9Q.BELLCORE.COM (Phil Karn) (03/06/88)

Van,

It was fun to talk to you in San Diego.  Now I'm sorry I didn't go to
the earlier IETF meetings. 

Many thanks for the patient explanations of your TCP congestion control
techniques; I'm continuing to implement them in my own TCP.  However, I
still think something more may be required, at least on the
(admittedly pathological) packet radio links I use. 

My reasoning goes as follows.

You never allow the congestion window to decrease below one MSS.  You
argue that letting it go lower would increase header overhead, wasting
link (and network) bandwidth when it can least be spared.  I agree. 
However, this limits your ability to throttle the connection's bandwidth
any further, should even one MSS-sized packet per connection per round
trip be too much.
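
For concreteness, here's the rule as I understand it, sketched in C.
The names are mine, not from any particular implementation:

    struct tcb {
        long cwind;             /* congestion window, bytes */
        long mss;               /* maximum segment size, bytes */
    };

    /* On a congestion signal (e.g., a retransmission timeout):
     * multiplicative decrease, but never below one MSS. */
    void on_congestion(struct tcb *tcb)
    {
        tcb->cwind /= 2;
        if (tcb->cwind < tcb->mss)
            tcb->cwind = tcb->mss;      /* the one-MSS floor */
    }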

To see this, consider the case of 10 simultaneous FTP/TCP transfers
taking place through a common bottleneck link.  The gateway feeding this
link has only 5 packet buffers.  Even if the ten TCPs have their
congestion windows permanently set to 1 packet, there will be room for
only half of their packets in the buffer queue at any one time, so the
steady-state packet loss will be 50%.  True, the link itself
will be efficiently utilized, but the upstream links will be carrying
lots of retransmissions.  Even if these links aren't themselves
congested, this seems undesirable.

Here's another way to look at the problem.  Assume the link bandwidth is
10 packets/second.  Then the maximum queuing delay that can be imposed
on the sending TCPs by the gateway is 5 packets / 10 packets/sec = 1/2
sec.  Since the bandwidth required by any one TCP is the window size (1
packet in this case) divided by the round trip time (1/2 sec -- we'll
assume that the smaller acks encounter negligible delay), each of the ten
TCPs will still attempt to generate two packets per second, for a total
offered load of 20 packets per second.  This is twice the link's
capacity, so half of the generated packets will be dropped. 
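
Just to pin down the arithmetic, here's a throwaway computation in C;
the constants are simply the numbers from the example above:

    #include <stdio.h>

    int main(void)
    {
        double link_rate = 10.0;    /* bottleneck capacity, packets/sec */
        double buffers = 5.0;       /* gateway packet buffers */
        double window = 1.0;        /* packets in flight per connection */
        int conns = 10;             /* simultaneous transfers */

        /* Longest queuing delay the gateway can impose: */
        double rtt = buffers / link_rate;       /* = 1/2 sec */

        /* Each TCP offers window/rtt packets per second: */
        double per_conn = window / rtt;         /* = 2 pkt/sec */
        double offered = per_conn * conns;      /* = 20 pkt/sec */

        printf("offered %.0f pkt/sec against %.0f pkt/sec capacity: "
               "%.0f%% loss\n", offered, link_rate,
               100.0 * (1.0 - link_rate / offered));
        return 0;
    }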

So it seems that some sort of rate control timing is necessary.  In
other words, if packets are still being lost even with a 1-packet
window, then the sending TCP should insert increasing amounts of delay
between incoming ACKs and outgoing data packets.  I suppose there are
lots of ways this could be done (I'll probably have the retransmission
timer do double duty), but as you point out, it's important to adjust
the delays so as to produce linear changes in the offered load. 
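
One way such a scheme might look, purely as a sketch (the names are
mine, and I haven't tried this): keep a target rate per connection,
adjust the rate itself -- up linearly on good acks, down
multiplicatively on loss, in the spirit of your window adjustments --
and derive the ACK-to-send delay from the rate, since with a 1-packet
window the offered load is 1/(rtt + delay):

    #define RATE_MIN  0.01      /* packets/sec; never stall outright */
    #define RATE_STEP 0.1       /* linear increase per acked packet */

    struct pacer {
        double srtt;            /* smoothed round-trip time, sec */
        double rate;            /* current target rate, packets/sec */
    };

    /* Delay to insert between an incoming ACK and the next data
     * packet so that, with a 1-packet window, the connection offers
     * `rate` packets/sec:  rate = 1 / (srtt + delay). */
    double send_delay(struct pacer *p)
    {
        double d = 1.0 / p->rate - p->srtt;
        return d > 0.0 ? d : 0.0;
    }

    void on_timeout(struct pacer *p)    /* loss: back off */
    {
        p->rate /= 2.0;
        if (p->rate < RATE_MIN)
            p->rate = RATE_MIN;
    }

    void on_ack(struct pacer *p)        /* delivery: creep back up */
    {
        p->rate += RATE_STEP;
    }

The point of working in rate rather than delay is that equal steps in
the delay are not equal steps in the offered load.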

If I heard correctly, certain gateways on the Internet can only buffer
two packets at a time, so perhaps this scheme would help in places other
than my pathological 1200-baud packet radio channels.

Comments? Rebuttals? (I'd be happy if you proved me wrong so I could
avoid implementing this!)

Phil