[mod.protocols.tcp-ip] TCP Send buffer constraints destroy datarate

GEOF@MIT-XX.ARPA (Geoffrey H. Cooper) (01/14/86)

The other day I was playing with Imagen's TCP with a colleague, and we discovered
an interesting thing.  If we decreased the printer box's offered window from 6KB
to 2KB, the source-to-sink datarate for transfers from a 4.2BSD Vax to the printer
increased dramatically (from 30 Kbit/s to 300 Kbit/s).

The reason for this behavior is apparently that the Vax allocates only 2K of
buffer space to an outgoing TCP connection, so it can send only 2K before
receiving an ACK.  In the printer box, ACKs are deliberately delayed for
1/2 a second to increase piggybacking, unless the window is being opened.  The window
is opened when it is half consumed (i.e., after about 3KB of data has been transferred).
In the Vax's case, this allows 2KB of data every 1/2 second, or 32 Kbit/s.  That
was just the figure I had been getting.
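The arithmetic works out as follows (a quick sketch using the figures above):

```python
# Delayed-ACK throughput model: the sender can have at most one
# send-buffer's worth of data (2 KB) outstanding, and each burst is
# acknowledged only after the receiver's 1/2-second dally expires.
send_buffer_bytes = 2 * 1024    # Vax's per-connection TCP send buffer
ack_delay_s = 0.5               # printer's delayed-ACK dally

bits_per_round = send_buffer_bytes * 8
throughput_bps = bits_per_round / ack_delay_s   # 32768.0, i.e. 32 Kbit/s
```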

The general problem is that the window flow control reflects the receiver's buffer
constraints, and there is no way for the sender's constraints to be transmitted
across the connection.  In the XNS Sequenced Packet Protocol, an ACK bit in a
packet performs this function; it allows the sender to explicitly request an
immediate acknowledgement for the last packet in the "send window."
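As a sketch of that XNS mechanism (the field names here are illustrative, not taken from the SPP spec): the sender flags the last packet it can send before its own buffer limit stalls it, asking the receiver to answer at once rather than dally.

```python
def mark_ack_requests(window_segments):
    """Set an explicit "ACK requested" bit on the last packet in the
    send window, so the receiver acknowledges immediately instead of
    delaying.  Illustrative sketch only."""
    flagged = [dict(seg, ack_request=False) for seg in window_segments]
    if flagged:
        flagged[-1]["ack_request"] = True   # last packet before we stall
    return flagged
```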

I've been trying to figure out ways to fix the problem.  One algorithm would be to
send the ACK after a dally, or immediately if the connection's packet input queue is
empty after processing an input packet.  This would allow an entire string of
packets to be received properly and would not tend to cause excessive ACKs.
It works well if the TCP process is dequeuing packets at a lower priority
than the internet process is enqueuing them.
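A sketch of that algorithm, with a toy connection object standing in for a real TCP (all names illustrative):

```python
from collections import deque

class Conn:
    """Toy connection object; stands in for a real TCP connection."""
    def __init__(self):
        self.input_queue = deque()   # segments the internet process enqueued
        self.acks_sent = 0           # immediate ACKs actually transmitted
        self.dally_pending = False   # a delayed ACK is outstanding

    def deliver(self, segment):
        pass                         # stand-in for normal receive processing

    def send_ack_now(self):
        self.acks_sent += 1          # ACK goes out at once...
        self.dally_pending = False   # ...cancelling any pending dally

    def schedule_dally_ack(self, delay_s):
        self.dally_pending = True    # ACK deferred, hoping to piggyback

def process_segment(conn, segment):
    # After handling one input segment, ACK immediately only if the
    # connection's input queue has drained; otherwise keep dallying so
    # a single ACK can cover the whole burst.
    conn.deliver(segment)
    if not conn.input_queue:
        conn.send_ack_now()
    else:
        conn.schedule_dally_ack(0.5)

def drain(conn):
    # The (lower-priority) TCP process dequeues what the internet
    # process enqueued.
    while conn.input_queue:
        process_segment(conn, conn.input_queue.popleft())
```

Feeding a three-segment burst through drain() yields a single immediate ACK, sent only after the last segment is processed.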

This doesn't work in systems like mine, which do not have separate processes with
queues between them for TCP connections.  In my case, the TCP module is upcalled
with each packet as a pseudo-interrupt, and all processing takes place either
during the upcall, during the client's downcall to get data, or during a timer
interrupt.  I can check the interface to see if additional packets have arrived,
but there is no assurance that they are from the right connection.  Another possibility
is to notice that an ACK is being sent after a dally, and decrease the offered window.
This would work (especially since the printer's only application is bulk data
input), but I am concerned about the scheme's lack of generality.  For one
thing, there is no way to increase the offered window once it has been decreased --
which sounds like a perfect way to develop silly window syndrome.

I'd like to see some other thoughts on this problem.  How do other implementations
of TCP deal with this situation?  Do they set tight timers?  Do they get trapped
by it (be honest...)?

Please respond to the list and sign all messages (otherwise I can't tell where
they come from on usenet-news); my incoming mail is not working well.

- Geof Cooper
  IMAGEN
-------

karn@MOUTON.ARPA (Phil R. Karn at mouton.ARPA) (01/14/86)

Delaying acknowledgements with timers has always bothered me.  Either the
application will send something in reply almost immediately (e.g., Telnet
echoing) or it will take so long that the acknowledgment timer will expire
with nothing to send, increasing the effective round trip time and limiting
throughput when the windows are small.

In an upcall environment, it would seem better to just note the fact that an
ack is owed and upcall the user, giving him a chance to send some data. Once
the user returns, you immediately invoke the tcp output routine which sends
the ack plus reply data, if any.  This avoids having to set a short timer,
which is often hard to do efficiently in an operating system environment.
Of course, the user cannot be permitted to block.
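A sketch of that arrangement, with a toy connection object (names are illustrative; the client upcall is assumed non-blocking):

```python
class UpcallConn:
    """Toy model of the scheme above: note that an ACK is owed, give
    the client an upcall to produce reply data, then run the TCP
    output routine once so the ACK and the reply share one segment."""
    def __init__(self, client_upcall):
        self.client_upcall = client_upcall   # must not block
        self.ack_owed = False
        self.segments_out = []               # segments "sent" by the model

    def tcp_output(self, data):
        # One pass of the output routine: if anything needs sending,
        # emit a single segment carrying the reply data plus the ACK.
        if data or self.ack_owed:
            self.segments_out.append({"ack": self.ack_owed, "data": data})
            self.ack_owed = False

    def receive(self, segment):
        self.ack_owed = True                  # an ACK is now owed
        reply = self.client_upcall(segment)   # user's chance to send data
        self.tcp_output(reply)                # ACK + data, no short timer
```

An echoing client gets its reply and the ACK into the same segment, with no short timer involved.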

Another approach is to look at the incoming PSH bit. Since PSH from the
remote TCP usually means "I think my client doesn't have any more data to
send for a while", you could start the acknowledgement delay timer only if
PSH is clear; otherwise you would immediately return an ACK. This would
have the additional advantage of allowing you to acknowledge a burst of
segments with a single response.
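As a sketch of that policy (illustrative only; a real TCP would also bound how long an ACK can be withheld):

```python
def acks_for_burst(segments):
    """Given incoming (seq, psh) segments, return which ones trigger an
    immediate ACK under the PSH heuristic: PSH set means answer right
    away; PSH clear means start (or continue) the dally timer, so one
    ACK can cover the whole burst."""
    immediate = []
    for seq, psh in segments:
        if psh:
            immediate.append(seq)   # PSH set: ACK at once
        # PSH clear: dally instead of acknowledging this segment
    return immediate
```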

Phil Karn