[comp.protocols.tcp-ip] TCP/IP slowdown and window field of packet

tom@litle.litle.com (tom hampton) (07/06/90)

We are having a slow-down problem between two TCP/IP programs.

Both programs do fine for a while, transferring some 50,000 bytes/sec,
until all of a sudden, they appear to hang and transfer no more than
1/100th of that.  According to a crude packet monitor we have, the 
problem seems to occur when the window field of the packet gets set
down (we don't know by whom) to 0.

1) Has anyone had this problem before?  We fear it might have something 
   to do with an interaction between Stratus (VOS) TCP/IP and ISC
   2.2 Unix for the 386.

2) Are we right in thinking that it has something to do with the 
   window field?

3) Any suggestions on how we might complete our diagnosis, or, better yet,
   treat the situation?


Thanks for your replies.

-- 
===============================================================================
 Tom Hampton, Mgr. New Technology, Litle & Co. | POB A218, Hanover, NH 03755
 603 643 1832 
-------------------------------------------------------------------------------
 Design is about figuring out what you won't be able to do.
-------------------------------------------------------------------------------
tom@litle.com  tom@litle.uucp  {backbone}!dartvax.dartmouth.edu!litle!tom
===============================================================================

solensky@interlan.Interlan.COM (Frank Solensky) (07/07/90)

In article <499@litle.litle.com> tom@litle.litle.com (tom hampton) writes:
   We are having a slow-down problem between two TCP/IP programs.

   Both programs do fine for a while, transferring some 50,000 bytes/sec,
   until all of a sudden, they appear to hang and transfer no more than
   1/100th of that.  According to a crude packet monitor we have, the 
   problem seems to occur when the window field of the packet gets set
   down (we don't know by whom) to 0.

Two things come to mind that might explain what you're seeing --

a) a "zero window" problem:
  When the receiver of the data has no space left in its receive window,
the sender stops transmitting, waiting for some indication that it is safe to
resume.  See if the packet trace indicates that:

  1) the receiver sends out another ACK with a non-zero window size
     shortly after some space opens up in the receive window (does your packet
     monitor display time intervals between the packets?).

  and/or

  2) if the sender sends a "zero-window" probe packet.  The sender should
     allow for the possibility that the above ACK is lost and send a single
     byte into what it believes to be a full window, expecting that the
     receiver will respond with an ACK carrying a non-zero receive window.
     This should occur within about one retransmit timer interval (see the
     Host Requirements RFC [RFC 1122], section 4.2.2.17 -- about halfway
     down page 92).
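
   In case a picture helps, here is a rough sketch of the probing behavior
   described in (2).  The structure and function names are made up for
   illustration -- this is not code from either vendor's stack:

    /*
     * Zero-window ("persist") probing, per RFC 1122 section 4.2.2.17.
     * All names here are hypothetical.
     */
    #include <stdio.h>

    struct tcp_conn {
        unsigned long snd_una;      /* oldest unacknowledged sequence no. */
        unsigned long snd_wnd;      /* send window last advertised by peer */
        unsigned long queued;       /* bytes waiting in the send queue */
        int           persist_time; /* ticks until the next probe */
    };

    /* stand-in for handing a segment to IP; real code builds a header here */
    static void
    send_probe(struct tcp_conn *tc)
    {
        printf("probe: 1 byte at seq %lu into a zero window\n", tc->snd_una);
    }

    /* called from the slow timer while the peer advertises a zero window */
    void
    tcp_persist_timeout(struct tcp_conn *tc)
    {
        if (tc->snd_wnd == 0 && tc->queued > 0) {
            /*
             * Send one byte beyond the edge of the advertised window.
             * The receiver must ACK it (even if it discards the byte),
             * and that ACK carries the current window -- so a lost
             * window-update ACK can't leave the connection hung forever.
             */
            send_probe(tc);
            if (tc->persist_time < 60)      /* crude backoff */
                tc->persist_time *= 2;
        }
    }

    int
    main(void)
    {
        struct tcp_conn tc = { 5000000UL, 0UL, 2048UL, 5 };
        tcp_persist_timeout(&tc);           /* simulate one timer expiry */
        return 0;
    }

   If your trace never shows a short segment like this going out while the
   window stays at zero, the sending side probably isn't running a persist
   timer, which would match the hang you're describing.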

b) a variant of the "silly window" problem [which I'll refer to as the
   "weird window syndrome"].

   This will show up on your packet monitor as a significant number of "odd"
   sized packets.  Once the receive window has opened up a bit, the window
   size announced back to the sender is some non-intuitive value (e.g., not a
   multiple of some power of 2 [or, in the true "silly window" case, some
   value that is very small relative to the receive window size]).  The
   sender transmits enough to fill up the receive window again, and the
   transmitting process unblocks until it has queued at least this amount of
   data again.  This is illustrated below:  "W" is the TCP window size, "A"
   is the ack number, "L" is the length of the packet, "S" is the sequence
   number.

	sender  receiver
        -----   ------

	    <-- W=0, A=5000000
            <-- W=2048, A=5000000    say that a zero-window has just reopened.

 --> send(1024)             S=5006144 is put near end of an 8 KB send window
        L=1024,S=5000000 -->
 --> send(1024)             S=5007168, filling send queue again.
        L=1024,S=5001024 -->

	    <-- W=0, A=5002048
            <-- W=1662 (??), A=5002048

        L=1024,S=5002048 -->
        L= 638,S=5003072 -->   the first part of a queued 1 KB buffer is sent

            <-- W=3092, A=5003710

                     User's send window is now opened only to 1662 bytes.

        L= 386,S=5003710 -->   the end of 2nd buffer is sent but not yet acked.

 --> send(1024)      New requests from host application allow 1024-byte and
 --> send(1024)      638-byte buffer to be pulled into send queue.  The tail
                     of the buffer is pulled down after some space opens up
                     again in the send window.

           . . .
        L=1024 -->
        L=1024,S=5006144 -->  first send() from above
        L= 638,S=5007168 -->  second send()
        L= 386,S=5007806 -->  the remainder of second, once sender was
				unblocked.


     Soon afterwards, the system falls into a self-perpetuating cycle of
filling its send window with a weird-sized packet, transmitting and getting
an ACK back that might not correspond to the boundary of yet another buffer.
The sender's performance suffers as a result of trying to keep packets in synch
with both the announced window sizes and the implied boundaries between each
send() request in the transmitting application program (since the boundaries
of each entry on the send queue closely resemble the send requests of the
initiating process).
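
     For reference, the standard receiver-side defense against the true
"silly window" case is the window-update rule in RFC 1122, section 4.2.3.3:
don't advance the advertised right edge until the newly available space
amounts to a full segment or a decent fraction of the buffer.  A rough
sketch of that rule (hypothetical names, not either vendor's code):

    /*
     * Receiver-side silly window avoidance, per RFC 1122 section 4.2.3.3.
     * Names are made up for illustration.
     */
    #include <stdio.h>

    struct rcv_state {
        unsigned long rcv_buff;  /* total receive buffer size */
        unsigned long rcv_nxt;   /* next sequence number expected */
        unsigned long rcv_adv;   /* right edge already advertised to peer */
        unsigned long avail;     /* space currently free in the buffer */
        unsigned long mss;       /* maximum segment size for the connection */
    };

    /* return the window to advertise in the next ACK */
    unsigned long
    tcp_recv_window(struct rcv_state *rs)
    {
        unsigned long old   = rs->rcv_adv - rs->rcv_nxt;   /* still open */
        unsigned long newly = rs->avail > old ? rs->avail - old : 0;

        /*
         * Only advance the right edge when the extra space amounts to a
         * full segment or half the buffer; otherwise keep announcing the
         * old (possibly zero) window, so the sender keeps sending
         * full-sized segments instead of 638- and 386-byte fragments.
         */
        if (newly >= rs->mss || newly >= rs->rcv_buff / 2)
            return rs->avail;
        return old;
    }

    int
    main(void)
    {
        /* only 638 bytes have drained: keep the window closed for now */
        struct rcv_state rs = { 8192UL, 5002048UL, 5002048UL, 638UL, 1024UL };
        printf("advertise %lu\n", tcp_recv_window(&rs));
        return 0;
    }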

     I saw packet traces a few months ago in which this occurs with one of
the vendors you mentioned, but I don't recall offhand what rev of software
was being used at the time.  The lower bound of the performance degradation
was approximately where "silly window" conditions were matched -- about 35%
of the receive window.  I brought their announced window sizes to their
attention at the time, so maybe the problem will be fixed Real Soon Now
(though, obviously, I'm not in a position to announce what their plans are).

     The way we worked around this is to reduce the bytes-acked count in
tcp's input when some but not all of the data in the send queue is
acknowledged, so that a queued buffer is never "partially" acknowledged.
This forces the send routine to hold off pulling weird-sized packets into
memory in the first place:  here, the second 638-byte packet shouldn't
occur, since the space in the send window won't be released until the rest
of the data in the first "fragmented" packet is acknowledged, thereby
preventing the send() request that would have created the perpetuating
fragment from ever starting.
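
     To make that a little more concrete, here is a rough sketch of the
idea.  The send-queue layout and names are made up -- this isn't the code we
actually shipped:

    /*
     * Trim the acked-byte count back to a queued-buffer boundary so that
     * no buffer is ever left partially acknowledged.  Hypothetical names.
     */
    #include <stdio.h>

    #define NBUFS 4

    struct send_queue {
        unsigned long len[NBUFS];   /* length of each queued send() buffer */
        int           nbuf;         /* buffers currently on the queue */
    };

    /*
     * Given the number of bytes an incoming ACK covers, return a possibly
     * smaller count that ends exactly on a buffer boundary.  tcp's input
     * routine would drop only that much from the queue, leaving snd_una
     * just short of the partially-acked buffer.
     */
    unsigned long
    trim_to_buffer_boundary(struct send_queue *q, unsigned long acked)
    {
        unsigned long covered = 0;
        int i;

        for (i = 0; i < q->nbuf; i++) {
            if (covered + q->len[i] > acked)
                break;                  /* next buffer is only partly acked */
            covered += q->len[i];
        }
        return covered;                 /* <= acked, on a boundary */
    }

    int
    main(void)
    {
        /* the queue from the trace: 1024, 1024, 1024 and 638 byte send()s */
        struct send_queue q = { { 1024, 1024, 1024, 638 }, 4 };

        /* an ACK covering 1662 bytes gets trimmed back to 1024 */
        printf("use %lu of 1662 acked bytes\n",
               trim_to_buffer_boundary(&q, 1662));
        return 0;
    }

The cost is that acknowledged data can sit on the queue a little longer, but
only whole buffers ever get pulled down, so the odd-sized fragments above
never get created.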

     Hope this helps..

--
					-- Frank Solensky / Racal InterLan
Red Sox magic number: 80