[comp.protocols.tcp-ip] Why Fewer Buffers

raj@hpindwa.cup.hp.com (Rick Jones) (02/11/91)

>In a different thread, it was asked why cisco might suggest using
>fewer buffers on slower links...

There is probably a more fundamental reason for cisco's statement
about using fewer buffers on slow links:

Keeping response time (RTT/SRT/etc) small.

If you give a TCP the space, it will use it in an effort to maximize
throughput - perhaps not 'consciously' ;-) but by always sending as
many packets at a time as it can/is 'allowed.' While this is desirable
for your file transfers, it might very well make the telnet/vt users a
trifle peeved ;-)
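
To put a rough number on that - my own back-of-the-envelope figures,
not anything cisco publishes - a packet arriving behind a full queue
waits for the entire queue to drain at line speed:

    /* Back-of-the-envelope queueing delay behind a full output queue.
     * All figures here are assumptions for illustration only. */
    #include <stdio.h>

    int main(void)
    {
        int    buffers   = 40;      /* packets already queued      */
        int    pktsize   = 576;     /* bytes per packet            */
        long   linkspeed = 9600;    /* link speed, bits per second */
        double delay     = (double) buffers * pktsize * 8 / linkspeed;

        printf("queueing delay: %.1f seconds\n", delay);  /* 19.2 */
        return 0;
    }

Nearly twenty seconds before a keystroke's packet even reaches the
wire - hence the peeved telnet users ;-)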

Of course, in this case, 'truth' is a relative thing. For example, if
you 'know' that you never get a burst of traffic through your router
greater than N packets, you might want to configure that many buffers
so as to avoid retransmissions. However, that value of N is not only
difficult to determine, it is also unlikely to remain static.
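
For what it's worth, one rule of thumb - my sketch, not a cisco
recommendation - is to size the queue near the link's bandwidth-delay
product:

    /* Sketch: buffers ~ bandwidth * round-trip time / packet size.
     * The link speed, RTT and packet size are assumed figures. */
    #include <stdio.h>

    int main(void)
    {
        long   bps  = 56000;   /* link speed, bits per second */
        double rtt  = 0.5;     /* round-trip time, seconds    */
        int    mtu  = 576;     /* bytes per packet            */
        int    bufs = (int) (bps * rtt / (mtu * 8) + 0.5);

        printf("suggested buffers: %d\n", bufs);   /* ~6 */
        return 0;
    }

Even that figure only holds until the RTT or the traffic mix changes,
which is rather the point.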

As congestion-controlled TCPs are becoming all the rage ;-),
configuring fewer buffers is probably a good thing - the TCPs will
still try to use 'all' the bandwidth/buffers, but you won't allow them
to build up such an enormous delay. File transfers still get through,
and interactive traffic does too.

Of course, tuning the relative bandwidth given to each is a further,
related matter - perhaps some info on how OS schedulers go about
sharing a CPU amongst processes would be enlightening ;-) Then we
could think of all those bits in the IP header(s) as specifying
'process priorities' on a pkt/pkt basis...
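
A minimal sketch of that idea, assuming the IP precedence bits (the
top three bits of the TOS octet, per RFC 791) and a two-queue split
that is entirely my own invention, not anybody's shipping code:

    /* Pick an output queue from the IP precedence bits (RFC 791:
     * top 3 bits of the TOS octet, the second byte of the header).
     * The two-queue split is an assumption for illustration. */
    #include <stdio.h>

    int output_queue(const unsigned char *ip_hdr)
    {
        int precedence = ip_hdr[1] >> 5;      /* 0 (low) .. 7 (high)  */
        return (precedence >= 4) ? 0 : 1;     /* queue 0 drains first */
    }

    int main(void)
    {
        unsigned char hdr[20] = { 0x45, 0xC0 };  /* precedence 6    */
        printf("queue %d\n", output_queue(hdr)); /* prints: queue 0 */
        return 0;
    }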

rick jones

___   _  ___
|__) /_\  |    Richard Anders Jones   | HP-UX   Networking   Performance
| \_/   \_/    Hewlett-Packard  Co.   | "It's so fast, that _______" ;-)
------------------------------------------------------------------------
Being an employee of a Standards Company, all Standard Disclaimers Apply

PIRARD@VM1.ULG.AC.BE (André PIRARD) (05/03/91)

On Mon, 11 Feb 91 02:33:35 GMT Rick Jones said:
>>In a different thread, it was asked why cisco might suggest using
>>fewer buffers on slower links...
>
>There is probably a more fundamental reason for cisco's statement
>about using fewer buffers on slow links:
>
>Keeping response time (RTT/SRT/etc) small.
>
>If you give a TCP the space, it will use it in an effort to maximize
>throughput - perhaps not 'consciously' ;-) but by always sending as
>many packets at a time as it can/is 'allowed.' While this is desirable
>for your file transfers, it might very well make the telnet/vt users a
>trifle peeved ;-)
>[...]

I still wonder about cisco's reason.
On one hand, they seem to have gone the TOS-faking way, with heuristics
aimed at response time. On the other, I don't think they believe many
hosts respond to source quench, and throwing interactive packets on top
of a full queue doesn't help much, does it (though organizing queues by
datagram size may help a bit). The only reason I can think of for a
small number of buffers is avoiding the duplicate-datagram thrashing
plague (hence a queue proportional to maximum throughput), but a single
mixed limit seems coarse, a per-host limit unfair to multiuser
machines, and a per-TCP-connection limit incomplete and badly layered.

Finally, on the grounds that what came in must eventually go out, I
wonder whether the most drastically effective heuristic wouldn't be to
detect and drop duplicate queued packets, with IP's unreliability as
the excuse for the rare mistaken match.
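
Something like the following test, say - the fields compared are my
own guess at what identifies a retransmitted TCP segment, and a real
router would also have to worry about fragments and IP options, which
I ignore here:

    /* Sketch: a retransmitted TCP segment repeats the same connection
     * and sequence number, so compare those.  Peeking at TCP from
     * inside a router is exactly the layering sin lamented below. */
    #include <stdio.h>

    struct seg {
        unsigned long  src, dst;      /* IP source/destination */
        unsigned short sport, dport;  /* TCP ports             */
        unsigned long  seq;           /* TCP sequence number   */
    };

    int is_duplicate(const struct seg *a, const struct seg *b)
    {
        return a->src == b->src && a->dst == b->dst &&
               a->sport == b->sport && a->dport == b->dport &&
               a->seq == b->seq;
    }

    int main(void)
    {
        struct seg queued = { 0x8BA50001UL, 0x8BA50002UL, 1023, 23, 4711UL };
        struct seg fresh  = queued;      /* the retransmitted copy */

        if (is_duplicate(&queued, &fresh))
            printf("drop the stale queued copy\n");
        return 0;
    }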

Layers, layers...

André PIRARD          SEGI, Univ. de Liège         139.165 IP coordinator
B26 - Sart Tilman     B-4000 Liège 1 (Belgium)            +32 (41) 564932
pirard@vm1.ulg.ac.be  alias PIRARD%BLIULG11.BITNET@CUNYVM.CUNY.EDU

BILLW@MATHOM.CISCO.COM (William "Chops" Westfield) (05/04/91)

    >If you give a TCP the space, it will use it in an effort to maximize
    >throughput - perhaps not 'consciously' ;-) but by always sending as
    >many packets at a time as it can/is 'allowed.' While this is desirable
    >for your file transfers, it might very well make the telnet/vt users a
    >trifle peeved ;-)

Well, not if it's a good (Van Jacobson slow-start) TCP.  In that case,
it will try to source packets at a continuous rate equal to the
bandwidth of the path, rather than sending out bursts at the bandwidth
of the local interface.  See Van's paper on congestion avoidance.
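
Roughly, the window growth looks like this - a sketch of the idea
only, counted in segments rather than bytes, with the loss recovery
from the paper left out:

    /* Sketch of slow-start / congestion-avoidance window growth,
     * simplified from Van's paper: real TCPs count in bytes and
     * halve ssthresh on packet loss, both elided here. */
    #include <stdio.h>

    int main(void)
    {
        int cwnd = 1;          /* congestion window, in segments */
        int ssthresh = 8;      /* slow-start threshold, assumed  */
        int rtt;

        for (rtt = 1; rtt <= 7; rtt++) {
            if (cwnd < ssthresh)       /* slow start: double per RTT  */
                cwnd = (2 * cwnd <= ssthresh) ? 2 * cwnd : ssthresh;
            else                       /* avoidance: one more per RTT */
                cwnd += 1;
            printf("rtt %d: cwnd = %d\n", rtt, cwnd);
        }
        return 0;              /* prints 2 4 8 9 10 11 12 */
    }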


    I still wonder about cisco's reason....  On the other, I don't think they
    believe many hosts respond to source quench...

Even hosts that don't listen to source quench have been known to slow down
their transmissions in the face of lost packets.  In fact, it is a matter of
current debate whether sending a source quench has any advantage over merely
dropping the packet (and it has the obvious disadvantage of generating extra
traffic on an already congested path).


    The only reason I can think of for a small number of buffers is
    avoiding the duplicate-datagram thrashing plague (hence a queue
    proportional to maximum throughput), but a single mixed limit seems
    coarse, a per-host limit unfair to multiuser machines, and a
    per-TCP-connection limit incomplete and badly layered.

Avoiding duplicate-datagram thrashing (DDT?) is THE major reason for
using smaller queues.  Such thrashing is unfortunately still very
prevalent in TCP/IP, even though the algorithms to help prevent it have
been developed.  Many other protocols are much dumber, and don't have
any facility at all for slowing down in the presence of congestion.  If
you believe that ANY queue with an average size of greater than one
represents a congested link, the actual queue sizes on routers don't
have to be very big.
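
For instance - a sketch of that rule of thumb only, with a made-up
smoothing constant and made-up queue-depth samples:

    /* Keep a running average of the output queue depth and call the
     * link congested when it stays above one.  The 1/8 smoothing
     * constant and the samples are assumptions for illustration. */
    #include <stdio.h>

    int main(void)
    {
        int    samples[] = { 0, 1, 0, 2, 3, 2, 4, 3 }; /* queue depths */
        int    i, n = sizeof samples / sizeof samples[0];
        double avg = 0.0;

        for (i = 0; i < n; i++) {
            avg += (samples[i] - avg) / 8.0;   /* exponential average */
            printf("depth %d  avg %.2f  %s\n", samples[i], avg,
                   avg > 1.0 ? "congested" : "ok");
        }
        return 0;
    }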


    Finally, on the grounds that what came in must eventually go out, I
    wonder whether the most drastically effective heuristic wouldn't be
    to detect and drop duplicate queued packets, with IP's unreliability
    as the excuse for the rare mistaken match.

Sure, given infinite CPU resources on the routers, one can do anything...

Bill Westfield
cisco Systems.
-------

PIRARD@VM1.ULG.AC.BE (André PIRARD) (05/07/91)

On Fri 3 May 91 14:08:37-PDT you said:
>Avoiding duplicate-datagram thrashing (DDT?) is THE major reason for
>using smaller queues.  Such thrashing is unfortunately still very
>prevalent in TCP/IP, even though the algorithms to help prevent it have
>been developed.  Many other protocols are much dumber, and don't have
>any facility at all for slowing down in the presence of congestion.  If
>you believe that ANY queue with an average size of greater than one
>represents a congested link, the actual queue sizes on routers don't
>have to be very big.

Glad to know why you recommend small queues, and the theories behind
it. Unhappily, I still don't know how much bigger than 1 they should
be, and I guess no one can tell. Indeed, not all TCPs are good. I've
seen one transmit four times the data needed when doing FTP alone on a
slow link. So perhaps even 0 would do. Routers can help when that link
is not on the culprit's own interface.

>    Finally, on the grounds that what came in must eventually go out,
>    I wonder whether the most drastically effective heuristic wouldn't
>    be to detect and drop duplicate queued packets, with IP's
>    unreliability as the excuse for the rare mistaken match.
>
>Sure, given infinite CPU resources on the routers, one can do anything...

I doubt it would spend much CPU on mismatches, especially since the
line is slow anyway and the feature would be used only on such lines.
The larger expense on matches would be pure benefit.
But indeed, the transport level helping the lower layers a bit would
surely help.

>Bill Westfield
>cisco Systems.

André PIRARD          SEGI, Univ. de Liège         139.165 IP coordinator
B26 - Sart Tilman     B-4000 Liège 1 (Belgium)            +32 (41) 564932
pirard@vm1.ulg.ac.be  alias PIRARD%BLIULG11.BITNET@CUNYVM.CUNY.EDU