[comp.protocols.tcp-ip] fragmenting broadcasts

mr-frog@fxgrp.UUCP (Dave Pare) (05/03/89)

I was the original poster who started this whole mess, and I suspect
that if I'd added some context, I could have made things somewhat clearer.
Asking people to comment in an information vacuum was the wrong approach.

So here is some context:

My task is to distribute fairly large volumes of data to sites on
an Ethernet.  One central host takes an external data feed, massages
it a bit, and then sends it out on the main wire.  In this application,
each host on the wire is interested in receiving a major portion of
what is being broadcast; no host is uninterested in these broadcast
messages.
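
For the curious, here is roughly what the sender looks like.  This is
only a sketch -- the port number and broadcast address below are
placeholders for illustration, not what we actually ship:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int s, on = 1;
    struct sockaddr_in dst;
    char buf[8192];             /* one massaged message from the feed */

    if ((s = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
        perror("socket");
        exit(1);
    }

    /* Without SO_BROADCAST, sendto() to a broadcast address is refused. */
    if (setsockopt(s, SOL_SOCKET, SO_BROADCAST, (char *)&on, sizeof(on)) < 0) {
        perror("setsockopt");
        exit(1);
    }

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5150);                        /* placeholder port */
    dst.sin_addr.s_addr = inet_addr("192.168.1.255");  /* placeholder bcast */

    memset(buf, 'x', sizeof(buf));

    /* A datagram this size is what the IP layer would have to fragment
     * to fit a 1500-byte-MTU Ethernet. */
    if (sendto(s, buf, sizeof(buf), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("sendto");

    close(s);
    return 0;
}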

The network is being used as a broadcast data distribution mechanism.
My program will be the only broadcaster, perhaps with the exception
of occasional vendor-supported broadcasters like rarpd.

Given that I know what I'm doing, and that every host really and truly
does want to see almost all the data broadcast, can people understand
why I am perturbed by the limitation on the size of my broadcast
transmissions?

I don't dispute that on a general-purpose computing network, it is
best to have the general rabble restrained by rigid host requirements
which limit the ability to do mischief.  I don't believe that I or the
clients who purchase our product fit the above description, and so
in this case I feel frustrated by the restrictions.

So, with all this in mind, do people still maintain that the most
reasonable way for the network to behave is to force my program to
perform the fragmentation/reassembly task in user code?  That means
I have to switch into kernel mode for each 1K packet, instead of
buffering things up and sending 4K or 8K packets.
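
Concretely, the rule turns one sendto() of a whole message into a loop
like the one below.  The CHUNK figure and the names are illustrative
only, not our real code:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define CHUNK 1024      /* illustrative chunk size, not a real MTU figure */

/* Send one large message as a series of small datagrams, making a
 * separate system call for each chunk instead of a single sendto()
 * of the whole buffer. */
ssize_t
send_chunked(int s, const char *msg, size_t len, const struct sockaddr_in *dst)
{
    size_t off, n;

    for (off = 0; off < len; off += CHUNK) {
        n = (len - off < CHUNK) ? (len - off) : CHUNK;

        /* One trip into the kernel for every 1K of data. */
        if (sendto(s, msg + off, n, 0,
                   (const struct sockaddr *)dst, sizeof(*dst)) < 0)
            return -1;
    }
    return (ssize_t)len;
}

And the receiving side, of course, has to paste those chunks back
together again -- which is exactly the reassembly half of the job.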

Another thing: I've never seen a datagram get lost on a LAN.  They
do get discarded when the receiving process can't drain the receive
buffer space quickly enough, though.

I really don't want to sound like a whiner.  I like UDP.  I'm just
hoping that Someone Who Matters will read this posting, and decide
that the best way to interpret the Host Requirements document is to
make the default setting "no fragmentation", but to allow a kernel
global variable to remove this restriction.

Dave Pare

henry@utzoo.uucp (Henry Spencer) (05/04/89)

In article <1041@fxgrp.UUCP> mr-frog@fxgrp.UUCP (Dave Pare) writes:
>Given that I know what I'm doing, and that every host really and truly
>does want to see almost all the data broadcast, can people understand
>why I am perturbed by the limitation on the size of my broadcast
>transmissions?

I think you have made a very good case for why you are a special case and
should deliberately exceed the Host Requirements.  I don't think you've
made a particularly good case for a general relaxation of the rules,
given that your situation would seem to be quite unusual.
-- 
Mars in 1980s:  USSR, 2 tries, |     Henry Spencer at U of Toronto Zoology
2 failures; USA, 0 tries.      | uunet!attcan!utzoo!henry henry@zoo.toronto.edu

braden@VENERA.ISI.EDU (05/05/89)

Dave,

I think there is probably a bug in the current wording in Section 3.3.3
of the Host Requirements RFC on fragmentation.  It was not meant to
outlaw intentional IP fragmentation in the source host -- but it certainly
did mean to DISCOURAGE it!

I gather your application will be used ONLY across a single Ethernet, with no
gateway hops, in an environment which is sufficiently constrained that
you assume reliable delivery from the link layer.  You're sure no site
will ever try to run it through a gateway (leading to congestive losses).
There is an enormous amount of experience showing that such ideal environments
don't last; customers want to use the fact that these are INTERNET
protocols, and your assumptions collapse into a "puddle of glup."
You then discover you have to provide some reliable delivery in the
application, in which case the fragmentation/reassembly needs to be
in the same layer, or performance becomes terrible.
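
To make the point concrete: when IP does the fragmenting, the loss of any
one fragment discards the whole datagram, so the application has to
retransmit the entire message; when the application does its own
fragmenting, it can tag each piece and resend only what was lost.  A rough
sketch of the kind of per-fragment header such an application ends up
defining (the layout and field names here are invented for illustration):

#include <stdint.h>

/* With a header like this on each application-level fragment, a receiver
 * can tell exactly which piece of a message is missing and ask for just
 * that piece, rather than losing the whole datagram to one dropped
 * IP fragment. */
struct app_frag_hdr {
    uint32_t msg_id;      /* which application message this belongs to */
    uint16_t frag_no;     /* position of this fragment in the message  */
    uint16_t frag_count;  /* total number of fragments in the message  */
    uint16_t frag_len;    /* payload bytes following this header       */
};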

The Host Requirements Working Group would welcome your input on the
wording of the section you are concerned with.  Our mailing list is
ietf-hosts@nnsc.nsf.net.

Bob Braden

medin@NSIPO.NASA.GOV ("Milo S. Medin", NASA ARC NSI Project Office) (05/07/89)

>Date: Thu, 4 May 89 10:15:25 PDT
>From: braden@venera.isi.edu

...
>I gather your application will be used ONLY across a single Ethernet, with no
>gateway hops, in an environment which is sufficiently constrained that
>you assume reliable delivery from the link layer.  You're sure no site
>will ever try to run it through a gateway (leading to congestive losses).
>There is an enormous amount of experience showing that such ideal environments
>don't last; customers want to use the fact that these are INTERNET
>protocols, and your assumptions collapse into a "puddle of glup."
>You then discover you have to provide some reliable delivery in the
>application, in which case the fragmentation/reassembly needs to be
>in the same layer, or performance becomes terrible.

...

>Bob Braden

It's not just a case of a single Ethernet without ever going through a
gateway.  Many people these days bridge Ethernets together with low-cost
bridges that may drop packets under severe load, and others bridge
Ethernets together with low-speed (56 Kb) serial lines.  I am aware of
circumstances where some bridges will drop packets from their queues
during topology changes, as well as under congestion.  Losses on the
serial lines between bridges introduce lossage as well...

So the problem applies to any sort of packet switch in the path.  Gateways
certainly aren't the only source of packet attenuation!

					Thanks,
					   Milo