keogh@nixeid.UUCP (Paul Keogh) (12/06/90)
I am trying to determine, quantitatively or qualitatively, the effect of
reducing the bandwidth of an IP link. Let's assume I'm running FTP over
Ethernet, with resulting throughput varying between 50 Kbytes/s and
100 Kbytes/s, averaging out at say 80 Kbytes/s. I then want to replace
the 10 Mbit/s media with 2 Mbit/s and then with 64 Kbit/s (using
bridges). Obviously at 10 Mbit/s the average speed of an FTP transfer is
not limited by the physical media, but what happens at 2 Mbit/s and
64 Kbit/s?

I am trying to establish the relationship:

    Speed(FTP) = G(Speed(physical media)) + F(other factors)

where F(other factors) is assumed the same for all instances of the
test. Does anyone have any ideas about the nature of G() as described
above?

Thanks,
Paul Keogh
keogh@u.nix.ie
--
Paul Keogh,                              *
Nixdorf Computer Research & Development, * Ha! you think it's funny,
Dublin, Ireland.                         * Turning rebellion into money
Net: keogh@u.nix.ie                      *
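A minimal sketch of one candidate G(), assuming 40-byte TCP/IP headers
and a 1460-byte Ethernet-sized payload (both assumptions), and treating
the observed 80 Kbytes/s as a fixed end-system ceiling. It ignores
window and round-trip-time effects entirely:

    # Crude first guess at G(): throughput is the end-system ceiling
    # or the link's payload capacity, whichever is smaller.
    HEADER_BYTES = 40      # TCP + IP headers, no options (assumption)
    PAYLOAD_BYTES = 1460   # typical Ethernet MSS (assumption)
    HOST_LIMIT = 80000     # bytes/s the end systems managed at 10 Mbit/s

    def ftp_speed(link_bits_per_sec):
        """Estimate FTP throughput in bytes/s on a link of the given speed."""
        efficiency = PAYLOAD_BYTES / (PAYLOAD_BYTES + HEADER_BYTES)
        link_payload_rate = (link_bits_per_sec / 8) * efficiency
        return min(HOST_LIMIT, link_payload_rate)

    for speed in (10000000, 2000000, 64000):
        print("%10d bit/s -> %8.0f bytes/s" % (speed, ftp_speed(speed)))

Under this crude model the 2 Mbit/s link is still not the bottleneck
(about 243 Kbytes/s of payload capacity), but at 64 Kbit/s the link
term G() dominates, giving roughly 7.8 Kbytes/s.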
krol@ux1.cso.uiuc.edu (Ed Krol) (12/07/90)
Just remember that as the link speed decreases, the IP (and presumably
TCP) headers take up a greater percentage of the available bandwidth,
leaving less room for real data. (At a link speed of about 500 baud
you have just enough bandwidth to send TCP/IP headers and do HDLC
bit-stuffing -- sorry, no room for data.)
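The arithmetic behind the 500-baud remark seems to assume a fixed
packet rate rather than bulk transfer -- say one Telnet keystroke
packet per second. A sketch under that assumption (the header size,
framing bytes, and stuffing factor are all illustrative, and the next
follow-up disputes the premise for bulk data):

    # Header bandwidth at a fixed packet rate (e.g. Telnet keystrokes).
    TCP_IP_HEADER = 40     # bytes, no options (assumption)
    HDLC_FRAMING = 3       # flag + FCS bytes, roughly (assumption)
    STUFFING = 1.2         # worst-case bit-stuffing: 1 extra bit per 5

    def header_bits_per_sec(packets_per_sec):
        """Bits/s consumed by headers and framing alone."""
        bits_per_packet = (TCP_IP_HEADER + HDLC_FRAMING) * 8 * STUFFING
        return bits_per_packet * packets_per_sec

    # ~413 bits/s: headers alone nearly saturate a ~500-baud link.
    print(header_bits_per_sec(1))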
karn@envy.bellcore.com (Phil R. Karn) (12/11/90)
In article <1990Dec7.104135.12@ux1.cso.uiuc.edu>, krol@ux1.cso.uiuc.edu
(Ed Krol) writes:
|> Just remember that as the link speed decreases, the IP (and
|> presumably TCP) headers take up a greater percentage of the
|> available bandwidth, leaving less room for real data. (At a link
|> speed of about 500 baud you have just enough bandwidth to send
|> TCP/IP headers and do HDLC bit-stuffing -- sorry, no room for data.)

Nonsense. The overhead of a TCP/IP link is entirely determined by the
amount of data in each packet vs. the size of the headers. The link
speeds don't enter into it at all.

When the application queues many small packets for transmission, a TCP
that implements the Nagle algorithm (now a required part of the
standard) actually *reduces* the header overhead automatically as the
link speed decreases. Rather than launch multiple small packets into a
narrow pipe, a Nagle TCP bunches them together by delaying transmission
of new data when there is already unacknowledged data in the pipe;
multiple packets are launched only when they are maximum-sized (i.e.,
have minimum header overhead).

Phil
smb@ulysses.att.com (Steven Bellovin) (12/12/90)
In article <1990Dec10.181110@envy.bellcore.com>, karn@envy.bellcore.com
(Phil R. Karn) writes:
> Nonsense. The overhead of a TCP/IP link is entirely determined by the
> amount of data in each packet vs. the size of the headers. The link
> speeds don't enter into it at all.

However, there are problems with large packets on slow links. For
example, on a 9.6 Kbit/s link, a 1 Kbyte packet takes just under a
second to transmit. If you have a smart router that tries to give
priority to short packets (i.e., Telnet keystrokes and echoes), it will
be helpless -- it can't pre-empt the packet that's already going.
(Well, it could, I suppose, but that wouldn't be a very good idea...)

There's a second problem if you're going several hops at low speeds. A
router can only start sending a packet once the whole packet has been
received. Thus, if you have to go 4 hops at 9600 bps, the total time to
clock the bits onto the wire is about 4 seconds. If the same data were
broken up into 256-byte chunks (or fragments), you can get considerable
overlap, as you can have several routers transmitting simultaneously.
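A quick sketch of that store-and-forward arithmetic, assuming ideal
identical links with no framing overhead or queueing: the first
fragment must cross every hop, and the remaining fragments pipeline
behind it, one fragment-time each.

    def transfer_time(total_bytes, fragment_bytes, hops, bits_per_sec):
        """Store-and-forward delivery time across identical hops
        (assumes total_bytes divides evenly into fragments)."""
        fragments = -(-total_bytes // fragment_bytes)  # ceiling division
        frag_time = fragment_bytes * 8 / bits_per_sec
        return (hops + fragments - 1) * frag_time

    # One 1024-byte packet over 4 hops at 9600 bps: ~3.4 s
    print(transfer_time(1024, 1024, 4, 9600))
    # The same data in 256-byte fragments: ~1.5 s
    print(transfer_time(1024, 256, 4, 9600))

The smaller fragments more than halve the end-to-end delay here, at
the cost of the extra per-fragment header overhead discussed earlier
in the thread.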