[comp.protocols.tcp-ip] DDN Backbone bandwidth vs. speed

WANCHO@SIMTEL20.ARPA (10/07/87)

As the host administrator for this machine, I often get asked why the
network is so slow.  Part of the answer is that this host is but a
2040 with 512KW, soon to be upgraded to a 2065 with 4MW.  That should
make a significant difference in that we will finally be able to run
the TCP service locked into the highest queue without swamping the
system.

But, for the rest of the answer, I point out that the DDN backbone is
still operating at 56Kbps with nodes which apparently cannot handle
higher rates.  That configuration may have been adequate when the net
consisted of about 300 to 500 hosts and the protocol was the more
efficient, but less flexible, NCP (in my opinion).  Now, we have an
order of magnitude more hosts sending TCP traffic through the net, and
the links are still 56Kbps.  Oh, there may be more links, more
cross-country paths, and even satellite hops added on a weekly basis
to handle the traffic.  But, the basic *speed* is still 56Kbps,
although the bandwidth *may* be greater.

Meanwhile, campus LAN architects sneer at anything less than 10Mbps
to get any work done.
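
To put some very rough numbers on that gap (back-of-the-envelope only,
ignoring protocol overhead, queueing, and retransmission), here is a
little C program that works out how long an illustrative 1MB file takes
at a 56Kbps trunk rate, at T1, and at Ethernet speed:

/* Back-of-the-envelope transfer times for a 1MB file at three
 * line rates.  Ignores protocol overhead, queueing delay, and
 * retransmissions -- illustrative only, not a measurement.  */
#include <stdio.h>

int main(void)
{
    double bits = 1024.0 * 1024.0 * 8.0;                /* a 1MB file */
    double rate[] = { 56000.0, 1544000.0, 10000000.0 }; /* 56K, T1, Ethernet */
    char *name[] = { "56Kbps trunk", "T1 (1.544Mbps)", "10Mbps Ethernet" };
    int i;

    for (i = 0; i < 3; i++)
        printf("%-16s %8.1f seconds\n", name[i], bits / rate[i]);
    return 0;
}

That comes out to roughly 150 seconds, 5 seconds, and under a second,
respectively.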

Is it really unreasonable to ask why the backbone hasn't been upgraded
to at least T1 service?  Are there any plans for such an upgrade?  If
not, then what?  Still more 56Kbps links?  Does that *really* solve
the problem?  What should I tell my users (one in particular) to
expect, and when?

--Frank

PERRY@VAX.DARPA.MIL (Dennis G. Perry) (10/09/87)

Frank, all it takes is money.  Do you have some, or is DARPA/DCA supposed
to foot the bill?

dennis
-------

ron@TOPAZ.RUTGERS.EDU (Ron Natalie) (10/10/87)

I suppose the biggest reason campus LAN experts work with higher
bandwidths is that we can.  But it's really more reasonable to expect
that you need higher rates on local links because there is more traffic.
A quick check of the BRL gateway shows that most of the traffic never
leaves BRL (yet the gateway was still the sixth busiest MILNET host).
Nobody really expects earth-shattering response from the MILNET anymore
(right or wrong).  Most of the traffic is mail, which all happens in the
background.

DCA was probably left behind for a while in network planning because of
the overwhelming success of the INTERNET.   First, the amount of traffic
for any host has gone way up.  Seven years ago, when BRL brought up its
first ARPANET host, there were maybe a dozen people in the lab who used
the ARPANET services.  Now near a thousand people rely on electronic mail
daily.  Second, since IP became available five years ago, MILNET node traffic
is no longer limited by the traffic a single host can generate.  You
could have one machine front for an entire installation.  I'm not sure
DCA fully comprehended that.  I remember them once telling me that they
liked BRL because we had only one host on the net.  Of course, that host
(actually two) fronts for dozens of Ethernets, Proteon Ring Nets,
Hyperchannels, and even a six-IMP ARPANET clone.  On these networks are
scads of workstations, super-minis, and two CRAYs.

It's clear that the whole thing is over capacity.  Between gateways and
more and more users relying on network service, the old traffic estimates
are way out of line.  I'm not sure what can really be done, though.  Trunks
could always be added, which is probably the most expedient option.  56K is
not bad when you have enough connectivity.  The IMPs certainly won't deal
with T1, and more sophisticated switches such as the Butterflies are probably
a long way from the MILNET.  The new end-to-end protocol and the mailbridge
upgrades have not yet been fielded, let alone any drastic change to the
network topology.
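
To be concrete about the connectivity point (rough figures only, ignoring
trunk framing and routing overhead): adding 56K trunks raises the aggregate
capacity of the net, but any single datagram still serializes onto the wire
at 56Kbps.  Something like this shows the shape of it:

/* Rough comparison of aggregate capacity vs. per-packet serialization
 * delay.  Figures are illustrative; real trunks carry framing and
 * routing overhead not counted here.  */
#include <stdio.h>

#define TRUNK_BPS 56000.0       /* one 56Kbps trunk */
#define T1_BPS    1544000.0     /* one T1 circuit */
#define PKT_BITS  (576 * 8)     /* a 576-byte datagram */

int main(void)
{
    int n;

    for (n = 1; n <= 8; n *= 2)
        printf("%2d trunks: %9.0f bps aggregate, %5.1f ms per packet\n",
               n, n * TRUNK_BPS, 1000.0 * PKT_BITS / TRUNK_BPS);
    printf("one T1:    %9.0f bps aggregate, %5.1f ms per packet\n",
           T1_BPS, 1000.0 * PKT_BITS / T1_BPS);
    return 0;
}

More trunks buy more total throughput; only a faster circuit buys down the
per-hop delay.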

Oh well.  I've got to go hook up another T1 line.

-Ron