[comp.protocols.tcp-ip] Internet Uselessness

dpk@BRL.ARPA.UUCP (07/09/87)

Well, I am again trying to use the Internet to accomplish real
work and finding it almost impossible due to the almost non-existent
throughput.  IETF*/BBN/DCA/DARPA have yet to be able to solve the
performance problems that have been plaguing the Internet for months.
There was a time when I could easily sit here at BRL on the East Coast
and manipulate files and programs at Berkeley with ease.  This is no
longer possible to do reliably.  It has gotten to the point now that
phone calls at 1200 baud will get more work done.  I am getting ready
to dust off my UUCP...

Lest some of you misunderstand, there are parts of the Internet
that are reasonably healthy, such as the MILNET proper, which,
while it's had its problems (e.g. C300 software that falls on its
face), in general works fairly well.  But the MILNET is only a
part of the much larger Internet.  The MILNET alone is of little
use to us if we cannot talk successfully to the rest of the Internet.

Under one hat I am a member of IETF, so I am aware of the nature of many
of these problems.  But under my other hat as a simple "user" of the
network, I don't care why it's broken; it just needs to get fixed.
IETF understands many of the problems, but seems to lack the power to
get those with the resources to take action.  There are ways to fix the
network, but it takes money and priority on the problem.  This needs to
happen soon.  The Internet is needlessly getting a black eye because
it's not working.

Telling me that the load went up and therefore performance went
down is no excuse.  If we expected to be successful, we should
have expected the load.

The Internet is too important to all of us (including the military)
to let this continue.  Do DCA/BBN/DARPA have any comments on this?
Who do we have to push to get this fixed?

Cheers,
	-Doug-

  Doug Kingston
  Advanced Computer Systems Team
  Systems Engineering and Concepts Analysis Division
  U.S. Army Ballistic Research Laboratory

* - Internet Engineering Task Force

cracraft@ccicpg.UUCP (Stuart Cracraft) (07/10/87)

In article <8707090344.aa02151@SEM.BRL.ARPA> dpk@BRL.ARPA (Doug Kingston) writes:
>Well, I am again trying to use the Internet to accomplish real
>work and finding it almost impossible due to the almost non-existent
>throughput.
>
There have been some real performance problems lately: multi-second
delays between characters on TAC access, even from a TAC local to the
area code of the destination host.

>Lest some of you misunderstand, there are parts of the Internet
>that are reasonably healthy, such as the MILNET proper...
>But the MILNET is only a part of the much larger Internet.  
>The MILNET alone is of little use to us if we cannot talk 
>successfully to the rest of the Internet.
>
The situation described above occurred on a MILNET TAC.
Response from NIC and the host (a central hub) indicated
that it was a known problem that BBN is actively working on.
TAC performance has improved somewhat since then, though
multi-second delays still occur frequently.

>...
>Who do we have to push to get this fixed?
Talk with NIC and BBN.

    Stuart

LAWS@rsre.mod.UK (John Laws, on UK.MOD.RSRE) (07/13/87)

Doug,
 
I also have suffered (not quietly) for the last 10 months with the
terrible state of the Internet. I know some people in DARPA/BBN/DCA
but seemingly not the ones who "can make it happen" - (no offence
intended to those I know).  More than the Internet getting a black eye,
consider the following.
 
The Internet uses TCP/IP. There are elements of the Internet (Arpanet,
Milnet) that are under-resourced for the traffic - the performance can
be so bad that my remote host times out on me when I'm trying to log in.
TCP/IP is pushed by many members of DOD as the right protocol to be
used for networks in a military environment. Yes, they are going to
transition to ISO OSI - same protocols almost, TP4 and IP (via NBS).
 
Now my vision of a military Internet is that it starts off in
peacetime with JUST enough resource for peacetime traffic (budget
problems on the Defence Vote; we'll plug it next year, etc. (I think
you call it Get Well Later in the US)).  Then the action starts to
warm up a little (the Gulf - Iraq/Iran) and the Internet falls over
the knee of the curve.
 
In part this is a consequence of some very stupid implementations of
TCP/IP, some elements of the protocol which, if implemented in
a straightforward way, aggravate congestion once it has occurred, and
seemingly a complete failure to develop some other concepts (access
control sensitive to traffic volume, precedence and priority) to the
same degree. While the PTT X25 solution does potentially have its
problems in a hostile environment (note - X25 is more an interface than
an end-to-end protocol and the spec does not forbid self-healing nets
being built - it just needs the market to pay for it) its performance
is generally to a high standard.  For good reason: revenue depends on
connection time AND traffic volume.
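 
To make the "straightforward implementation" point concrete, here is
the classic offender in miniature (a sketch in C only -- names and
numbers are invented for illustration, not from any real
implementation):
 
    /* The classic offender: retransmit on a fixed timer,
     * regardless of network state.  When the net slows down
     * under load, every connection re-offers its traffic
     * instead of backing off, and the congestion feeds itself. */

    #define RTO_TICKS 3            /* fixed timeout: 3 clock ticks */

    struct tcb {                   /* toy connection block         */
        int unacked;               /* unacknowledged data pending? */
        int ticks;                 /* ticks since last (re)send    */
    };

    extern void retransmit();      /* resend the oldest segment    */

    void
    timer_tick(tcb)
    struct tcb *tcb;
    {
        if (tcb->unacked && ++tcb->ticks >= RTO_TICKS) {
            retransmit(tcb);       /* same segment, into a full net */
            tcb->ticks = 0;        /* ...and again 3 ticks from now */
        }
        /* A kinder implementation would at least back the timer
         * off exponentially while the congestion persists. */
    }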
 
Maybe the answer is to fund the Internet a different way - the PTT
way.
 
John

PADLIPSKY@A.ISI.EDU (07/17/87)

[The following got to John, but not to the List, thanks to the vagaries
of trying to Answer on his foreign To and CC fields:]

John--
   As I understand your msg, "the PTT way" amounts to Overcharge So That
You Can Overengineer.  The Internet way, however, doesn't have to be
Undercharge And You Must Underengineer.  How about, instead, Undercharge
And You'd Better Engineer A Lot More Cleverly?  For example, I recall
that in the Multics NCP [sic] I had a privileged primitive that let me
ignore packets from a Host on a blacklist at interrupt time (which was
actually used once, during the infamous Network Horror Story Number 5):
Why not use a similar trick in Gateways?  Say, on a 30-second cycle (or
whatever time value seems appropriate) check the per-Host packet counters
(added for the purpose) and put any Host that's exceeded a threshold
count on the List for the next cycle.   A tad arbitrary, perhaps, but
much easier to implement than trying to spot and throttle sf-lovers file
transfers/SMTPs.  (The threshold should probably be a fraction of
total packets for the period rather than a set number, so that periods
of relative inactivity for other Hosts behind the Gateway can be
catered to--but this is just a "frinstance," not a spec, so no matter.)
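 
In code, the frinstance might look something like this (a sketch in C,
with all names, sizes, and thresholds invented for illustration -- not
actual Gateway code):
 
    /* Every 30-second cycle, tally per-Host packet counts; any
     * Host whose share of the cycle's traffic exceeds the
     * threshold fraction goes on the List for the next cycle,
     * and its packets get dropped on the floor (which Gateways
     * are allowed to do anyway). */

    #define NHOSTS     64          /* Hosts behind this Gateway    */
    #define THRESH_NUM 1           /* blacklist any Host that sent */
    #define THRESH_DEN 4           /* over 1/4 of the period total */

    long pkt_count[NHOSTS];        /* per-Host packet counters     */
    int  on_list[NHOSTS];          /* 1 = drop this Host's packets */

    /* Called from the clock routine once per 30-second cycle. */
    void
    cycle_end()
    {
        long total = 0;
        int h;

        for (h = 0; h < NHOSTS; h++)
            total += pkt_count[h];
        for (h = 0; h < NHOSTS; h++) {
            /* Fraction-of-total threshold, so periods of relative
             * inactivity don't blacklist anybody by accident. */
            on_list[h] = (total > 0 &&
                pkt_count[h] * THRESH_DEN > total * THRESH_NUM);
            pkt_count[h] = 0;      /* fresh count for next cycle   */
        }
    }

    /* Called per arriving packet; nonzero means drop it. */
    int
    should_drop(h)
    int h;                         /* index of the sending Host    */
    {
        pkt_count[h]++;
        return on_list[h];
    }
 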
   The example isn't necessarily to be taken literally, of course; it's
just meant to suggest a principle to the effect that you can combat
resource hogging creatively without violating protocol (since Gateways
are explicitly allowed to drop packets on the floor) on the one hand
yet without having to "go commercial" on the other hand.
   Actually, I've long suspected that the real problem with the
Internet Way is that we draw so heavily on Academe that clever
engineering is somehow suspect because it's not Computer Sciencey
enough.  I mean, granted we couldn't have afforded to throw
as much hardware as we might have liked at Gateways from the outset,
but it still seems to me that all the pious queuing-theoretic stuff
about one buffer is enough and even infinite buffers aren't enough
and the like has clouded the issue so much that we still (unless
the relevant Task Force is about to step in) haven't come up with
a rough and ready congestion control approach because we "know"
it wouldn't be "optimal".  Well, as I've said before Optimality
Differs According To Context.
   cheers, map
P.S.  As I recall, the X.25 spec certainly _implies_ that there won't
be any dynamic routing, both in one of the error returns and, for that
matter, in the whole "virtual circuit" premise (since destination addrs
aren't carried in ordinary packets), but, yes, They could do it right
despite all that if They had/wanted to.  Does anybody know of any
implementations that do do it right, though?  (And even if there are some,
so what?  We could have some segments of the Internet which used
Multics Gateways if we wanted/had to.  Some old saw about weakest links
still seems to apply....)
-------

malis@CC5.BBN.COM (Andy Malis) (07/20/87)

Mike,

As a whole, I think the internet community has been doing "clever
engineering" for quite a number of years now.  However, there
comes a time when offered load just overwhelms the resources
devoted to the task.  We are very close to that point on the
ARPANET, even though we just made the routing algorithm more
"clever" and added another transcontinental trunk.

We are currently in the process of implementing congestion
control in the PSNs.  This should optimize the total available
throughput of the network (at the expense of backing flows into
source hosts if necessary).
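 
In the crudest possible miniature -- emphatically not the actual PSN
scheme, just the flavor of "backing flows into source hosts", with all
numbers invented:
 
    /* When subnet buffers run short, stop taking new packets
     * from the local host interface and let the host's own
     * flow control absorb the backlog. */

    #define NBUFS    128           /* subnet buffer pool (assumed) */
    #define HI_WATER 112           /* stop accepting host traffic  */
    #define LO_WATER  96           /* resume accepting it here     */

    int bufs_in_use;               /* buffers currently occupied   */
    int host_if_open = 1;          /* 1 = accepting host packets   */

    void
    buffer_check()
    {
        if (bufs_in_use >= HI_WATER)
            host_if_open = 0;      /* push the load back on hosts  */
        else if (bufs_in_use <= LO_WATER)
            host_if_open = 1;
    }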

Finally, the X.25 spec really says nothing about what goes on in
the subnet; it is just an interface spec between a DTE and its
DCE.  Internally, the PSNs use virtual circuits to support both
AHIP (1822) and X.25 traffic while using good old dynamic
adaptive routing to get the packets between the endpoint PSNs.
Internally, neither AHIP nor X.25 data packets contain full
addressing information, just the destination PSN number and a
connection identifier at that PSN.  So I guess you might say that
we "do it right".

Cheers,
Andy

PADLIPSKY@A.ISI.EDU (Michael Padlipsky) (07/21/87)

Andy--

I certainly did NOT mean to imply that we aren't doing good engineering,
merely that a bit more ruthlessness in extreme situations (which I
attempted to euphemize as "clever engineering") might be in order.
No intention whatsoever to minimize the efforts of the BBN troops
who keep the subnet going, whom I don't recall picking on since '71,
when the IMP's support for half-duplex interfaces wasn't as advertised.
(Horror Story Number Five involved TENEX, of course, but that's not
the subnet.)  Will be looking forward to the forthcoming congestion
control stuff.

A quibble over "addressing information": any packet that contains the
destination PSN number is addressed enough for me.  My impression was
(indeed, still is), though, that many/most X.25 subnets use the interface
format with just the VC number for packets in flight.  I'd be relieved
to learn I'm wrong in general, and am delighted to infer from your msg
that that isn't how it's done on our backbone.

   cheers, map
-------

malis@CC5.BBN.COM.UUCP (07/21/87)

Mike,

Your impression is correct: many X.25 networks (e.g. TELENET) set
up a path at call setup time and then force all packets along
that fixed path, just using the VC number for identification.

Your inference is also correct: in BBNCC networks, the same
underlying packet format (and dynamic adaptive routing) is used
for both AHIP and X.25 traffic.

If you are interested in the workings of the backbone subnet, you
might like to read my RFC 979.  It is the functional
specification for the PSN's new End-to-End protocol, which is
being implemented in PSN 7.0 (PSN 6.0 is now running in the
subnet).  My statement above concerning packet formats and
routing is true for both the existing and the new EEs.

Cheers,
Andy

mckee@MITRE.ARPA (H. Craig McKee) (07/21/87)

>As a whole, I think the internet community has been doing "clever
>engineering" for quite a number of years now.  However, there
>comes a time when offered load just overwhelms the resources
>devoted to the task.  We are very close to that point on the
>ARPANET, even though we just made the routing algorithm more
>"clever" and added another transcontinental trunk.

COMMENT: "Clever Engineering" - The ARPANET has been around for 
17 years and there is still a need for clever engineering;
that's discouraging.

>We are currently in the process of implementing congestion
>control in the PSNs.  This should optimize the total available
>throughput of the network (at the expense of backing flows into
>source hosts if necessary).

COMMENT: With about a hundred different flavors of TCP/IP, some (many?) 
of which are network-hostile, the subscriber community is forcing the 
ARPANET designers to defend themselves.

>Finally, the X.25 spec really says nothing about what goes on in
>the subnet, it is just an interface spec between a DTE and its
>DCE.  Internally, the PSNs use virtual circuits to support both
>AHIP (1822) and X.25 traffic while using good old dynamic
>adaptive routing to get the packets between the endpoint PSNs.
>Internally, neither AHIP nor X.25 data packets contain full
>addressing information, just the destination PSN number and a
>connection identifier at that PSN.  So I guess you might say that
>we "do it right".

COMMENT: I didn't say it, Andy Malis said it: "...PSNs use virtual
circuits ... packets [DO NOT] contain full addressing information."
Then why are we flogging the network with 40 octets of header per packet?
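 
To put numbers on it, a back-of-envelope sketch, assuming the
minimum-size headers (20 octets of IP plus 20 of TCP, no options --
the 40 in question):
 
    #include <stdio.h>

    int
    main()
    {
        int hdr = 20 + 20;         /* IP + TCP headers, no options */
        int data;

        for (data = 1; data <= 512; data *= 8)
            printf("%3d data octets -> %2d%% header overhead\n",
                   data, 100 * hdr / (hdr + data));
        return 0;
    }

    /* One Telnet keystroke (1 data octet) rides in a 41-octet
     * datagram: 97% header. */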

Isn't it time to swallow our embarrassment and admit that while
TCP/IP looks good on paper, in the reality of limited bandwidth and
unstable network delays, TCP/IP is, in fact, a Bad Idea?  The subscriber
and network communities need to work together and come up with a scheme
that doesn't hammer the network to its knees when something goes wrong.
The Commercial/PTT networks can do it, why can't we?  

malis@CC5.BBN.COM (Andy Malis) (07/22/87)

Craig,

I would like to respond to a couple of points in your message.

On the need for "clever engineering", and defending the ARPANET:
for all of its sophistication, the PSN's dynamic routing
algorithm was originally designed for, and worked very well in,
an environment where the offered load did not come close to
congesting the network's resources.  This is no longer the case,
with subnet congestion as the predictable result.  The
recent changes in routing are actually slight modifications to
one part of the algorithm, to try to prevent routing oscillations
as a result of congested paths and to make the cross-country
satellite link more attractive when the land lines start
congesting.

As Steve Cohn said in a previous message, a more detailed
description of the change, and its effects, will be forthcoming.

Congestion control will further help the network in these days of
plentiful load and scarce resources.

I really don't see what we are doing as defending ourselves from
network-hostile hosts; rather, we are trying to allocate scarce
resources as fairly and evenly as possible, and trying to keep
the network from going past the point where additional load would
cause the network's total throughput to start degrading.  Of
course, that doesn't mean that there aren't "hostile" hosts out
there.

On 40 octets of header per packet: I was referring to the
internals of the ARPANET and MILNET subnets when I was discussing
packet headers and such.  However, they are only two networks on
an internet of over 100 networks now.  I am not a TCP/IP
implementer so I won't get into whether any of the 40 octets can
be squeezed out; you just have to realize that the environment
TCP/IP runs in is nothing like that of commercial and PTT
networks.  X.25 does internetting (X.75) using fixed routes
through transit networks and X.75 gateways, without an end-to-end
transport layer like TCP, and is nowhere near as reliable and
survivable as the TCP/IP internet.  But you have to pay for this
by using large datagram headers and end-to-end retransmissions.

I do agree that some of the assumptions that were made during the
TCP/IP design days (such as a richly configured backbone network)
may no longer be valid.  It may be time to revisit TCP/IP's
design, especially in light of the OSI protocol suite, just as
long as we keep in mind the overall requirements of the internet.

Andy

PADLIPSKY@A.ISI.EDU (Michael Padlipsky) (07/22/87)

H. Craig--

Just to show the unbelievers that there are some windmills even I
won't bother to lance, I'll merely answer your final rhetorical
question ("The Commercial/PTT networks can do it, why can't we?"):
As I understand the "it" John and Andy and I were talking about,
it's dealing with presented loads greater than a system was designed
to deal with; if you really think the PTTs know how to get five pounds
of bits in a two-pound sack, I hope you'll share your knowledge with us
all.  In other words, if it's physically impossible, even we can't
do it--but neither can They....

It would be too Zen to get into the subtleties of the flavor of
pie in the sky vs. the taste of bread in the mouth, but I would
feel myself remiss if I didn't mention that I've been aware of
pricing as a resource-demand regulator since my CTSS days, where
Corby told me it was already known for years to the phone company;
so I'd suggest that if the PTTs aren't groaning under their presented
loads (and for all I know they are but just don't talk about it in
public) it's because they have extra-protocol ways of limiting
the loads.  And, for that matter, if 40-byte headers bother you,
how about you be the one who finally does the exercise of totalling
up the sizes of the Network (actual), Network ("connectionless" [a/k/a
IP]), Transport, Session, Presentation, Application, and Management
(times 5 or 6) "PDUs": after all, another good way of keeping the
loads down is to make things so barococo that nobody can fabricate
a load to present, right?

   pro forma cheers, map

P.S. Andy's reply to me, my reply to it, and his reply to my reply should
cast some light on the flavor of virtual circuits we were talking about
at the subnet processor to subnet processor level; sorry, I don't think
you can rightly infer that IMPs/PSNs "are" X.25 node to node.  Indeed,
the fundamental problem of the Internet in one sense is that there is
not and cannot be A node to node protocol at the subnet level, since
the subnet level is by definition heterogeneous.  Andy is addressing
the/a backbone subnet, which contrary to its original design parameters
is now dealing with a multiplicity of (to it) ancillary subnets; X.25
on the other limb is offering some (by spec) no more than 56.2 kb/s
"trunks", take 'em or leave 'em.  It's like comparing apples to
orange pits (or pips, if John Laws is still tuned in) to imagine that
Arpanet and "the PTTs" are interchangeable.  Try hooking say a 10-meg
Ethernet up to an X.25 PSN: you've either got a 56.2 kb/s bottleneck
or you're building some sort of milking machine and paying for
multiple X.25 ports even if your Hosts are "doing" X.25 as their
end-to-end/"Transport" protocol, but if the Hosts are doing the
ISO Stack, you've the same Gateway bashing problems as in the
Internet--with, almost certainly, a far less flexible/responsive
"backbone"....
-------

Mills@UDEL.EDU (07/24/87)

Craig,

Without addressing your comment that "TCP/IP is in fact a Bad Idea," I
might conclude from your remarks that public X.25 networks may have
to either surcharge or prohibit use of connectionless protocols over
virtual circuits. This is an interesting issue which should be raised
within the ANSI X3S3.3 committee. I also conclude from your remarks that
the public X.25 community has come up with "a scheme that doesn't hammer
the network to its knees when something goes wrong." Please, I desperately
need to know what scheme you have in mind.

Dave