[comp.protocols.tcp-ip] Friendliness vs. Performance

budden@tetra.NOSC.MIL (Rex A. Buddenberg) (08/25/88)

For Dave Crocker.

I would recast the question more in terms of capacity/throughput
traded off against robustness/fault tolerance/survivability.  Fair?

Most users tend to underestimate the real fault tolerance needs.
Once a network is put in place, it attracts applications like ants
to a picnic.  Before you can spell TCP/IP, the network has become
so critical that its failure becomes intolerable.  I'm familiar
with a few horror stories like one about a bank that lost its 
funds transfer net for several hours.  The interest charges on the
cash it had to borrow to keep afloat ranged into 8 figures.

I have three very clear requirements where workstations and sensors
on a LAN are clearly the best way to go.  In all three cases, network
loading is very modest -- one ridiculously so: in terms of dozens
of bytes per day [!!].  But fault tolerance -- immunity against either
damage or component failure -- is vital to all three.

The curious technological development is that in the LAN world,
this isn't a trade-off.  The only fault-tolerant LAN architecture
that is readily available is doubly linked rings.  Discounting
Proteon's proprietary products, this leaves FDDI -- somewhat
higher performance than ether...

Fault tolerance is a lot like paychecks -- most of us just take
them for granted.  They come every couple of weeks -- especially for
us folks whose paychecks are provided by the taxpayers.  And, like
LAN service, they're expected to be there when we want them.  Watch
the fracas when your taken-for-granted paycheck or LAN fails.
I'd suggest a caveat for a vendor: fault tolerance may not gain
you lots of customers, but lack of it can lose a bunch.

Rex Buddenberg

dcrocker@TWG.COM (Dave Crocker) (08/26/88)

It is quite heartening to have honest-to-goodness customers demanding
robustness features.  In a perhaps-overly-cautious attempt to avoid
getting commercial in the discussion, I did not mention in my previous
note that our VMS product was one of the -- maybe even THE -- first to
be shipped to customers with Van & Co.'s congestion/slow-line features,
and that the next releases of our Streams and DOS products will contain them.
In other words, folks, please don't take my previous comments as suggesting
that the robustness features should not be included.
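
For readers who haven't followed the details, here is a rough sketch of
what the congestion features amount to.  This is my own illustration of
the widely published slow-start/congestion-avoidance scheme, not our
product code; the cwnd/ssthresh names and constants are the conventional
ones, and the window is kept in (fractional) segments purely for brevity.

/* Illustrative sketch only -- not anyone's actual TCP. */
#include <stdio.h>

struct tcb {
    double cwnd;        /* congestion window, in segments */
    double ssthresh;    /* slow-start threshold, in segments */
};

/* On each ACK that advances the window: grow exponentially below
 * ssthresh (slow start), about one segment per round trip above it
 * (congestion avoidance). */
static void on_ack(struct tcb *t)
{
    if (t->cwnd < t->ssthresh)
        t->cwnd += 1.0;
    else
        t->cwnd += 1.0 / t->cwnd;
}

/* A retransmission timeout is taken as a congestion signal:
 * remember half the current window and start over at one segment. */
static void on_timeout(struct tcb *t)
{
    t->ssthresh = t->cwnd / 2.0;
    if (t->ssthresh < 2.0)
        t->ssthresh = 2.0;
    t->cwnd = 1.0;
}

int main(void)
{
    struct tcb t = { 1.0, 16.0 };
    int i;

    for (i = 1; i <= 40; i++) {
        if (i == 25)
            on_timeout(&t);     /* pretend a segment was lost here */
        else
            on_ack(&t);
        printf("event %2d: cwnd %5.2f  ssthresh %5.2f\n",
               i, t.cwnd, t.ssthresh);
    }
    return 0;
}

Run it and you can watch the window ramp up quickly, back off hard at
the simulated loss, and then probe upward again more gently.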

My concern was that the assumption that robustness is an absolute
requirement should be tempered somewhat by looking at actual customer
requirements; in some percentage of cases, reasonable robustness and
superb performance matter more than superb robustness and reasonable
performance.

Please note that standard implementations of TCP, using the old-style
congestion and retransmission algorithms, are already quite robust.  In
fact, we are probably missing the boat by using the term only for the
recent improvements...
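
(By "old-style" I mean, for example, the fixed-multiplier retransmission
timer.  The newer code also tracks how much the round-trip time varies.
The contrast, sketched below with the commonly published constants, is
strictly illustrative -- it is not lifted from any shipping implementation.)

/* Old vs. new retransmission timeout, illustrative only. */
#include <stdio.h>

struct est {
    double srtt;    /* smoothed round-trip time, seconds */
    double rttvar;  /* smoothed mean deviation, seconds */
};

/* Classic: srtt smoothed with gain 1/8, rto = 2 * srtt. */
static double rto_classic(struct est *e, double sample)
{
    e->srtt += 0.125 * (sample - e->srtt);
    return 2.0 * e->srtt;
}

/* Newer: also smooth the deviation, rto = srtt + 4 * rttvar. */
static double rto_newer(struct est *e, double sample)
{
    double err = sample - e->srtt;

    e->srtt += 0.125 * err;
    if (err < 0.0)
        err = -err;
    e->rttvar += 0.25 * (err - e->rttvar);
    return e->srtt + 4.0 * e->rttvar;
}

int main(void)
{
    /* A path whose delay suddenly turns noisy, samples in seconds. */
    double samples[] = { 1.0, 1.1, 0.9, 3.5, 1.0, 4.0, 1.2 };
    struct est a = { 1.0, 0.0 }, b = { 1.0, 0.0 };
    int i;

    for (i = 0; i < 7; i++)
        printf("sample %.1fs -> classic rto %.2fs, newer rto %.2fs\n",
               samples[i], rto_classic(&a, samples[i]),
               rto_newer(&b, samples[i]));
    return 0;
}

The point: when the delay turns jumpy, a fixed multiple of the average
tends to come up short and trigger needless retransmissions; the newer
estimator widens its timeout to cover the variance.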

The new capabilities do not alter data loss as seen by the user.  They
reduce packet loss on the net, and thereby the need for retransmission.
With or without the new features, users get equivalent data transfer
integrity at the receiving application.  With the new code, however, the
odds of successful COMPLETION of the transfer may be different.  (I.e.,
either way, if you get the bits, they will be correct; what differs is
whether the transfer finishes.)

In effect, I was creating an artificial constraint, much like asking a
person whom they would save if a parent and a spouse were drowning and they
could save only one.  On the other hand, there are limited development
resources and prioritizing customer requirements is essential.  Just because
all you knowledgeable, demanding networkers have the priorities set one
way -- which I agree with as someone who has suffered with Internet
performance -- does not mean that it is correct for the masses.  Long-term,
it IS correct, since they will all be part of a global internet and will
be subject to the phenomena that the new algorithms address.  However,
short-term, many of those networks are isolated.

Dave

hedrick@athos.rutgers.edu (Charles Hedrick) (08/26/88)

It's hard to make guesses as to what will sell.  But aside from
occasional tests in the PC-compatible area, there isn't a lot of
benchmarking hysteria in the TCP/IP world.  So I'd think vendors would
not be under the sort of pressure to get performance at all costs that
they are in some other markets.  From the point of view of a manager, I
can tell you that I get lots of calls about inability to get mail
through to distant sites, and broken telnet connections.  By and large
our users do not carefully time their FTPs and call me when their
throughput is only 100 kbits/sec.  I have conducted various reviews of
Internet performance at the IETF meetings, where we asked an
assemblage of network managers what problems they were seeing.  Again,
it's clear that every time somebody gets a "connection broken" message,
their network manager gets an irate call, but I don't see signs of
irate users demanding 20% more speed.  (Gross slowdowns are another
thing, of course.)

So if there were really a speed/robustness tradeoff, I'd strongly
recommend that vendors favor robustness.  But I'm not even convinced
that there is.  The only case I know of where using Van's
recommendations would slow you down is where, by not using them, you
manage to get more than your fair share of a gateway.  It's clear that
this isn't a stable situation: only one person can do this at a time,
and you can't guarantee that he will be able to do it consistently.
Furthermore, you're going to start seeing gateways that defend
themselves against this sort of thing.  This is not just a concern of
us weirdos on the Internet either.  There are lots of big corporate
networks being built, and they typically have lots of serial lines
carefully spec'ed to have no more bandwidth than necessary.
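
To put toy numbers on that fair-share point (this is back-of-the-envelope
arithmetic of my own, not a model of any real gateway): suppose a gateway
can forward 10 packets per interval and, when oversubscribed, drops the
excess roughly in proportion to each sender's offered load.

/* Back-of-the-envelope only; names and numbers are made up. */
#include <stdio.h>

#define CAP 10.0    /* gateway capacity, packets per interval */

static void share(const char *label, double offered_a, double offered_b)
{
    double total = offered_a + offered_b;
    double scale = (total > CAP) ? CAP / total : 1.0;

    printf("%s: A offers %4.1f gets %4.1f,  B offers %4.1f gets %4.1f\n",
           label, offered_a, offered_a * scale,
           offered_b, offered_b * scale);
}

int main(void)
{
    /* A backs off to roughly its fair share; B keeps blasting. */
    share("one greedy sender ", 5.0, 15.0);

    /* A second greedy sender wipes out the advantage: both offer 15,
     * both get 5, and the rest is dropped and retransmitted. */
    share("two greedy senders", 15.0, 15.0);
    return 0;
}

The advantage lasts exactly as long as you are the only one playing the
game -- and gateways that defend themselves will take even that away.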