[mod.protocols.tcp-ip] ICMP messages

karn@FLASH.BELLCORE.COM.UUCP (02/13/87)

Here are some thoughts based on those of Dave Clark in RFC-816, which I
think is still well worth reading.

When the network breaks, a real-time human user may or may not be involved
(e.g., Telnet vs. a background mail transfer).  In the former case, you
should let the user know what's happening (i.e., show him ICMP messages)
*but* you should leave it up to him as to what to do. If he wants to wait
indefinitely until things get better, he should have that option.

In the case of two computers talking, what's the hurry?  One of the major
virtues of a (properly programmed) computer is patience. If a SMTP session
gets stuck, why not just let TCP keep trying "forever", provided that it
backs off its retransmissions enough to prevent network congestion?  Keep
the ICMP messages around for debugging, but let TCP do its job instead of
giving up and forcing the application to start all over again.  Your mail
daemon is going to be sending out connection requests every hour or so
anyway (not to mention all those MX and IP address domain queries), so why
not just keep sending data packets so you can pick up where things left off?
Only when the remote host crashes and comes back up will you have to start
over.  The only drawback I can see with this is the memory used by the
almost idle mailer, but hey, memory is free these days.

So the solution, I think, is to use ICMP messages for debugging, but don't
let them affect TCP's actions.  One of my pet peeves about many TCP
implementations is how impatient they are, both in retransmission timing and
"give up" timing.  The end result is a network prone to the kind of
congestion collapse John Nagle talked about in his paper on networks with
infinite buffering.  Fix the timers and you'll avoid this.

Phil

mike@BRL.ARPA.UUCP (02/20/87)

The idea about having mailers stick around and let TCP go "forever"
to get mail though may be OK if you don't send much mail, but some
of our machines find themselves having to deliver mail to >> 5,000
host/message destinations per day.  It takes us several mailer
daemons running in parallel to punch this much mail through.
Without application level timeouts, talking to dead or overrun hosts
could tie up a significant fraction of our transmission daemon
resources.  Not all mail systems even offer the option of having multiple
transmission daemons, and instead rely on a single daemon for all
mail sending operations.

Clearly, application level timeouts need to be set to reasonable
values.  However, our experience with operating many mail-intensive
machines has been that SMTP servers, even in 1987, have many failure
modes.  SMTP servers can fail to answer after any part of the
transaction, even though their host and TCP connections are still well.
This level of failure can only be dealt with by application level timeouts.

Just as security can only be dealt with across all protocol levels,
the issue of timeouts spans protocol levels as well.
	-Mike

karn@FLASH.BELLCORE.COM.UUCP (02/22/87)

As I mentioned in my original message, the cost of the extra memory
to keep all those daemons around may be prohibitive. However, I was
suggesting that the *TCP* never give up, not SMTP. Clearly you still need
application-level timers for the reasons you describe (although you can
get into trouble there as well by making them too short). I was saying
that *TCP* perhaps is better off without a give-up timer, especially
if the application has no way to control it.  If the application wants to
give up if TCP can't get the data across, it can set a timer of its  own
before sending its data.  This is more in keeping with the original
philosophy of TCP timers as expressed in the original spec, but nobody
seemed to pay much attention to it.

Phil

kent@DECWRL.DEC.COM.UUCP (02/23/87)

Gee, this sounds very reminiscent of the (once popular, now extinct?)
Delta-T protocol, a "timer-based connectionless protocol" originally
proposed by Dick Watson at LLNL. It (and the whole class) are wholly
timer based, with a window mechanism for flow control, but no 3-way
handshakes on open/close. The key parameter is the "maximum roundtrip
packet lifetime", or Delta-T. 

By knowing the maximum amount of time any packet will live, you can
start sending data in the first packet because you know all stragglers
will have been discarded. Delta-T essentially assumes everyone is
talking to everyone else all the time, but there are gaps in
conversations. If the gaps are longer than Delta-T, there is no need to
keep state information about. Thus, the protocol state information is
essentially a cache of recent interchanges.

Of course, this all depends on accurate estimates of Delta-T, but then,
so does TCP. Watson proposed a "link timing protocol" which slips in
between an IP-type protocol and a transport-type protocol to provide
the necessary timing information.

I'm not clear on how a Delta-T style protocol fits into a large
internet with many gateways on hops, but it's an interesting class of
protocols to think aboiut, now that it seems that TCP performance and
robustness are turning out to be heavily reliant on accurate timer
estimates.

Cheers,
chris
----------

karn@FLASH.BELLCORE.COM.UUCP (02/23/87)

As further argument in favor of my position that TCP should not give up
unless the user process requests it, I have lately been deluged with
complains from users that they and their correspondents are being flooded
with duplicate but incomplete mail messages. It seems that frequently the
net path is just good enough to get a SMTP connection going, but when it
gets to the body of the message, good old Berkeley TCP quickly gives up in
disgust. A half hour later the mailer tries again, ad nauseum. Not only does
this put lots of garbage into people's mailboxes, it wastes net resources
and contributes to congestion collapse.

There is no facility that I know of in our mail system for filtering out
duplicate mail messages, but I don't believe one should really be necessary;
TCP should just be more patient.

I perceive that part of the problem is the fact that many (if not most)
TCP/IP vendors are not the Internet, so they never encounter these problems
on their own.  I would like to make a radical policy suggestion: being a
TCP/IP vendor should be sufficient cause by itself to justify a connection
to the Internet (preferably through a slow and expensive X.25 connection
that THEY have to pay for). The payback in "clean" implementations for the
rest of us will be more than worth it.

Phil

bzs@BU-CS.BU.EDU.UUCP (02/24/87)

I think duplicate and incomplete mail messages as a result of STMP
delivery is a bug in the SMTP implementation, plain and simple. If the
final '.<CR><LF>' is not received the message should have been
discarded. Is it really clear it would be easier to redesign and
re-implement the way TCP handles errors than to simply fix the mailer
agent to wait for a dot?

Really not trying to make a global statement on the issue at hand,
just thought this was not the right example of how to solve a problem.

I had that problem and it turned out that the root cause was a frag bug,
as well as SMTP being too anxious to deliver less than a whole message.

It's all so complicated...

	-Barry Shein, Boston University

MRC%PANDA@SUMEX-AIM.STANFORD.EDU.UUCP (02/24/87)

     I am aware of two causes of the multiple-copies-of-the-same-message
bug that are caused by poor design of a version of the Unix SMTP server
that runs in all too many Unix hosts.

     When the end-of-message signal (<CRLF>.<CRLF>) is received, this
server spawns a program that appears to be called "sndmsg" to deliver the
message to all the recipients and it insists upon "sndmsg" running to
completion before it will return a success reply code to the SMTP client.

     The first problem happens when there are lots of recipients of this
message and the server's system is a bit loaded.  Many SMTP clients get
fed up if the server doesn't acknowledge the message within a reasonable
period of time (e.g. 5 minutes).  They decide the server is hung, nuke
the connection and try again later.

     Even if you don't believe in timeouts, you must recognize that not
all systems are pleased to have an SMTP stream waiting forever for a
hung SMTP server.

     The second problem will happen with ANY correctly-coded SMTP client.
Somewhere along the line, "sndmsg" runs into trouble.  The SMTP server
sends a 4xx series message with the extremely informative phrase "sndmsg
balks, try again later".  The message has been delivered to some recipients,
but not to others.  But since this is a "whole message failure", it gets
retried for everybody.

     I hope that we can see the extinction of this version of the Unix SMTP
server soon.  The server should NOT make the client wait while a message is
being delivered.  The "sndmsg balks" bug shows why this behavior is so
wrong-headed.  An acknowledgement should be sent as soon as the end of message
signal is received.
-------

braden@ISI.EDU.UUCP (02/24/87)

Chris,

All the present and proposed "transaction" protocols (Birrell&Nelson,
VMTP, SEP, TTP, etc.) being discussed in the End2end Taskforce do in
fact depend upon the Delta-T concept (which you have so cogently
summarized) to avoid an initial handshake. Delta-T was never "popular",
since it was implemented only at LBL; it is much less well known in our
field than it deserves.  Dick Watson thought through the consequences
of his assumptions more thoroughly than we often do, and his papers are
recommended reading for anyone new to the protocol design field.

Bob Braden
 

karn@FLASH.BELLCORE.COM.UUCP (02/24/87)

Granted, sendmail is royally screwed up and it shouldn't deliver a partial
message.  But I still think that TCP shouldn't give up so easily.  Most of
our network outages are caused by one of several things: our Telenet X.25
interface wedges, csnet-relay's IMP interface wedges (speculation), and/or
our route drops out of the EGP tables.  If TCP just kept trying but backing
off on each retransmission, it wouldn't have to start over when service
resumes, our load on the network would decrease, and sendmail wouldn't have
to contend with message fragments in the first place.  Needless to say, this
also implies disabling the TCP keepalive feature that was hacked into BSD.

Phil

Mills@UDEL.EDU.UUCP (02/25/87)

Phil,

While not planting feet securely in either camp (you both have valid
points), it's amazing how your perspective changes when operating a
dinky Ethernet between an ARPANET gateway and another gateway to a vast
swamp and when all paths to that swamp have died for a week. You start
searching for words like "M16 effect" to describe the carnage
as the pileup of multiple j-random hosts try to slosh mail across the
dink. Howcum that mail got as far as the dink? Well, turns out the
swamp in question had no way to declare itself down. Jack Haverty, where
are you when we need you? Grumble-mumble and back to work.

Dave