[comp.protocols.tcp-ip] Charging and aborted transfers

mogul@DECWRL.DEC.COM (Jeffrey Mogul) (04/27/88)

Someone asked why one should pay "network charges" if 14 of 15
megabytes are transferred and then the FTP connection aborts.

If we are still asking this question when usage-based charges are
common, we deserve what we get.  There is (almost) no reason why this
should be a problem, given some thought to protocol design; in other
words, the problem is in FTP, not in whatever charging scheme.

Consider what happens today when you've just lost an FTP connection
after waiting 3 hours for a large file to ooze across the ARPAnet.  Do
you think "wow, what I really want to do is retransfer all those bytes"
or do you think "darn, all I really want is the last 37 bytes"?  FTP is
a good protocol for efficient transfers over stable networks, but it's
not particularly useful if the probability that the transfer will never
complete approaches 1.

If our bulk-transfer protocol allowed us to ask for a piece of a file,
rather than the whole thing, life today would be a lot easier (and life
in the pay-per-packet future would be a lot cheaper).  A robust FTP
user program might then be able to automatically retry failed transfers
without duplicating data already transferred.  I'm not saying that NFS
is perfect, but at least it makes this kind of thing possible.
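A retry loop of that sort can be sketched generically: the key is that each retry asks for data starting at the byte offset already received, never at byte zero.  Below is a minimal illustration, with the network abstracted behind a `fetch_chunk(offset)` generator; the function names and the deliberately flaky test source are invented for the example, not part of any real FTP implementation:

```python
import io

def resumable_fetch(fetch_chunk, total_size, max_retries=5):
    """Retry a transfer until complete, resuming from the last byte
    received instead of restarting from zero.  fetch_chunk(offset)
    yields data starting at offset and may raise IOError mid-stream."""
    buf = io.BytesIO()
    retries = 0
    while buf.tell() < total_size:
        try:
            for chunk in fetch_chunk(buf.tell()):
                buf.write(chunk)      # bytes already received are never re-fetched
        except IOError:
            retries += 1
            if retries > max_retries:
                raise                 # give up; caller still holds the partial data
    return buf.getvalue()

# A deliberately flaky source: serves a 16 KiB "file" in 1 KiB chunks
# and drops the connection once, halfway through.
DATA = bytes(range(256)) * 64
failed = []

def flaky_source(offset):
    for i in range(offset, len(DATA), 1024):
        if not failed and i >= len(DATA) // 2:
            failed.append(True)
            raise IOError("connection reset")
        yield DATA[i:i + 1024]

result = resumable_fetch(flaky_source, len(DATA))
assert result == DATA    # complete, despite the mid-transfer abort
```

The second attempt starts at the 8 KiB mark, so nothing already delivered is transferred (or, in a pay-per-packet world, paid for) twice.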

The reason why I used the word "almost" in the 2nd paragraph is that
one would have to worry more about locking (in case the file changed)
between two partial transfers, and that opens up a hornets' nest ... but
many current systems don't guarantee consistency anyway.

perry@MCL.UNISYS.COM (Dennis Perry) (04/27/88)

Jeffrey, I think you have hit on an important question that is not
asked very often, namely: how does policy affect the architecture?
Certainly, this was answered when packet switching was chosen over
circuit switching.  Now that we have a network, new policy questions
may force us to reexamine the architecture we use.  This of course
may have some interesting consequences for the DoD.  For example, suppose
we go to pay per packet and the architecture is changed to minimize
costs.  Now we find that we are in conflict with survivability or
security or robustness policy issues.  What does the DoD do?  Does
it want to recover cost, or develop technology which will survive
in a stressed environment?

Oh well, in this case it may not matter, because if we get involved
in a stressed situation, there may not be anyone around to use the
net anyway. :-)

dennis

mogul@DECWRL.DEC.COM (Jeffrey Mogul) (04/27/88)

    Jeffrey, I think you have hit on an important question that is not
    asked very often, namely: how does policy affect the architecture?

Actually, I was trying to make a slightly different point: that we
are already "paying" for lost packets, even if we aren't actually
mailing in checks.  We should be looking for ways to avoid "wasting"
packets even before we have to pay for them in real money, because
we can't afford the latency/congestion/low bandwidth anyway.  Alas,
charging real money might be the only way to get people to listen.

Thus, if I'm doing a transaction protocol and the network loses a
packet late in the transaction, or I'm running NFS with mobygrams
and the fragmentation demons strike (thanks to JQJ for these examples),
why should I not have to pay for the packets that the network delivered
properly?  Perhaps some sense of "justice" implies that the work
I've lost is the fault of the network, but nowadays when the network
is "free" we still have to deal with those losses, so we might as
well think about making our protocols more robust rather than looking
for someone else to pay.

True, if the phone company gives you a bad connection, you expect them
not to charge when you complain.  If the "network company" doesn't
meet their service guarantees (error rate) then you shouldn't pay them
either.  That doesn't make it any less necessary to make the best
of the situation.

I don't pretend to know how to build a transaction protocol that is
efficient yet responds well to lost packets.  I have, however, trashed
NFS in public before for using fragmentation instead of a real
protocol; there's no excuse for that and I have no sympathy for
people who don't want to pay for the successfully transmitted fragments
if the network drops one.
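The fragmentation complaint has a simple quantitative core: if IP splits one large UDP datagram into N fragments, and the loss of any single fragment discards the whole datagram, the delivery probability falls geometrically with N.  A back-of-the-envelope sketch (the MTU and loss figures here are illustrative assumptions, not measurements):

```python
def datagram_delivery(p_frag_loss, n_fragments):
    """Probability that a datagram arrives intact when it is carried
    as n_fragments IP fragments and losing any one fragment loses
    the whole datagram (there is no per-fragment retransmission)."""
    return (1.0 - p_frag_loss) ** n_fragments

# Suppose an 8 KiB NFS read crosses a path with a 576-byte MTU,
# giving roughly 15 fragments.  With 1% fragment loss:
p = datagram_delivery(0.01, 15)   # about 0.86
```

So roughly one read in seven must be retransmitted in full, and every fragment of it is re-sent, including the fourteen that arrived intact the first time.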

    For example, suppose we go to pay per packet and the architecture is
    changed to minimize costs.  Now we find that we are in conflict with
    survivability or security or robustness policy issues.  What does the
    DoD do?  Does it want to recover cost, or develop technology which will
    survive in a stressed environment?
    
    Oh well, in this case it may not matter, because if we get involved in
    a stressed situation, there may not be anyone around to use the net
    anyway. :-)
    
Does too matter, since if the network collapses from stress during
a crisis, we might launch those missiles by accident (or through panic).
I think it's foolish to try to combine both pay-per-packet and serious
real-time constraints; if you want guaranteed service you should pay
for it in advance, where "pay" may mean "assign a high priority" or
"build a VC and reserve the resources".  Datagram networks seem
best if "spreading the pain" is your goal; I'm not sure I would
be a datagram fan if I had to guarantee service through a big hairy
internet.

braden@VENERA.ISI.EDU (04/27/88)

	
	If we are still asking this question when usage-based charges are
	common, we deserve what we get.  There is (almost) no reason why this
	should be a problem, given some thought to protocol design; in other
	words, the problem is in FTP, not in whatever charging scheme.
	
We GAVE that thought to the FTP design, in 1975!  Alex McKenzie came up
with the simple and elegant Restart mechanism which is in the FTP protocol.
All that is needed is for people to implement it.  File access protocols
are very useful, but they are not needed, and probably not particularly
useful, to replace FTP Restart.
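For readers with a modern toolkit at hand: the Restart marker survives as the REST command in RFC 959, and Python's ftplib exposes it as the `rest` argument to `retrbinary`.  A hedged sketch of a resuming client follows; the `FakeFTP` class and file names are invented stand-ins for a real server, so the logic can be exercised offline:

```python
import os
import tempfile

def resume_download(ftp, remote_name, local_path):
    """Resume a RETR using FTP's restart marker: the server is asked
    to begin the transfer at the first byte we do not yet have.
    ftp is any ftplib.FTP-compatible object."""
    offset = os.path.getsize(local_path) if os.path.exists(local_path) else 0
    with open(local_path, "ab") as f:
        # ftplib sends "REST <offset>" before the RETR when rest is given.
        ftp.retrbinary("RETR " + remote_name, f.write, rest=offset)
    return offset

class FakeFTP:
    """Offline stand-in for ftplib.FTP, honoring the rest offset."""
    def __init__(self, files):
        self.files = files
    def retrbinary(self, cmd, callback, blocksize=8192, rest=None):
        data = self.files[cmd.split(" ", 1)[1]][rest or 0:]
        for i in range(0, len(data), blocksize):
            callback(data[i:i + blocksize])

data = b"0123456789" * 1500                  # a 15 000-byte "file"
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(data[:14000])                  # 14 of its 15 kilobytes already landed
offset = resume_download(FakeFTP({"big.tar": data}), "big.tar", tmp.name)
```

Here the resumed transfer begins at byte 14 000, so only the final thousand bytes cross the wire again, which is exactly the behavior the original poster's 14-of-15-megabyte complaint calls for.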

Bob Braden

budden@tetra.NOSC.MIL (Rex A. Buddenberg) (04/30/88)

It seems you are arguing that the charges should be based on
an assessment of usage at the application layer, whereas the earlier
posters, talking about per-packet charges, were working at the
transport layer or below.

Rex Buddenberg