MAP@LCS.MIT.EDU (Michael A. Patton) (09/21/89)
Joshua Levy asks about treating e-mail as an unreliable datagram transport and using existing protocols to make it reliable. From the description later in his message, I would guess that what he wants has different characteristics from either a "reliable stream" or a "reliable single transaction" approach, so you may want to think out the design of the reliability part of your protocol in a different light. One question to ask early on is: do you care about ordering?

It seems to me that what you are designing is basically a standard distributed transaction protocol over a replicated, distributed database, so all the standard synchrony and ordering considerations apply. Since I expect you want it running at more than two sites, you have to consider whether you require serializability and, if so, what kind, and then decide which of the various methods you want to use. Most of these are topics of current research; there are many designs with different features, tradeoffs, and restrictions. The problem is that once you realize you are proposing a distributed database maintained at more than two nodes, you discover that the simple peer-to-peer models are not quite adequate.

I might be able to dredge up some references from the course I took 18 months ago, but starting from more current articles on transaction protocols might be better.
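To make the ordering point concrete, here is a minimal sketch. The names (Replica, Update) are made up for illustration, and the merge rule is a simple last-writer-wins over Lamport timestamps, just one of the many methods alluded to above. Two replicas receive the same two updates in opposite orders, as mail delivery might well arrange, and still converge because the timestamps impose a total order:

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Update:
    """A replicated write, stamped for a total order across replicas."""
    lamport: int      # logical clock value at the sender
    origin: str       # sender id, used to break timestamp ties
    key: str
    value: str

@dataclass
class Replica:
    name: str
    clock: int = 0
    store: dict = field(default_factory=dict)    # key -> value
    stamps: dict = field(default_factory=dict)   # key -> (lamport, origin)

    def local_write(self, key: str, value: str) -> Update:
        self.clock += 1
        u = Update(self.clock, self.name, key, value)
        self.apply(u)
        return u

    def apply(self, u: Update) -> None:
        # Last-writer-wins under the (lamport, origin) total order.
        # Because the order is total, every replica that sees the same
        # set of updates converges to the same state, regardless of the
        # order in which the mail system delivers them.
        self.clock = max(self.clock, u.lamport) + 1
        if self.stamps.get(u.key, (-1, "")) < (u.lamport, u.origin):
            self.store[u.key] = u.value
            self.stamps[u.key] = (u.lamport, u.origin)

# Same two updates, delivered in opposite orders, identical final state:
a, b = Replica("A"), Replica("B")
u1 = a.local_write("x", "from-A")
u2 = b.local_write("x", "from-B")
a.apply(u2)
b.apply(u1)
assert a.store == b.store

Note that this buys convergence only, not transactions: atomicity across keys and any real serializability guarantee need considerably more machinery, which is exactly where the simple peer-to-peer models fall down.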
karn@jupiter (Phil R. Karn) (09/22/89)
> Joshua Levy asks about considering E-Mail as an unreliable
> datagram transport and using existing protocols to make it reliable.

One thing to consider is the effective delay*bandwidth product of the email network you're using, particularly if it's a multihop store-and-forward network like UUCP. Since most of the time a message spends in such a network is spent in disk spool files on the various machines, the effective delay*bandwidth product is likely to be MUCH larger than what TCP and similar transport protocols are used to.

Phil
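For a rough feel, the numbers below are made-up illustrations rather than measurements: take a 1200 bps link and four hours of one-way spool-file delay, and compute the window a transport would need just to keep the pipe full.

link_rate_bytes_per_sec = 1200 / 8   # e.g. a 1200 bps UUCP dialup link
one_way_delay_sec = 4 * 3600         # e.g. 4 hours queued in spool files

# Delay*bandwidth product: bytes "in flight" on the path.
pipe_bytes = link_rate_bytes_per_sec * one_way_delay_sec

segment_size = 512                   # bytes of payload per message
window_segments = pipe_bytes / segment_size

print(f"pipe capacity: {pipe_bytes/1024:.0f} KB "
      f"(~{window_segments:.0f} outstanding {segment_size}-byte segments)")

Even at these modest rates the pipe holds a couple of megabytes, i.e. thousands of outstanding segments; a stop-and-wait protocol, or one tuned for TCP-scale windows, would sit idle almost all of the time.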