ben@cernvax.UUCP (06/12/85)
There's been a good deal of debate, particularly in Europe, over the relative virtues of connection-oriented vs. connectionless protocols.  This is often treated as a semi-religious issue, and has also greatly contaminated the ISO standardization procedure.  Frequently there is additional confusion between "reliable" and "best-effort" protocols.  As a believer in the K.I.S.S. principle (Keep It Simple, Stupid), I go for connectionless every time, ENHANCED WHERE NECESSARY to provide "reliable virtual-circuits".

A small example is instructive here.  One of the traditional uses of virtual circuits has been to support remote-login protocols; the rationale was obvious -- they were used almost exclusively over long-haul error-prone networks, and needed to have data integrity and sequencing guaranteed by the transport layer.  Strangely, now that a growing fraction of remote-logins occur over reliable LAN's, nobody as far as I know has ever proposed the use of datagrams to support them.  After all, many file transfer protocols use enhanced datagram methods, and in principle a lost packet during a remote-login session is much less serious than during a file-transfer.

So I made myself a version of "telnet" running over DGRAM rather than STREAM sockets in 4.2; this took all of half a day's work.  I'd challenge anyone running on Ethernet or Pronet to tell the difference between my "udp/tnet" and the official "tcp/telnet" (except that it's faster).  Part of the ease of this exercise must be credited to the 4.2 IPC designers, whose "socket" interface is really powerful and general (once you can understand how to use it...).  I'll be happy to provide the diffs for "tnet" and its server "tnetd" to anyone who would like to use them.

This leads me to another more general point about protocols (flame?).  Personally I am very disappointed with the trends today in protocol standardization, harmonization-of-standardization, etc...
It seems to me that the essential step is standardization of PROGRAMMING INTERFACES, and that the standardization of lower-level protocols is receiving disproportionate attention.  This is why the 4.2 socket work was such a valuable step forward, and why standards in, say, remote procedure call programming interfaces urgently need to be agreed on and published.

When ISO finally lumbers into place with its 7 layers internationally signed, sealed and delivered, the main effect is simply going to be that all non-ISO networking software will have to be rewritten.  A very real alternative at this point is that ISO standards may just be ignored as too much trouble to implement, recoding being a major effort for most existing systems.

In the case of 4.2, with its existing programming standards, ISO standardization of lower layers can be gracefully incorporated by appropriate kernel modifications made by specialists; only changes to the socket call parameters need then be made in all existing applications.  Higher-level ISO protocols require only application-level work, and this of course benefits from the standard Unix application programming interface.

	Ben M. Segal,  CERN-DD,  1211 Geneva 23, Switzerland.
	ben@cernvax  (via mcvax)
chris@umcp-cs.UUCP (06/15/85)
UDP is unreliable even over Ethernet or other ``perfect''* networks (at least in 4.2BSD) because it doesn't have windowing (flow control), so packets can and do get dropped after being successfully received, checksummed, etc.  (Take a look at sbappendaddr.)
-----
*Anyone who's used marginal hardware knows there's no such thing as
a perfect network. . . .
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 4251)
UUCP:	seismo!umcp-cs!chris
CSNet:	chris@umcp-cs		ARPA:	chris@maryland
bc@cyb-eng.UUCP (Bill Crews) (06/19/85)
> *Anyone who's used marginal hardware knows there's no such thing as
> a perfect network. . . .

If it hurts when you use marginal hardware, don't use marginal hardware!  :-)
-- 
 / \	Bill Crews  ( bc )	Cyb Systems, Inc
 \__/				Austin, Texas

[ gatech | ihnp4 | nbires | seismo | ucb-vax ] ! ut-sally ! cyb-eng ! bc