martillo@cpoint.UUCP (Joachim Carlo Santos Martillo) (03/15/89)
The following is an article which I am going to submit to Data Communications in reply to a column which William Stallings wrote about me a few months ago. I think people in this forum might be interested, and I would not mind some comments.

Round 2 in the Great TCP/IP versus OSI Debate

I. INTRODUCTION

When ISO published the first proposal for the ISO reference model in 1978, DARPA-sponsored research in packet switching for data communications had already been progressing for over 10 years. The NCP protocol suite, from which the X.25 packet-switching protocol suite originated, had already been rejected as unsuitable for genuine resource-sharing computer networks. The major architectural and protocol development for internetting over the ARPANET was completed during the 1978-79 period. The complete conversion of DARPA-sponsored networks to internetting occurred in January 1983, when DARPA required all ARPANET computers to use TCP/IP.

Since then, with an effective architecture and with working protocols on real networks, researchers and developers within the ARPA Internet community have been refining computer networking and providing continually more resource sharing at lower cost. At the same time, with no obvious architecture, with only theoretical or idealized networks, and while actively ignoring the work being done in the ARPA Internet context, the ISO OSI standards committees were developing basic remote terminal and file transfer protocols. The ISO OSI protocol suite generally promises much less, at much greater cost, than the ARPA Internet suite already provides. No one should be surprised that many computer networking system architects wish to debate the merits of the OSI reference model, and that many relatively pleased business, technical and academic users of the ARPA Internet protocol suite would like such a debate to be actively pursued in the media.
______________________________________________________________

                          Background

Since June 1988, William Stallings and I have been engaging in a guerrilla debate in the reader's forum and the EOT feature on the technical and economic merits of OSI versus ARPANET-style networking. Enough issues have been raised to require a complete article to continue the discussion. The debate is of major interest because managers are now making strategic decisions which will affect the development, cost and functionality of corporate networks over the whole world. A valid approach to the debate deals with the technical, economic and logistic issues but avoids ad hominem attacks. I apologize for those comments in my forum letter which might be construed as personal attacks on William Stallings.

Since I have not yet published many papers and my book is only three-quarters finished, I should introduce myself before I refute the ideas which Stallings presented in the September EOT feature. I am a system designer and implementer and a founder and Project Director at Constellation Technologies, a Boston-based start-up consulting and manufacturing company specializing in increasing the performance, reliability and security of standard low-level communications technologies for any of the plethora of computer networking environments currently available.

I am not an "Arpanet Old Network Boy." My original experience is in telephony. I have implemented Signaling System 6, X.25, Q.921 and Q.931. During a one-year research position at MIT, I worked on TFTP and helped develop the X network-transparent windowing protocol. Later I developed PC/NTS, which uses IEEE 802.2 Type 2 to provide PC-to-Prime Series 50 connectivity over IEEE 802.3 (Ethernet) networks. My partner Tony Bono and I have attended various IEEE and CCITT standards-related committees in various official capacities.
______________________________________________________________

II. THE DEBATE

Part of the problem with debating is the lack of a mutually agreeable and understood set of concepts in which to frame the debate. I have yet to meet a communications engineer who had a sense of what a process might be. Having taught working software and hardware engineers at Harvard University and AT&T, and having attended the international standards committees with many hardware, software and communications engineers, I have observed that overall system design concepts in computer networking need a lot more attention and understanding than they have been getting. Normally in the standardization process, this lack of attention would not be serious, because official standards bodies usually simply make official already existing de facto standards, like Ethernet 2.0, which have already proven themselves. In the case of OSI, the ISO committee, for no obvious reason, chose to ignore the proven ARPA Internet de facto standard.

______________________________________________________________

                        Architecture,
                  Functional Specification,
                    Design Specification

Nowadays, we read a lot of hype about CASE, object-oriented programming techniques and languages designed to facilitate or to ease the development of large software projects. These tools generally duck the hardest and most interesting system design and development problem, which is the design under constraint of major systems which somebody might actually want to buy. The hype avoids the real issue: student engineers are either simply not taught or do not learn system design in university engineering programs. If software engineers generally knew how to produce acceptable architectures, functional specifications and design specifications, the push for automatic tools would be much less.

In fact, the development of CASE tools for automatic creation of system architectures, functional specifications and design specifications requires understanding exactly how to produce proper architectures and specifications. But if engineers knew how to produce good architectures and specifications for software, presumably student engineers would receive reasonable instruction in producing them, and then there would be much less need for automatic CASE tools to produce system architectures, functional specifications or design specifications.

Just as an architectural description of a building would point out that a building is Gothic or Georgian, an operating system architecture might point out that the operating system is multitasking and pre-emptively time-sliced, with kernel privileged routines running at interrupt level. A system architecture describes statically and abstractly the fundamental operating system entities. In Unix, the fundamental operating system entities on the user side would be the process and the file. The functional specification describes the functionality to be provided to the user within the constraints of the architecture. A functional specification should not list the function calls used in the system. The design specification should specify the model by which the architecture is to be implemented to provide the desired functionality. A little pseudocode can be useful, depending on the design specification's level of detail. Data structures, which are likely to change many times during implementation, should not appear in the design specification.

Ancillary documents which treat financial and project management issues should be available to the development team. In all cases documents must be short. Otherwise, there is no assurance that all members of the development or product management teams will read and fully comprehend their documents. Detail and verbiage can be the enemy of clarity. Good architectures and functional specifications for moderately large systems like Unix generally require about 10-20 pages. A good high-level design specification for such a system would take about 25 pages. If the documents are longer, something may be wrong. The key is understanding what should not be included in such documents. The ISO OSI documents generally violate all these principles.
______________________________________________________________

As a consequence, the ISO OSI committee and OSI boosters have an obligation to justify their viewpoint in debate and technical discussion with computer networking experts and system designers. Unfortunately, the debate over the use of OSI versus TCP/IP has so far suffered from three problems:

  o a lack of a systems-level viewpoint,
  o a lack of developer insight and
  o a hostility toward critical appraisal, either technical or economic, of the proposed ISO OSI standards.

The following material is an attempt to engage in a critical analysis of OSI on the basis of system architecture, development principles and business economics. Note that in the following article unattributed quotations are taken from the itemized list which Stallings used in EOT to attempt to summarize my position.

III. INTERNETWORKING: THE KEY SYSTEM-LEVEL STARTING POINT

The most powerful system-level architectural design concept in modern computer networking is internetworking. Internetworking is practically absent from the OSI reference model, which concentrates on layering, which is an implementation technique, and on the virtual connection, which would be a feature of a proper architecture. Internetworking is good for the same reason Unix is good.
The Unix architects and the ARPA Internet architects, after several missteps, concluded that the most useful designs are achieved by first choosing an effective computational or application model for the user and then figuring out how to implement this model on a particular set of hardware. Without taking a position on success or failure, I have the impression that the SNA and VMS architects, by way of contrast, set out to make the most effective use of their hardware. As a consequence, both SNA and VMS are rather inflexible systems which are often rather inconvenient for users even though the hardware is often quite effectively used. Of course, starting from the user computational or application model does not preclude eventually making the most effective use of the hardware once the desired computational or application model has been implemented.

______________________________________________________________

                       Internetworking

The internetworking approach enables system designers and implementers to provide network users with a single, highly available, highly reliable, easily enlarged, easily modifiable, virtual network. The user does not need to know that this single virtual network is composed of a multitude of technologically heterogeneous wide area and local area networks with multiple domains of authority. Internetworking is achieved by means of a coherent system-level view through the use of an obligatory internet protocol with an ancillary monitoring protocol, gateways, exterior/interior gateway protocols and a hierarchical domain name service.

In the internetworking (not interworking) approach, if two hosts are attached to the same physical subnetwork of an internetwork, the hosts communicate directly with each other. If the hosts are attached to different physical subnetworks, the hosts communicate via gateways local to each host. Gateways understand and learn the internetwork topology dynamically at a subnetwork (not host) level and route data from the source subnetwork to the destination subnetwork on a subnetwork-hop-by-subnetwork-hop basis. The detail of information required for routing and configuration is thereby reduced by orders of magnitude. In the ARPA Internet, gateways learn topological information dynamically and provide reliability as well as availability by performing alternate routing of IP datagrams in cases of network congestion or network failure.

Within the ARPA Internet, an authoritative domain can conceal from the rest of the internetwork a lot of internal structural detail, because gateways in other domains need only know about gateways within their own domain and gateways between authoritative domains. Thus, logical subnetworks of an internetwork may also themselves be catenets (concatenated networks) with internal gateways connecting different physical subnetworks within each catenet. For example, to send traffic to MIT, a gateway at U.C. Berkeley need only know about gateways between MIT and other domains and need know nothing about the internal structure of the MIT domain's catenet.
______________________________________________________________

The ARPA Internet is one realization of the internetworking model. While I am not particularly enamored of some of the ARPA protocol features (nor of some Unix features, by the way),1 the ARPA Internet works well and has capacity for expansion. SINet (described in "How to grow a world-class X.25 network," Data Communications, May 1988) is based on the CSNet subnetwork within the ARPA Internet.

____________________
1 The use of local-IP-address, local-TCP-port, remote-IP-address, remote-TCP-port quadruples to uniquely identify a given TCP virtual circuit is an impediment to providing greater reliability and availability for a non-gateway multihomed host. An even larger problem with TCP/IP could lie in the possibly non-optimal partitioning of functionality between TCP, IP and ICMP.
____________________

______________________________________________________________

                        WANs and LANs

OSI actually has an architecture. Like the ARPANET, OSI predicates the existence of a communications subnet consisting of communications subnet processors (or subnet switches) and communications subnet access processors (or access switches). Access switches are also known as IMPs (Interface Message Processors) or PSNs (Packet Switch Nodes) in the ARPANET context. PSPDN (Packet-Switched Public Data Network) terminology usually designates access switches simply as packet switches. The communications subnet may be hierarchical and may contain adjunct processors other than subnet and access switches. The internal architecture of the communications subnet is quite distinct from the architecture presented to end-point hosts. The communications subnet may use protocols completely different from the protocols used for communication between two end-point hosts. An end-point host receives and transmits data to its attached access switch via a subnet access protocol. The communications subnet is responsible for taking a packet received at an access switch and transporting the packet to the access switch attached to the destination end-point host. The existence of such a well-defined communications subnet is the hallmark of a Wide-Area Network (WAN).

Unfortunately, from the standpoint of making computer networking generally and inexpensively available, access and subnet switches are expensive devices to build which need fairly complicated control software. DECNET gets around some of these problems by incorporating the communications subnet logic into end-point hosts. As a consequence, customers who wish to run DECNET typically have to purchase much more powerful machines than they might otherwise use. For a communications subnet which need support connectivity for only a small number of hosts, LAN developers found a more cost-effective solution by developing a degenerate form of packet switch based on hardware-logic packet filtering rather than software-controlled packet switching. These degenerate packet switches are installed in the end-point hosts, are often accessed via DMA2 as LAN controllers, and are attached to extremely simplified communications subnets like coaxial cables. Direct host-to-switch (controller) access, degenerate packet switching (packet filtering) and simplified communications subnets are the distinguishing features of LANs.

While ISO was ignoring the whole internetworking issue of providing universal connectivity between end-point hosts attached to different physical networks within internetworks composed of many WANs and even more LANs concatenated together, and while the IEEE was confusing all the issues by presenting as an end-to-end protocol a communications subnet protocol (IEEE 802.2) based on a communications subnet access protocol (X.25 level 2), the ARPA Internet community developed an internet architecture capable of providing the universal connectivity and resource sharing which business, technical and academic users really want and need.
______________________________________________________________

____________________
2 Some machines like the Prime 50 Series do not use genuine DMA but instead use inefficient microcoded I/O. IBM machines generally use more efficient and somewhat more expensive internal switching.
____________________

The backbone of the ARPA Internet is the ARPANET. The ARPANET is a packet-switched subnetwork within the ARPA Internet. The ARPANET communications subnet access protocol is 1822.
CSNet was set up as an experiment to demonstrate that the ARPA Internet architecture and suite of protocols would function on a packet network whose communications subnet access protocol is X.25. Using an X.25-accessed packet network instead of an 1822-accessed packet network makes sense despite the glaring deficiencies of X.25,3 because X.25 controllers are available for many more systems than 1822 controllers and because many proprietary networking schemes like SNA and DECNET can use X.25-accessed packet networks but cannot use a packet network accessed by 1822. Yet calling SINet a world-class X.25 network is about as reasonable as calling the ARPANET a world-class 1822 network.4 Schlumberger has produced a world-class TCP/IP network whose wires can be shared with SNA and DECNET hosts. Schlumberger has shown enthusiasm for the flexible, effective ARPANET suite of protocols but has given no support in the development of SINet to the idea that business should prepare to migrate to OSI-based networks.

I would be an OSI enthusiast if ISO had reinvented internetworking correctly. Unfortunately, the ISO OSI reference model which first appeared in 1978 clearly ignored all the ARPA community work on intercomputer networking and resource sharing which was easily accessible in the literature of the time. Instead of building the OSI network on an internetworking foundation, ISO standardized on the older, less effective host-to-packet-switch-to-packet-data-subnet-to-packet-switch-to-host (NCP) model which DARPA had abandoned five years earlier because of lack of flexibility and other problems.

____________________
3 For example, X.25 does flow control on the host-to-packet-switch connection on the basis of packets transmitted rather than on the basis of consumption of an advertised memory window. The exchange of lots of little packets on an X.25 connection can cause continual transmission throttling even though the receiver has lots of space for incoming data.

4 Or as much sense as calling Ethernet LANs DMA-based networks because the packet switches (an Ethernet controller is a degenerate case of a packet switch) on the LAN are typically accessed by DMA.
____________________

______________________________________________________________

            Pieces of the ARPA Internet Conceptually

                    (No graphics available)
______________________________________________________________

Nowadays, mostly in response to US vendors and DARPA, pieces of the ARPA Internet architecture have resurfaced in the OSI reference model, quite incoherently rather than as a consequence of an integrated, correct architectural viewpoint. Connectionless-mode transmission is described in ISO 7498/DAD1, which is an addendum to ISO 7498 and not a core document. Because connectionless-mode transmission is defined in an addendum, the procedure apparently need not be implemented, and UK GOSIP, for example, explicitly rejects the use of the connectionless transmission mode. The introduction to the 1986 ISO 7498/DAD1 explicitly states, as follows, that ISO was extremely reluctant to incorporate a genuine datagram-based protocol which could be used for internetworking:

ISO 7498 describes the Reference Model of Open Systems Interconnection. It is the intention of that International Standard that the Reference Model should establish a framework for coordinating the development of existing and future standards for the interconnection of systems. The assumption that connection is a fundamental prerequisite for communication in the OSI environment permeates the Reference Model and is one of the most useful and important unifying concepts of the architecture which it describes.
However, since the International Standard was produced it has been realized that this deeply-rooted connection orientation unnecessarily limits the power and scope of the Reference Model, since it excludes important classes of applications and important classes of communication network technology which have a fundamentally connectionless nature.

An OSI connectionless-mode protocol packet may undergo something like fragmentation, but from the literature, this form of segmentation as used in OSI networks is hardly equivalent to ARPA Internet fragmentation. Stallings states the following in Handbook of Computer-Communications Standards: The Open Systems Interconnection (OSI) Model and OSI-Related Standards, on p. 18 (the only reference to anything resembling fragmentation in the book):

Whether the application entity sends data in messages or in a continuous stream, lower level protocols may need to break up the data into blocks of some smaller bounded size. This process is called segmentation.

Such a process is not equivalent to ARPA Internet fragmentation. In the ARPA Internet, fragmentation is the process whereby the gateway software operating at the IP layer converts a single IP packet into several separate IP packets and then routes the packets. Each ARPA IP fragment has a full IP header. It is not obvious that each OSI segment has a complete packet header. The ARPA fragmentation procedure is not carried out by lower protocol layers. An N-layer packet in OSI is segmented at layer N-1 while the packet is routed (relayed) at layer N+1. This partitioning of basic internetworking procedures across layer 2 (N-1), layer 3 (N) and layer 4 (N+1) violates the following principles described in ISO/DIS 7498: Information Processing Systems -- Open Systems Interconnection -- Basic Reference Model.
P1: do not create so many layers as to make the system engineering task of describing and integrating the layers more difficult than necessary [ISO uses three layers where one could be used];

P2: create a boundary at a point where the description of services can be small and the number of interactions across the boundary is minimized [by putting per-packet relaying in layer 4, at least two interactions across the boundary are required per packet];

P5: select boundaries at a point which past experience has demonstrated to be successful [the ARPA Internet layering boundaries, which combine addressing, fragmentation and routing in one layer, have proven successful];

P6: create a layer where there is a need for a different level of abstraction in the handling of data, e.g. morphology, syntax, semantics [fragmentation, routing and network addressing all seem quite naturally to be part of network layer semantics, as the ARPA Internet example shows];

P9: allow changes of functions or protocols to be made within a layer without affecting other layers [I would think changing the manner of addressing at layer 3 would affect relaying at layer 4].

Even if OSI N-1 segmentation and N+1 relaying could be used in the same way as fragmentation and routing in the ARPA Internet, it takes a lot more apparatus than simply permitting the use of the ISO connectionless "internet" protocol to achieve internetworking. The OSI documents almost concede this point, because ISO 7498/DAD1 and ISO/DIS 8473 (Information Processing Systems -- Data Communications -- Protocol for Providing Connectionless-Mode Network Service) actually provide for N-layer segmentation (actually fragmentation) and N-layer routing right in the network layer, in addition to the OSI standard N-1 segmentation and N+1 relaying.
Providing such functionality directly in the network layer actually seems in greater accordance with OSI design principles, but if ISO is really conceding this point, ISO should go back and redesign the system rather than leaving this mishmash of N-1 segmentation, N segmentation, N routing and N+1 relaying. The current connectionless-mode network service is still insufficient for internetworking, because the gateway protocols are not present and the connectionless-mode error PDUs (Protocol Data Units) do not provide the necessary ICMP functionality. The documents also indicate a major confusion between an internetwork gateway, which connects different subnetworks of one catenet (concatenated network); a simple bridge, which connects several separate physical networks into a single network at the link layer; and an interworking unit, which is a subnet switch connecting two different communications subnets either under different administrative authorities or using different internal protocols.5 Tanenbaum writes the following about the connectionless-mode network service in Computer Networks, p. 321:

____________________
5 This confusion is most distressing from a security standpoint. The November 2 ARPA Internet (Cornell) virus attack shows that one of the major threats to network security is insider attack, which is a problem for even the most isolated corporate network. Because many ARPA Internet network authorities were assuming insider good behavior, ARPA Internet network administrators often did not erect security barriers or close trapdoors. Nevertheless, gateways have far more potential than bridges or interworking units to provide reasonable firewalls to hinder and frustrate insider attack. MIT/Project Athena, which makes judicious use of gateways and which does not assume insider good behavior, was relatively unaffected by the virus. Any document which confuses gateways, bridges and interworking units is encouraging security laxity.
____________________
In the OSI model, internetworking is done in the network layer. In all honesty, this is not one of the areas in which ISO has devised a model that has met with universal acclaim (network security is another one).6 From looking at the documents, one gets the feeling that internetworking was hastily grafted onto the main structure at the last minute. In particular, the objections from the ARPA Internet community did not carry as much weight as they perhaps should have, inasmuch as DARPA had 10 years experience running an internet with hundreds of interconnected networks, and had a good idea of what worked in practice and what did not.

Internetworking, the key concept of modern computer networking, exists within the OSI reference model as a conceptual wart which violates even the OSI principles. ISO was apparently afraid that, had it not tacked internetworking onto the OSI model, DARPA and that part of the US computer industry with experience in modern computer networking would have absolutely rejected the OSI reference model as unusable.

____________________
6 Actually, I find ISO 7498/2 (Security Architecture) to be one of the more reasonable ISO documents. I would disagree that simple encryption is the only form of security which should be performed at the link layer, because it seems sensible that if a multilevel-secure mini is replaced by a cluster of PCs on a LAN, multilevel security might be desirable at the link layer. Providing multilevel security at the link layer would require more than simple encryption. Still, ISO 7498/2 has the virtue of not pretending to solve completely the network security problem. The document gives instead a framework identifying fundamental concepts and building blocks for developing a security system in a networked environment.
____________________

IV.
"GREATER RICHNESS" VERSUS DEVELOPER INSIGHT In view of this major conceptual flaw which OSI has with respect to internetworking, no one should therefore be surprised that instead of tight technical discussion and reasoning, implementers and designers like me are continually subjected to vague assertions of "greater richness" of the OSI protocols over the ARPA Internet protocols. In ARPA Internet RFCs, real-world practical discussion is common. I would not mind similar developer insight or even hints about the integration of these OSI protocol interpreters into genuine operating systems participating in an OSI interoperable environment. The customers should realize "greater richness" costs a lot of extra money even if a lot of the added features are useless to the customer. "Greater richness" might necessitate the use of a much more powerful processor if "greater richness" forced much more obligatory but purposeless protocol processing overhead. "Greater richness" might also represent a bad or less than optimal partitioning of the problem. A. OSI NETWORK MANAGEMENT AND NETVIEW Netview has so much "greater richness" than the network management protocols and systems under development in the ARPA Internet context that I have real problems with the standardization of Netview into OSI network management as the obligatory user interface and data analysis system. Netview is big, costly, hard to implement, and extremely demanding on the rest of the network management system. As OSI network management apparently subsumes most of the capabilities of Arpanet ICMP (Internet Control Monitoring Protocol) which is a sine qua non for internetworking, I am as a developer rather distressed that full blown OSI network management (possibly including a full implementation of FTAM) might have to run on a poor little laser printer with a dumb ethernet interface card and not much processing power. B. 
FTAM IS DANGEROUS

The "greater richness" of FTAM seems to lie in the ability to transmit single records and in the ability to restart aborted file transfer sessions. Transmission of single records seems fairly useless in the general case, since operating systems like Unix and DOS do not base their file systems on records, while the records of file systems like those of Primos and VMS have no relationship whatsoever to one another. Including single-record or partial file transfer in the remote transfer utility is a good example of bad partitioning of the problem. This capability really belongs in a separate network file system. A network file system should be separate from the remote file transfer system because the major issues in security, performance, data encoding translation and locating objects to be transferred differ in major ways for the two systems.

The ability to restart aborted file transfers is more dangerous than helpful. If a transfer aborted in an OSI network, it could have been aborted because one or both of the end hosts died or because some piece of the network died. If the network died, a checkpointed file transfer can probably be restarted. If a host died, on the other hand, it may have gradually gone insane, and the checkpoints may be useless. The checkpoints could only be guaranteed if the end hosts had special self-diagnosing hardware (which is expensive). In the absence of special hardware and of ways of determining exactly why a file transfer aborted, the file transfer must be restarted from the beginning. By the way, even with the greater richness of FTAM, it is not clear to me that a file could be transferred by FTAM from IBM PC A to a Prime Series 50 to IBM PC B in such a way that the files on PC A and on PC B could be guaranteed to be identical.

C. X.400: E-MAIL AS GOOD AS THE POSTAL SERVICE

As currently used and envisioned, the X.400 family of message handling protocols also has "greater richness."
X.400 seems to include arbitrary binary-encoded message transmission, simple mail exchange and notification provided by a Submission and Delivery Entity (SDE). In comparison with ARPA SMTP (Simple Mail Transfer Protocol), X.400 is overly complicated, with hordes of User Agent Entities (UAEs), Message Transfer Agent Entities (MTAEs) and SDEs scurrying around, potentially eating up -- especially during periods of high traffic -- lots of computer cycles on originator, target and intermediate host systems, because the source UAE has to transfer mail through the local MTAE and intermediate MTAEs on a hop-by-hop basis to get to the target machine.7

____________________ 7 I have to admit that if I were implementing X.400, I would probably implement the local UAE and MTAE in one process. The CCITT specification does not strictly forbid this design, but the specification does seem to discourage such a design strongly. I consider it a major flaw in a protocol specification when the simplest design is so strongly contraindicated. It does seem to be obligatory that mail traffic which passes through an Intermediate System (IS) must pass through an MTAE running on that IS. ____________________

The design is particularly obnoxious because X.400 increases the number of ways mail transmission can fail by using so many intermediate entities above the transport layer. The SMTP architecture is, by contrast, simple and direct. The user mail program connects to the target system's SMTP daemon by a reliable byte stream (like a TCP virtual circuit) and transfers the mail. Hop-by-hop transfers through intermediate systems are possible when needed. One SMTP daemon simply connects to another the same way a user mail program connects to an SMTP daemon.
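The directness of the SMTP model shows in the protocol itself. The following sketch (my reading of the ARPA specification; the host and mailbox names are illustrative, and no reply handling is shown) builds the command stream a user mail program, or a relaying daemon, writes down its reliable byte stream:

```python
def smtp_dialogue(sender, recipient, body_lines):
    """Build the client side of an SMTP mail transfer: the same
    command stream serves a user mail program talking to the
    target daemon or one daemon relaying to another."""
    cmds = ["HELO client.example.com",        # illustrative host name
            "MAIL FROM:<%s>" % sender,
            "RCPT TO:<%s>" % recipient,
            "DATA"]
    for line in body_lines:
        # Transparency rule: double a leading dot so a body line
        # can never be mistaken for the end-of-data marker.
        cmds.append("." + line if line.startswith(".") else line)
    cmds.append(".")                          # end of message body
    cmds.append("QUIT")
    return "\r\n".join(cmds) + "\r\n"
```

The whole exchange is four commands, the message, and a terminator; there are no agents or entities to configure above the transport connection.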
The relatively greater complexity and obscurity of X.400 arises because a major purpose of X.400 seems to be to intermingle intercomputer mail service with telephony services like telex or teletex in order to fit computer networking into the PTT (Post, Telegraph & Telephone administration) model of data communications (not an unreasonable goal for a CCITT protocol specification, but probably not the best technical or most cost-effective design for the typical customer). Mail gateways are apparently supposed to handle document interchange and conversion. Document interchange and conversion is a really hard problem requiring detailed knowledge at least of word processor file formats, operating system architecture, data encoding, and machine architecture. It may be impossible to develop a satisfactory network representation which can handle all possible combinations of document content, language and source/target hardware as well as provide interconversion with traditional telephonic data transmission encodings. The cost of developing such a system might be hard to justify, and a customer might have a hard time justifying paying the price a manufacturer would probably have to charge for this product. A network file system or remote file transfer provides a much more reasonable means of document sharing or interchange than tacking an e-mail address onto a file with a complicated internal structure, sending this file through the mail system and then removing the addressing information before putting the document through the appropriate document or graphics handler. A NETASCII-based e-mail system corresponds exactly to the obvious mapping of the typical physical letter, which does not usually contain complicated pictorial or tabular data, to an electronic letter and is sufficient for practically all electronic mail traffic.
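NETASCII is nothing exotic: ordinary ASCII text with canonical CR LF line endings, which each host converts to and from its local convention. A sketch of that conversion for a Unix-style host (the bare-CR handling here follows the Telnet-derived convention, as I understand it):

```python
def to_netascii(text):
    """Encode local Unix-style text for the wire: every newline
    becomes CR LF, and a bare CR becomes CR NUL."""
    out = []
    for ch in text:
        if ch == "\n":
            out.append("\r\n")
        elif ch == "\r":
            out.append("\r\0")
        else:
            out.append(ch)
    return "".join(out)

def from_netascii(data):
    """Decode wire-format NETASCII back to local Unix-style text."""
    return data.replace("\r\n", "\n").replace("\r\0", "\r")
```

A host with a different local convention (say, record-oriented Primos) replaces only these two small routines; the mail system above them never changes.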
Special hybrid systems can be developed for that extremely tiny fraction of traffic for which NETASCII representations may be insufficient and for which a network file system or FTP may be insufficient. A correct partitioning of the problem keeps electronic mail completely separate from telephony services, document interchange and document conversion.

______________________________________________________________
|                                                            |
|                  X.400 Mail Connections                    |
|                                                            |
|                       (No Graphics)                        |
|                                                            |
______________________________________________________________

D. ARPA SMTP: DESIGNING MAIL AND MESSAGING RIGHT

The MIT environment at Project Athena, where IBM and DEC are conducting a major experiment in the productization of academic software, provides an instructive example of the differences between e-mail, messaging and notification. The mail system used at MIT is an implementation of the basic SMTP-based ARPA Internet mail system. More than four years ago the ARPA Internet mail system was already extremely powerful and world-spanning. It enabled then and still enables electronic mail to reach users on any of well over 100,000 hosts in N. America, Europe, large portions of E. Asia and Israel. The Citicorp network (described in "How one firm created its own global electronic mail network," Data Communications, June 1988, p. 167), while probably sufficient for Citicorp's current needs, connects an insignificant number of CPUs (47), provides no potential for connectivity outside the Citicorp domain of authority and will probably not scale well with respect to routing or configuration as it grows. The MIT environment is complex and purposely (apparently in the strategies of DEC and IBM) anticipates the sort of environment which should become typical of the business world within the next few years. MIT is an authoritative domain within the ARPA Internet. The gateways out of the MIT domain communicate with gateways in other domains via the Exterior Gateway Protocol (EGP).
Internally, the gateway protocols currently in use are GGP, RIP and HELLO. The MIT domain is composed of a multitude of Ethernet and other types of local area networks, connected physically by a fiber-optic backbone and logically by gateway machines. This use of gateways provides firewalls between the different physical networks so that little sins (temporary network meltdowns caused by Chernobyl packets) do not become big sins propagating themselves throughout the network. The gatewayed architecture of the MIT network also permits necessary traffic engineering: file system, paging and boot servers are put on the same physical network as their most likely clients so that this sort of traffic need not propagate throughout the complete MIT domain. Difficult-to-reach locations achieve connectivity by means of non-switched telephone links. Since MIT has its own 5ESS, these links may be converted to ISDN at some point. While there are some minis and mainframes in the network, the vast majority of hosts within the MIT network are personal workstations with high-resolution graphics displays of the Vaxstation and RT/PC type and personal computers of the IBM PC, PC/XT and PC/AT type. A few Apollos, Suns, Sonys and various workstations of the 80386 type, as well as Lisp Machines and PCs from other manufacturers like Apple, are also on the air. Most of the workstations are public. When a user logs in to such a workstation, after appropriate Kerberos (MIT security system) authentication, he has full access to his own network files and directory as well as access to those resources within the network which he has the right to use.
To assist the administration of the MIT domain within the ARPA Internet, several network processes might be continually sending (possibly non-ASCII) event messages to a network management server, which might every few hours perform some data analysis on received messages and then format a summary mail message to send to a network administrator. This mail message would be placed in that network administrator's mailbox by his mail home's SMTP daemon, which then might check whether this network administrator is reachable somewhere within the local domain (maybe on a PC with a network interface which was recently turned on and then dynamically assigned an IP address by a local authoritative dynamic IP address server after appropriate authentication). If this administrator is available, the SMTP daemon might notify him via the notification service (maybe by popping up a window on the administrator's display) that he has received mail, which he could read from his remote location via a post office protocol. I have seen the above system being developed on top of the basic "static" TCP/IP protocol suite by researchers at MIT, DEC and IBM over the last 4 years. X.400 contains a lot of this MIT network functionality mishmashed together, but as a customer or designer I prefer the much more modular MIT mail system. It is an extensible, dynamically configurable TCP/IP-based architecture from which a customer could choose those pieces of the system which he needs. The MIT system requires relatively little static configuration. Yet by properly choosing the system pieces, coding an appropriate filter program and setting up a tiny amount of appropriate configuration data, a customer could even set up a portal to send e-mail to a fax machine. In comparison, X.400 requires complicated directory services and an immense amount of static configuration about the end user and end user machine to compensate for its internetworking-deficient or internetworking-incompatible addressing scheme.
The need for such a level of static configuration is unfortunate for system users because in the real world a PC or workstation might easily be moved from one LAN to another or might easily be replaced by a workstation or PC of another type. An MIT-style mail system could also be much cheaper to develop, and consequently much less costly to purchase, than an X.400 mail system simply because it represents a much better partitioning of the problem. One or two engineers produced each module of the MIT mail system in approximately 6 months. Because of complexity and obscurity, the development of X.400 products (I saw an example at Prime) is measured in staff-years. The executive who chooses X.400 will cost his firm an immense amount of money, which will look utterly wasted when his firm joins with another firm in some venture and the top executives of both firms try to exchange mail via their X.400 mail systems. Simple mail exchange between such systems would likely be very hard, if not impossible, because the different corporations could easily have made permissible but incompatible choices in their initial system set-up. At the very least, complete reconfiguration of both systems could be necessary. Had the firms chosen an ARPA Internet mail system like the MIT system, once both firms had ARPA Internet connectivity or set up a domain-to-domain gateway, mail would simply work.

______________________________________________________________
|                                                            |
|                   SMTP Mail Connections                    |
|                                                            |
|                       (No Graphics)                        |
|                                                            |
______________________________________________________________

V. IS THE TCP/IP PROTOCOL SUITE "STATIC?"
Because of the mail system development in progress at MIT, DEC and IBM, the X development which I and others have done and which is still continuing, SUN NFS (Network File System) development, IBM AFS (Andrew File System) development, Xenix-Net development, Kerberos development, and the plethora of other protocol systems being developed within the ARPA Internet context (including the VMTP transaction processing system and commercial distributed database systems like network Ingres), I am at the very least puzzled by Mr. Stallings' assertion that "[it] is the military standards that appear on procurement specifications and that have driven the development of interoperable commercially available TCP/IP products." ______________________________________________________________ | | | Partitioning the Problem | | | |The X window system is an example of a clearly and well-| |partitioned system. In windowing, the first piece of the| |problem is virtualizing the high-resolution raster graphics| |device. Individual applications do not want or need to know| |about the details of the hardware. Thus, to provide| |hardware independence, each application should deal only| |with virtual high-resolution raster-graphics devices and| |should know only about its own virtual high-resolution| |raster-graphics devices (windows). The next piece of the| |problem is to translate between virtual high-resolution| |raster-graphics devices and the physical high-resolution| |raster-graphics device (display). The final part of the| |problem lies in managing the windows on the display. This| |problem, with a little consideration, clearly differentiates| |itself from translating between virtual and physical high-| |resolution raster-graphics devices.
| | | | |In the X window system, communication between the| |application and its windows is handled by the X library and| |those libraries built on top of the basic X library.| |Virtual-to-physical and physical-to-virtual translation is| |handled by the X server. X display management is handled by| |the X window manager. | | | | | |After partitioning the problem, careful consideration of| |display management leads to the conclusion that if all| |windows on a display are treated as "children" of a single| |"root" window, all of which "belong" in some sense to the| |window manager, then the X window manager itself becomes an| |ordinary application which talks to the X server via the X| |library. As a consequence, developers can easily implement| |different display management strategies as ordinary| |applications without having to "hack" the operating system.| |The server itself may be partitioned (under operating| |systems which support the concept) into a privileged portion| |which directly accesses the display hardware and a non-| |privileged portion which requests services from the| |privileged part of the server. Under Unix, the privileged| |part of the server goes into the display, mouse and keyboard| |drivers while the non-privileged part becomes an ordinary| |application. In common parlance, "X server" usually refers| |to the non-privileged part of the X server, which is| |implemented as an ordinary application. | | | |The last step in realizing the X window system is choosing| |the communications mechanism between the X server and| |ordinary applications or the display manager. Because the| |problem was nicely partitioned, the communications problem| |is completely extrinsic to the windowing problem and lives| |as an easily replaceable interface module.
The initial choice| |at MIT was to use TCP/IP virtual circuits, which provided| |immediate network transparency; but in fact X only requires| |sequenced reliable byte streams, so DECNET VCs or shared-| |memory communications mechanisms can easily replace TCP/IP| |virtual circuits according to the requirements of the target| |environment. Systems built on well-partitioned approaches| |to solving problems often show such flexibility because of| |the modularity of the approach, and because a successful| |partitioning of the problem will often, in its solution, so| |increase the understanding of the original problem that| |developers can perceive greater tractability and simplicity| |in the original and related problems than they might have| |originally seen. | _____________________________________________________________|

It seems somewhat propagandistic to label the TCP/IP protocol suite static and military. New RFCs are continually being generated, as Paul Strauss pointed out in his September article. Such new protocols become military standards only slowly because the military standardization of new protocols and systems is a long, tedious political process which, once completed, may require expensive conformance and verification procedures. After all, neither the obligatory ICMP nor the immensely useful UDP (User Datagram Protocol) has an associated military standard. Often, after reviewing products generated by market forces, the US military specifies and acquires products which go beyond existing military standards. By the way, hierarchical domain name servers and X are used on MILNET.

VI. ENTERPRISE NETWORKING AND SOPHISTICATED APPLICATIONS: SELLING INTERCOMPUTER NETWORKING

The military are not the only users "more interested in sophisticated applications than in a slightly enhanced version of Kermit." The whole DEC enterprise networking strategy is predicated on this observation.
Stallings ignored my reference to network file systems as a sophisticated networking application. Yet, in several consulting jobs, I have seen brokers and investment bankers make extensive use of network file systems. I also believe network-transparent graphics will be popular in the business world. At Salomon Brothers both IBM PCs and SUN workstations are extensively used. With X, it is possible for a PC user to run a SUN application remotely which uses the PC as the output device. This capability seems highly desirable in the Salomon Brothers environment. Unfortunately "OSI is unlikely ever to provide for [such] resource sharing because it is industry-driven." Wayne Rash Jr., a member of the professional staff of American Management Systems, Inc. (Arlington, Virginia) who acts as a US federal government microcomputer consultant, writes the following in "Is More Always Better," Byte, September 1988, p. 131:

You've probably seen the AT&T television ads about this trend [toward downsizing and the development of LAN-based resource-sharing systems]. They feature two executives, one of whom is equipping his office with stand-alone microcomputers. He's being intimidated by another executive, who tells him in a very nasty scene, "Stop blowing your budget" on personal computers and hook all your users to a central system. This is one view of workgroup computing, although AT&T has the perverse idea that the intimidator is the forward thinker in the scene.

AT&T and, to an even greater extent, the similarly inclined European PTTs have major input into the OSI specification process.

VII. BIG AND SMALL PLAYERS CONSTRAIN OSI

The inclinations of AT&T and the PTTs are not the only constraints under which the OSI reference model was developed. A proprietary computer networking system, sold to a customer, becomes a cow which the manufacturer can milk for years. Complete and effective official standards make it difficult for a company to lock a customer into a proprietary system.
A customer could shop for the cheapest standard system, or could choose the offering of the manufacturer considered most reliable. It is proverbial that no MIS executive gets fired for choosing IBM. Small players have genuine reason to fear that a big player like Unisys, which no longer has a major proprietary computer networking installed base8, or AT&T, which never had a major proprietary computer networking installed base9, might try to establish itself in the minds of customers as the ultimate authority for the supply of true OSI connectivity. Thus, small players fear that a complete and effective official standard might only benefit the big players. Players like AT&T or Unisys fear IBM might hijack the standard. IBM would prefer to preserve its own proprietary base and avoid competing with the little guys on a cost/performance basis in what could turn into a commodity market. No such considerations were operative in the development of the ARPA Internet suite of protocols. DARPA had a specific need for intercomputer networking, was willing to pay top dollar to get the top experts in the intercomputer networking field to design the system right, and was less concerned by issues of competition (except perhaps for turf battles within the U.S. government). By contrast, almost all players who have input into the ISO standardization process have had reasons, and have apparently worked hard, to limit the effectiveness of OSI systems. With all the limitations which have been incorporated into the OSI design and suite of protocols, the small players have no reason to fear being overwhelmed by big players like Unisys or AT&T. The big players have the dilemma of either being non-standard or of providing an ineffective, incomplete but genuine international standard. Small vendors have lots of room to offer enhanced versions, perhaps drawing from more sophisticated internetworking concepts.
In any case, most small vendors, as well as DEC and IBM, are hedging their bets by offering both OSI and TCP/IP based products. IBM seems well positioned, with on-going projects at the University of Michigan, CMU, MIT, Brown and Stanford and with IBM's credibility in the business world, to set the standard for the business use of TCP/IP-style ____________________ 8 BNA and DCA seem hardly to count even to the Unisys management. 9 Connecting computer systems to the telephone network is not computer networking in any real sense. ____________________ networking. By contrast, no major manufacturer really seems to want to build OSI products, and with the current state of OSI, there is really no reason to buy OSI products.

VIII. MAP: FOLLOWING THE OSI MODEL

MAP shows perfectly the result of following the OSI model to produce a computer networking system. GM analysts sold MAP to GM's top management on the basis of predicted cost savings. Since GM engineers designed, sponsored and gave birth to MAP, I am not surprised that an internal GM study has found MAP products less expensive than non-MAP-compliant products. If the internal study had found anything else, heads would have to roll. Yet, as far as I know, neither IBM nor DEC has bought into the concept, although both companies would probably supply MAP products for sufficient profit. Ungermann-Bass and other similar vendors have also announced a disinclination to produce IEEE 802.4-based products. Allen-Bradley has chosen DECNET in preference to a MAP-based manufacturing and materials handling system. This defection of major manufacturers, vendors and customers from the MAP market has to limit the amount of MAP products available for customers to purchase. Nowadays, GM can purchase equipment for its manufacturing floor from a limited selection of products, which are the computer networking equivalent of bows and arrows, whereas in the past GM was stuck with rocks and knives.
Bows and arrows might be sufficient for the current GM applications; however, if my firm had designed MAP, GM would have the networking equivalent of nuclear weapons, for the MAP network would have been built around an internet with a genuine multi-medium, gatewayed, easily modifiable environment, so that in those locations where token-bus noise resistance is insufficient and where higher bandwidths might be needed, fiber media could be used. With the imminent deluge of fiber-based products, MAP looks excessively limited. (Actually, the MAP standards committees have shown some belated awareness that fiber might be useful in factories.)

IX. EXTENDING OSI VIA PROTOCOL CONVERTERS: QUO VADIT?

Interestingly enough, even when OSI systems try to overcome OSI limitations via protocol conversion, to provide access to some of the sophisticated resource sharing to which ARPA Internet users have long been accustomed, the service is specified in such a way as to place major limitations on the performance of more sophisticated applications. Just like IBM and other system manufacturers, I have no problem with providing to the customer, at sufficient profit, exactly those products which the customer specifies. Yet, if contracted for advice on a system like the NBS TCP/IP-to-OSI protocol converter IS (Intermediate System), described in "Getting there from here," Data Communications, August 1988, I might point out that such a system could easily double packet traffic on a single LAN, decrease network availability and reliability, prevent alternate routing, and harm throughput by creating a bottleneck at the IS, which must perform both TCP/IP and OSI protocol termination.

X. CONCLUSION

Official standardization by itself does not make a proposal good. Good standards generally were already good before they became official standards. The IEEE and other standards bodies generate lots of standards for systems which quickly pass into oblivion.
OSI was generated de novo, apparently with a conscious decision to ignore the already functioning ARPA Internet example. Unless a major rethinking of OSI (like redesigning OSI on the solid foundation of the internetworking concept) takes place in the near future, I must conclude that the ARPA Internet suite of protocols will be around for a long time and that users of OSI will be immensely disappointed by the cost, performance, flexibility and manageability of their networks.

I. Introduction
II. The Debate
III. Internetworking: The Key System Level Start Point
IV. "Greater Richness" Versus Developer Insight
A. OSI Network Management and Netview
B. FTAM is Dangerous
C. X.400: E-Mail as Good as the Postal Service
D. ARPA SMTP: Designing Mail and Messaging Right
V. Is the TCP/IP Protocol Suite "Static?"
VI. Enterprise Networking and Sophisticated Applications: Selling Intercomputer Networking
VII. Big and Small Players Constrain OSI
VIII. MAP: Following the OSI Model
IX. Extending OSI Via Protocol Converters: Quo vadit?
X. Conclusion
II. THE DEBATE

Part of the problem with debating is the lack of a mutually agreeable and understood set of concepts in which to frame the debate. I have yet to meet a communications engineer who had a sense of what a process might be.
Having taught working software and hardware engineers at Harvard University and AT&T, and having attended the international standards committees with many hardware, software and communications engineers, I have observed that overall system design concepts in computer networking need a lot more attention and understanding than they have been getting. Normally, in the standardization process, this lack of attention would not be serious because official standards bodies usually simply ratify already existing de facto standards, like Ethernet 2.0, which have already proven themselves. In the case of OSI, the ISO committee, for no obvious reason, chose to ignore the proven ARPA Internet de facto standard. ______________________________________________________________ | | | Architecture, | | Functional Specification, | | Design Specification | | | | |Nowadays, we read a lot of hype about CASE, object-oriented| |programming techniques and languages designed to facilitate| |or to ease the development of large software projects. These| |tools generally duck the hardest and most interesting system| |design and development problem, which is the design under| |constraint of major systems which somebody might actually| |want to buy. The hype avoids the real issue that student| |engineers are either simply not taught or do not learn| |system design in university engineering programs. If| |software engineers generally knew how to produce acceptable| |architectures, functional specifications and design| |specifications, the push for automatic tools would be much| |less. In fact, the development of CASE tools for automatic| |creation of system architectures, functional specifications| |and design specifications requires understanding exactly how| |to produce proper architectures and specifications.
But if| |engineers knew how to produce good architectures and| |specifications for software, presumably student engineers| |would receive reasonable instruction in producing| |architectures and specifications, and then there would be| |much less need for automatic CASE tools to produce system| |architectures, functional specifications or design| |specifications. | | | |Just as an architectural description of a building would| |point out that a building is Gothic or Georgian, an| |operating system architecture might point out that the| |operating system is multitasking, pre-emptively time-sliced| |with kernel privileged routines running at interrupt level.| |A system architecture would describe statically and| |abstractly the fundamental operating system entities. In| |Unix, the fundamental operating system entities on the user| |side would be the process and the file. The functional| |specification would describe the functionality to be| |provided to the user within the constraints of the| |architecture. A functional specification should not list the| |function calls used in the system. The design specification| |should specify the model by which the architecture is to be| |implemented to provide the desired functionality. A little| |pseudocode can be useful depending on the particular design| |specification detail level. Data structures, which are| |likely to change many times during implementation, should| |not appear in the design specification. | | | |Ancillary documents which treat financial and project| |management issues should be available to the development| |team. In all cases documents must be short. Otherwise,| |there is no assurance that all members of the development or| |product management teams will read and fully comprehend| |their documents. Detail and verbiage can be the enemy of| |clarity. Good architectures and functional specifications| |for moderately large systems like Unix generally require| |about 10-20 pages. 
A good high-level design specification| |for such a system would take about 25 pages. If the| |documents are longer, something may be wrong. The key is| |understanding what should not be included in such documents.| |The ISO OSI documents generally violate all these| |principles. | _____________________________________________________________| As a consequence, the ISO OSI committee and OSI boosters have an obligation to justify their viewpoint in debate and technical discussion with computer networking experts and system designers. Unfortunately, the debate over the use of OSI versus TCP/IP has so far suffered from three problems: o a lack of systems level viewpoint, o a lack of developer insight and o an hostility toward critical appraisal either technically or economically of the proposed ISO OSI standards. The following material is an attempt to engage in a critical analysis of OSI on the basis of system architecture, development principles and business economics. Note that in the following article unattributed quotations are taken from the itemized list which Stallings used in EOT to attempt to summarize my position. III. INTERNETWORKING: THE KEY SYSTEM LEVEL START POINT The most powerful system level architectural design concept in modern computer networking is internetworking. Internetworking is practically absent from the OSI reference model which concentrates on layering, which is an implementation technique, and on the virtual connection, which would be a feature of a proper architecture. Internetworking is good for the same reason Unix is good. The Unix architects and the ARPA Internet architects, after several missteps, concluded that the most useful designs are achieved by first choosing an effective computational or application model for the user and then figuring out how to implement this model on a particular set of hardware. 
Without taking a position on success or failure, I have the impression that the SNA and VMS architects by way of contrast set out to make the most effective use of their hardware. As a consequence both SNA and VMS are rather inflexible systems which are often rather inconvenient for users even though the hardware is often quite effectively used. Of course, starting from the user computational or application model does not preclude eventually making the most effective use of the hardware once the desired computational or application model has been implemented. ______________________________________________________________ | | | Internetworking | | | |The internetworking approach enables system designers and| |implementers to provide network users with a single, highly| |available, highly reliable, easily enlarged, easily| |modifiable, virtual network. The user does not need to know| |that this single virtual network is composed of a multitude| |of technologically heterogeneous wide area and local area| |networks with multiple domains of authority.| |Internetworking is achieved by means of a coherent system| |level view through the use of an obligatory internet| |protocol with ancillary monitoring protocol, gateways,| |exterior/internal gateway protocols and hierarchical domain| |name service. | | | |In the internetworking (not interworking) approach, if two| |hosts are attached to the same physical subnetwork of an| |internetwork, the hosts communicate directly with each| |other. If the hosts are attached to different physical| |subnetworks, the hosts communicate via gateways local to| |each host. Gateways understand and learn the internetwork| |topology dynamically at a subnetwork (not host level) and| |route data from the source subnetwork to destination| |subnetwork on a subnetwork hop by subnetwork hop basis. The| |detail of information required for routing and configuration| |is reduced by orders of magnitude. 
In the ARPA Internet,| |gateways learn topological information dynamically and| |provide reliability as well as availability by performing| |alternate routing of IP datagrams in cases of network| |congestion or network failures. | | | |An authoritative domain, within the ARPA Internet, can| |conceal from the rest of the internetwork a lot of internal| |structural detail because gateways in other domains need| |only know about gateways within their own domain and| |gateways between authoritative domains. Thus, logical| |subnetworks of an internetwork may also themselves be| |catenets (concatenated networks) with internal gateways| |connecting different physical subnetworks within each| |catenet. For example, to send traffic to MIT, a gateway at| |U.C. Berkeley only need know about gateways between MIT and| |other domains and need know nothing about the internal| |structure of the MIT domain's catenet. | _____________________________________________________________| The ARPA Internet is one realization of the internetworking model. While I am not particularly enamored of some of the ARPA protocol features (nor of Unix features by the way),1 the ARPA Internet works well with capacity for expansion. SINet (described in "How to grow a world-class X.25 network," Data Communications, May 1988) is based on the CSNet subnetwork within the ARPA Internet. ____________________ 1 The use of local-IP-address, local-TCP-port, remote-IP-address, remote-TCP-port quadruples to uniquely identify a given TCP virtual circuit is an impediment to providing greater reliability and availability for a non-gateway multihomed host. An even larger problem with TCP/IP could lie in the possibly non-optimal partitioning of functionality between TCP, IP and ICMP. ____________________ ______________________________________________________________ | | | WANs and LANs | | | |OSI actually has an architecture. 
Like the ARPANET, OSI| |predicates the existence of a communications subnet| |consisting of communications subnet processors (or subnet| |switches) and communications subnet access processors (or| |access switches). Access switches are also known as IMPs| |(Interface Message Processors) or PSNs (Packet Switch Nodes)| |in the ARPANET context. PSPDN (Packet-Switched Public Data| |Network) terminology usually designates access switches| |simply as packet switches. The communication subnet may be| |hierarchical and may contain adjunct processors other than| |subnet and access switches. The internal architecture of| |the communications subnet is quite distinct from the| |architecture presented to end-point hosts. The| |communications subnet may use protocols completely different| |from the protocols used for communication between two end-| |point hosts. An end-point host receives and transmits data| |to its attached access switch via a subnet access protocol.| |The communications subnet is responsible for taking a packet| |received at an access switch and transporting the packet to| |the access switch attached to the destination end-point| |host. The existence of such a well-defined communications| |subnet is the hallmark of a Wide-Area Network (WAN). | | |Unfortunately, from the standpoint of making computer| |networking generally and inexpensively available, access and| |subnet switches are expensive devices to build which need| |fairly complicated control software. DECNET gets around| |some of these problems by incorporating the communications| |subnet logic into end-point hosts. 
As a consequence,| |customers who wish to run DECNET typically have to purchase| |much more powerful machines than they might otherwise use.| |For the situation of a communications subnet which need| |support connectivity for only a small number of hosts, LAN| |developers found a more cost effective solution by| |developing a degenerate form of packet switches based on| |hardware-logic packet filtering rather than software| |controlled packet switching. These degenerate packet| |switches are installed in the end-point hosts, are accessed| |often via DMA2 as LAN controllers and are attached to| |extremely simplified communications subnets like coaxial| |cables. Direct host-to-switch (controller) access,| |degenerate packet-switching (packet-filtering) and| |simplified communications subnets are the distinguishing| |features of LANs. | | | |While ISO was ignoring the whole internetworking issue of| |providing universal connectivity between end-point hosts| |attached to different physical networks within internetworks| |composed of many WANs and even more LANs concatenated| |together, and while the IEEE was confusing all the issues by| |presenting as an end-to-end protocol a communications subnet| |protocol (IEEE 802.2) based on a communications subnet| |access protocol (X.25 level 2), the ARPA Internet community| |developed an internet architecture capable of providing the| |universal connectivity and resource sharing which business,| |technical and academic users really want and need. | ______________________________________________________________ ____________________ 2 Some machines like the Prime 50 Series do not use genuine DMA but instead use inefficient microcoded I/O. IBM machines generally use more efficient and somewhat more expensive internal switching. ____________________ The backbone of the ARPA Internet is the ARPANET. The ARPANET is a packet switched subnetwork within the ARPA Internet. The ARPANET communications subnet access protocol is 1822. 
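The packet-filtering decision that distinguishes a LAN controller (a degenerate packet switch) from a software-controlled subnet switch is simple enough to sketch in a few lines. The following toy model is illustrative only; the addresses and the `promiscuous` flag are hypothetical, and real controllers make this decision in hardware logic:

```python
# Toy model of the accept/reject filtering a LAN controller performs
# instead of software packet switching: a frame is passed up to the
# host only if it is addressed to this station or to everyone.
# Addresses are hypothetical examples.

BROADCAST = "ff:ff:ff:ff:ff:ff"

def accept_frame(station_addr, frame_dest, promiscuous=False):
    """Decide whether the controller delivers this frame to the host."""
    if promiscuous:                      # diagnostic mode: accept everything
        return True
    return frame_dest == station_addr or frame_dest == BROADCAST

# A station ignores traffic addressed to other hosts on the same cable.
assert accept_frame("08:00:2b:aa:01:02", "08:00:2b:aa:01:02")
assert accept_frame("08:00:2b:aa:01:02", BROADCAST)
assert not accept_frame("08:00:2b:aa:01:02", "08:00:2b:bb:03:04")
```

Because the filter is a fixed comparison rather than a routing computation, it can be done at wire speed in inexpensive hardware, which is exactly the cost advantage claimed above for LANs over WAN access switches.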
CSNet was set up as an experiment to demonstrate that the ARPA Internet architecture and suite of protocols would function on a packet network whose communications subnet access protocol is X.25. Using an X.25-accessed packet network instead of an 1822-accessed packet network makes sense despite the glaring deficiencies of X.25,3 because X.25 controllers are available for many more systems than 1822 controllers and because many proprietary networking schemes like SNA and DECNET can use X.25-accessed packet networks but cannot use a packet network accessed by 1822. Yet, calling SINet a world class X.25 network is as reasonable as calling the ARPANET a world class 1822 network.4 Schlumberger has produced a world class TCP/IP network whose wires can be shared with SNA and DECNET hosts. Schlumberger has shown enthusiasm for the flexible, effective ARPANET suite of protocols but has given no support in the development of SINet to the idea that business should prepare to migrate to OSI based networks. I would be an OSI-enthusiast if ISO had reinvented internetworking correctly. Unfortunately, the ISO OSI reference model which first appeared in 1978 clearly ignored all the ARPA community work on intercomputer networking and resource sharing which was easily accessible in the literature of the time. Instead of building the OSI network on an internetworking foundation, ISO standardized on the older less effective host-to-packet-switch-to-packet-data- subnet-to-packet-switch-to-host (NCP) model which the DARPA ____________________ 3 For example, X.25 does flow control on the host to packet switch connection on the basis of packets transmitted rather than on the basis of consumption of advertised memory window. The exchange of lots of little packets on an X.25 connection can cause continual transmission throttling even though the receiver has lots of space for incoming data. 
4 Or as much sense as calling Ethernet LANs DMA-based networks because the packet switches (an Ethernet controller is a degenerate case of a packet switch) on the LAN are typically accessed by DMA. ____________________ had abandoned 5 years earlier because of lack of flexibility and other problems. ______________________________________________________________ | | | Pieces of the ARPA Internet Conceptually | | | | | | | | | | | | | | (No Graphics) | | | | | ______________________________________________________________ Nowadays, mostly in response to US vendors and DARPA, pieces of the ARPA Internet architecture have resurfaced in the OSI reference model quite incoherently rather than as a consequence of an integrated correct architectural viewpoint. Connectionless-mode transmission is described in ISO/7498/DAD1 which is an addendum to ISO 7498 and not a core document. Because connectionless-mode transmission is defined in an addendum, the procedure apparently need not be implemented, and UK GOSIP, for example, explicitly rejects the use of the connectionless transmission mode. The introduction to the 1986 ISO 7498/DAD1 explicitly states, as follows, that ISO was extremely reluctant to incorporate a genuine datagram based protocol which could be used for internetworking. ISO 7498 describes the Reference Model of Open Systems Interconnection. It is the intention of that International standard that the Reference model should establish a framework for coordinating the development of existing and future standards for the interconnection of systems. The assumption that connection is a fundamental prerequisite for communication in the OSI environment permeates the Reference Model and is one of the most useful and important unifying concepts of the architecture which it describes. 
However, since the International Standard was produced it has been realized that this deeply-rooted connection orientation unnecessarily limits the power and scope of the Reference Model, since it excludes important classes of applications and important classes of communication network technology which have a fundamentally connectionless nature. An OSI connectionless-mode protocol packet may undergo something like fragmentation, but from the literature, this form of segmentation as used in OSI networks is hardly equivalent to ARPA Internet fragmentation. Stallings states the following in Handbook of Computer-Communications Standards, the Open Systems Interconnection (OSI) Model and OSI-Related Standards, on p. 18 (the only reference to anything resembling fragmentation in the book). Whether the application entity sends data in messages or in a continuous stream, lower level protocols may need to break up the data into blocks of some smaller bounded size. This process is called segmentation. Such a process is not equivalent to ARPA Internet fragmentation. In the ARPA Internet, fragmentation is the process whereby the gateway software operating at the IP layer converts a single IP packet into several separate IP packets and then routes the packets. Each ARPA IP fragment has a full IP header. It is not obvious that each OSI segment has a complete packet header. The ARPA fragmentation procedure is not carried out by lower protocol layers. An N-layer packet in OSI is segmented at layer N-1 while the packet is routed (relayed) at layer N+1. This partitioning of basic internetworking procedures across layer 2 (N-1), layer 3 (N) and layer 4 (N+1) violates the following principles described in ISO/DIS 7498: Information Processing Systems -- Open Systems Interconnection -- Basic Reference Model. 
P1: do not create so many layers as to make the system engineering task of describing and integrating the layers more difficult than necessary [ISO uses three layers where one could be used]; P2: create a boundary at a point where the description of services can be small and the number of interactions across the boundary are minimized [by putting per-packet relaying in layer 4 at least two interactions across the boundary are required per packet]; P5: select boundaries at a point which past experience has demonstrated to be successful [the ARPA Internet layering boundaries which combine the addressing, fragmentation and routing in one layer have proven successful]; P6: create a layer where there is a need for a different level of abstraction in the handling of data, e.g. morphology, syntax, semantics [fragmentation, routing, and network addressing all seem quite naturally to be part of network layer semantics as the ARPA Internet example shows]; P9: allow changes of functions or protocols to be made within a layer without affecting other layers [I would think changing the manner of addressing at layer 3 would affect relaying at layer 4]. Even if OSI N-1 segmentation and N+1 relaying could be used in the same way as fragmentation and routing in the ARPA Internet, it takes a lot more apparatus than simply permitting the use of the ISO connectionless "internet" protocol to achieve internetworking. The OSI documents almost concede this point because ISO 7498/DAD 1, ISO/DIS 8473 (Information Processing Systems -- Data Communications -- Protocol for Providing Connectionless-Mode Network Service) actually provide for N-layer segmentation (actually fragmentation) and N-layer routing right in the network layer in addition to the OSI standard N-1 segmentation and N+1 relaying. 
Providing such functionality directly in the network layer actually seems in greater accordance with OSI design principles, but if ISO is really conceding this point, ISO should go back and redesign the system rather than leaving this mishmash of N-1 segmentation, N segmentation, N routing and N+1 relaying. The current connectionless-mode network service is still insufficient for internetworking because the gateway protocols are not present and the connectionless-mode error PDUs (Protocol Data Units) do not provide the necessary ICMP functionality. The documents also indicate a major confusion between an internetwork gateway, which connects different subnetworks of one catenet (concatenated network), and a simple bridge, which connects several separate physical networks into a single network at the link layer, or an interworking unit, which is a subnet switch connecting two different communications subnets either under different administrative authorities or using different internal protocols.5 Tanenbaum writes the following about the ____________________ 5 This confusion is most distressing from a security standpoint. The November 2 ARPA Internet (Cornell) virus attack shows that one of the major threats to network security is insider attack which is a problem with even the most isolated corporate network. Because many ARPA Internet network authorities were assuming insider good behavior, ARPA Internet network administrators often did not erect security barriers or close trapdoors. Nevertheless, gateways have far more potential than bridges or interworking units to provide reasonable firewalls to hinder and frustrate insider attack. MIT/Project Athena which makes judicious use of gateways and which does not assume insider good behavior was relatively unaffected by the virus. Any document which confuses gateways, bridges and interworking units is encouraging security laxity. ____________________ connectionless-mode network service in Computer Networks, p. 321. 
In the OSI model, internetworking is done in the network layer. In all honesty, this is not one of the areas in which ISO has devised a model that has met with universal acclaim (network security is another one).6 From looking at the documents, one gets the feeling that internetworking was hastily grafted onto the main structure at the last minute. In particular, the objections from the ARPA Internet community did not carry as much weight as they perhaps should have, inasmuch as DARPA had 10 years experience running an internet with hundreds of interconnected networks, and had a good idea of what worked in practice and what did not. Internetworking, the key concept of modern computer networking, exists within the OSI reference model as a conceptual wart which violates even the OSI principles. ISO was afraid that, had it not tacked internetworking onto the OSI model, DARPA and that part of the US computer industry with experience in modern computer networking would have absolutely rejected the OSI reference model as unusable. ____________________ 6 Actually, I find ISO 7498/2 (Security Architecture) to be one of the more reasonable ISO documents. I would disagree that simple encryption is the only form of security which should be performed at the link layer because it seems sensible that if a multilevel secure mini is replaced by a cluster of PCs on a LAN, multilevel security might be desirable at the link layer. Providing multilevel security at the link layer would require more than simple encryption. Still, ISO 7498/2 has the virtue of not pretending to solve completely the network security problem. The document gives instead a framework identifying fundamental concepts and building blocks for developing a security system in a networked environment. ____________________ IV. 
"GREATER RICHNESS" VERSUS DEVELOPER INSIGHT In view of this major conceptual flaw which OSI has with respect to internetworking, no one should be surprised that instead of tight technical discussion and reasoning, implementers and designers like me are continually subjected to vague assertions of "greater richness" of the OSI protocols over the ARPA Internet protocols. In ARPA Internet RFCs, real-world practical discussion is common. I would not mind similar developer insight or even hints about the integration of these OSI protocol interpreters into genuine operating systems participating in an OSI interoperable environment. The customers should realize "greater richness" costs a lot of extra money even if a lot of the added features are useless to the customer. "Greater richness" might necessitate the use of a much more powerful processor if "greater richness" forced much more obligatory but purposeless protocol processing overhead. "Greater richness" might also represent a bad or less than optimal partitioning of the problem. A. OSI NETWORK MANAGEMENT AND NETVIEW Netview has so much "greater richness" than the network management protocols and systems under development in the ARPA Internet context that I have real problems with the standardization of Netview into OSI network management as the obligatory user interface and data analysis system. Netview is big, costly, hard to implement, and extremely demanding on the rest of the network management system. As OSI network management apparently subsumes most of the capabilities of Arpanet ICMP (Internet Control Message Protocol) which is a sine qua non for internetworking, I am as a developer rather distressed that full blown OSI network management (possibly including a full implementation of FTAM) might have to run on a poor little laser printer with a dumb Ethernet interface card and not much processing power. B. 
FTAM IS DANGEROUS The "greater richness" of FTAM seems to lie in the ability to transmit single records and in the ability to restart aborted file transfer sessions. Transmission of single records seems fairly useless in the general case since operating systems like Unix and DOS do not base their file systems on records while the records of file systems like those of Primos and VMS have no relationship whatsoever to one another. Including single record or partial file transfer in the remote transfer utility is a good example of bad partitioning of the problem. This capability really belongs in a separate network file system. A network file system should be separate from the remote file transfer system because the major issues in security, performance, data encoding translation and locating objects to be transferred are different in major ways for the two systems. The ability to restart aborted file transfers is more dangerous than helpful. If the transfer were aborted in an OSI network, it could have been aborted because one or both of the end hosts died or because some piece of the network died. If the network died, a checkpointed file transfer can probably be restarted. If a host died, on the other hand, it may have gradually gone insane and the checkpoints may be useless. The checkpoints could only be guaranteed if end hosts have special self-diagnosing hardware (which is expensive). In the absence of special hardware and ways of determining exactly why a file transfer aborted, the file transfer must be restarted from the beginning. By the way, even with the greater richness of FTAM, it is not clear to me that a file could be transferred by FTAM from IBM PC A to a Prime Series 50 to IBM PC B in such a way that the file on PC A and on PC B could be guaranteed to be identical. C. X.400: E-MAIL AS GOOD AS THE POSTAL SERVICE As currently used and envisioned, the X.400 family message handling also has "greater richness." 
X.400 seems to include binary-encoded arbitrary message-transmission, simple mail exchange and notification provided by a Submission and Delivery Entity (SDE). In comparison with ARPA SMTP (Simple Mail Transfer Protocol), X.400 is overly complicated with hordes of User Agent Entities (UAEs), Message Transfer Agent Entities (MTAEs) and SDEs scurrying around potentially eating up -- especially during periods of high traffic -- lots of computer cycles on originator, target and intermediate host systems because the source UAE has to transfer mail through the local MTAE and intermediate MTAEs on a hop-by-hop basis to get to the target machine.7 ____________________ 7 I have to admit that if I were implementing X.400, I would probably implement the local UAE and MTAE in one process. The CCITT specification does not strictly forbid this design, but the specification does seem to discourage strongly such a design. I consider it a major flaw with a protocol specification when the simplest design is so strongly contraindicated. It does seem to be obligatory that mail traffic which passes through an Intermediate System (IS) must pass through an MTAE running on that IS. ____________________ The design is particularly obnoxious because X.400 increases the number of ways mail transmission can fail by using so many intermediate entities above the transport layer. The SMTP architecture is, by contrast, simple and direct. The user mail program connects to the target system SMTP daemon by a reliable byte stream (like a TCP virtual circuit) and transfers the mail. Hop-by-hop transfers through intermediate systems are possible when needed. One SMTP daemon simply connects to another the same way a user mail program connects to an SMTP daemon. 
The relatively greater complexity and obscurity of X.400 arises because a major purpose of X.400 seems to be to intermingle intercomputer mail service and telephony services like telex or teletex to fit computer networking into the PTT (Post, Telegraph & Telephone administration) model of data communications (not an unreasonable goal for a CCITT protocol specification but probably not the best technical or cost-effective design for the typical customer). Mail gateways are apparently supposed to handle document interchange and conversion. Document interchange and conversion is a really hard problem requiring detailed knowledge at least of word processor file formats, operating system architecture, data encoding, and machine architecture. It may be impossible to develop a satisfactory network representation which can handle all possible document content, language and source/target hardware combinations as well as provide interconversion with traditional telephonic data transmission encodings. The cost of development of such a system might be hard to justify, and a customer might have a hard time justifying paying the price a manufacturer would probably have to charge for this product. A network file system or remote file transfer provides a much more reasonable means of document sharing or interchange than tacking an e-mail address onto a file with a complicated internal structure, sending this file through the mail system and then removing the addressing information before putting the document through the appropriate document or graphics handler. A NETASCII-based e-mail system corresponds exactly to the obvious mapping of the typical physical letter, which does not usually contain complicated pictorial or tabular data, to an electronic letter and is sufficient for practically all electronic mail traffic. 
Special hybrid systems can be developed for that extremely tiny fraction of traffic for which NETASCII representations may be insufficient and for which a network file system or FTP may be insufficient. In a correct partitioning, electronic mail should be kept completely separate from telephony services, document interchange and document conversion. ______________________________________________________________ | | | X.400 Mail Connections | | | | | | | | | | | | | | (No Graphics) | | | | | ______________________________________________________________ D. ARPA SMTP: DESIGNING MAIL AND MESSAGING RIGHT The MIT environment at Project Athena, where IBM and DEC are conducting a major experiment in the productization of academic software, provides an instructive example of the differences between e-mail, messaging and notification. The mail system used at MIT is an implementation of the basic SMTP-based ARPA Internet mail system. More than four years ago the ARPA Internet mail system was extremely powerful and world-spanning. It enabled then and still enables electronic mail to reach users on any of well over 100,000 hosts in N. America, Europe, large portions of E. Asia and Israel. The Citicorp network (described in "How one firm created its own global electronic mail network," Data Communications, June 1988, p. 167), while probably sufficient for Citicorp's current needs, connects an insignificant number of CPUs (47), provides no potential for connectivity outside the Citicorp domain of authority and will probably not scale well with respect to routing or configuration as it grows. The MIT environment is complex and purposely (apparently in the strategies of DEC and IBM) anticipates the sort of environment which should become typical within the business world within the next few years. MIT is an authoritative domain within the ARPA Internet. The gateways out of the MIT domain communicate with gateways in other domains via the Exterior Gateway Protocol (EGP). 
Internally, the gateway protocols currently in use are GGP, RIP and HELLO. The MIT domain is composed of a multitude of Ethernet and other types of local area networks connected by a fiber-optic backbone physically and by gateway machines logically. This use of gateways provides firewalls between the different physical networks so that little sins (temporary network meltdowns caused by Chernobyl packets) do not become big sins propagating themselves throughout the network. The gatewayed architecture of the MIT network also permits necessary traffic engineering by putting file system, paging and boot servers on the same physical network with their most likely clients so that this sort of traffic need not propagate throughout the complete MIT domain. Difficult-to-reach locations achieve connectivity by means of non-switched telephone links. Since MIT has its own 5ESS, these links may be converted to ISDN at some point. While there are some minis and mainframes in the network, the vast majority of hosts within the MIT network are personal workstations with high resolution graphics displays of the Vaxstation and RT/PC type and personal computers of the IBM PC, PC/XT and PC/AT type. A few Apollos, Suns, Sonys and various workstations of the 80386 type as well as Lisp Machines and PCs from other manufacturers like Apple are also on the air. Most of the workstations are public. When a user logs in to such a workstation, after appropriate Kerberos (MIT security system) authentication, he has full access to his own network files and directory as well as access to those resources within the network which he has the right to use. 
To assist the administration of the MIT domain within the ARPA Internet, several network processes might be continually sending (possibly non-ASCII) event messages to a network management server which might, every few hours, perform some data analysis on the received messages and then format a summary mail message to send to a network administrator. This mail message would be placed in that network administrator's mailbox by his mail home's SMTP daemon, which then might check whether this network administrator is reachable somewhere within the local domain (maybe on a PC with a network interface which was recently turned on and then was dynamically assigned an IP address by a local authoritative dynamic IP address server after appropriate authentication). If this administrator is available, the SMTP daemon might notify him via the notification service (maybe by popping up a window on the administrator's display) that he has received mail, which he could read from his remote location via a post office protocol.

I have seen the above system being developed on top of the basic "static" TCP/IP protocol suite by researchers at MIT, DEC and IBM over the last 4 years. X.400 contains a lot of this MIT network functionality mishmashed together, but as a customer or designer I prefer the much more modular MIT mail system. It is an extensible, dynamically configurable TCP/IP-based architecture from which a customer could choose those pieces of the system which he needs. The MIT system requires relatively little static configuration. Yet by properly choosing the system pieces, coding an appropriate filter program and setting up a tiny amount of appropriate configuration data, a customer could even set up a portal to send e-mail to a fax machine. In comparison, X.400 requires complicated directory services and an immense amount of static configuration about the end user and end-user machine to compensate for its internetworking-deficient or internetworking-incompatible addressing scheme.
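The monitoring-and-mail pipeline described above (event messages collected by a management server, periodically reduced to a summary, then formatted as an ordinary SMTP mail message) can be sketched in a few lines. This is an illustrative sketch only; the hostnames, field names and functions are hypothetical, not taken from the actual MIT software.

```python
# Sketch of the reporting pipeline: raw event records are reduced to
# per-(host, kind) counts, then formatted as a plain-text mail message
# that an SMTP daemon could deliver to the administrator's mailbox.
from collections import Counter

def summarize_events(events):
    """Reduce raw event records to per-(host, kind) counts."""
    return Counter((e["host"], e["kind"]) for e in events)

def format_summary_mail(counts, admin="netadmin@athena.mit.edu"):
    """Format the analysis as a plain-text mail message (headers + body)."""
    lines = ["To: " + admin, "Subject: network event summary", ""]
    for (host, kind), n in sorted(counts.items()):
        lines.append("%s: %d x %s" % (host, n, kind))
    return "\n".join(lines)

events = [{"host": "gw-1", "kind": "link-down"},
          {"host": "gw-1", "kind": "link-down"},
          {"host": "fs-3", "kind": "disk-full"}]
print(format_summary_mail(summarize_events(events)))
```

The point of the modular design is that each stage (collection, analysis, SMTP delivery, notification, post-office retrieval) is a separately replaceable piece rather than one monolithic X.400-style system.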
The need for such a level of static configuration is unfortunate for system users because in the real world a PC or workstation might easily be moved from one LAN to another or might easily be replaced by a workstation or PC of another type. An MIT-style mail system could also be much cheaper to develop, and consequently much less costly to purchase, than an X.400 mail system simply because it represents a much better partitioning of the problem. One or two engineers produced each module of the MIT mail system in approximately 6 months. Because of complexity and obscurity, the development of X.400 products (I saw an example at Prime) is measured in staff-years.

The executive who chooses X.400 will cost his firm an immense amount of money, which will look utterly wasted when his firm joins with another firm in some venture and the top executives of both firms try to exchange mail via their X.400 mail systems. Simple mail exchange between such systems would likely be very hard to impossible because the different corporations could easily have made permissible but incompatible choices in their initial system set-up. At the very least, complete reconfiguration of both systems could be necessary. Had the firms chosen an ARPA Internet mail system like the MIT system, once both firms had ARPA Internet connectivity or set up a domain-to-domain gateway, mail would simply work.

______________________________________________________________
  [Figure: SMTP Mail Connections -- no graphics available]
______________________________________________________________

V. IS THE TCP/IP PROTOCOL SUITE "STATIC?"
Because of the mail system development in progress at MIT, DEC and IBM; the X development which I and others have done and which is still continuing; SUN NFS (Network File System) development; IBM AFS (Andrew File System) development; Xenix-Net development; Kerberos development; and the plethora of other protocol systems being developed within the ARPA Internet context (including the VMTP transaction processing system and commercial distributed database systems like a networked Ingres), I am at the very least puzzled by Mr. Stallings' assertion that "[it] is the military standards that appear on procurement specifications and that have driven the development of interoperable commercially available TCP/IP products."

______________________________________________________________

  Partitioning the Problem

  The X window system is an example of a clearly and well
  partitioned system. In windowing, the first piece of the problem
  is virtualizing the high-resolution raster-graphics device.
  Individual applications do not want or need to know about the
  details of the hardware. Thus, to provide hardware independence,
  applications should deal only with virtual high-resolution
  raster-graphics devices, and each application should know only
  about its own virtual high-resolution raster-graphics devices
  (windows). The next piece of the problem is to translate between
  the virtual high-resolution raster-graphics devices and the
  physical high-resolution raster-graphics device (the display).
  The final part of the problem lies in managing the windows on
  the display. With a little consideration, this problem clearly
  differentiates itself from translating between virtual and
  physical high-resolution raster-graphics devices.

  In the X window system, communication between the application
  and its windows is handled by the X library and those libraries
  built on top of the basic X library. Virtual-to-physical and
  physical-to-virtual translation is handled by the X server. X
  display management is handled by the X window manager.

  After partitioning the problem, careful consideration of display
  management leads to the conclusion that if all windows on a
  display are treated as "children" of a single "root" window, all
  of which "belong" in some sense to the window manager, then the
  X window manager itself becomes an ordinary application which
  talks to the X server via the X library. As a consequence,
  developers can easily implement different display management
  strategies as ordinary applications without having to "hack" the
  operating system. The server itself may be partitioned (under
  operating systems which support the concept) into a privileged
  portion which directly accesses the display hardware and a
  non-privileged portion which requests services from the
  privileged part of the server. Under Unix, the privileged part
  of the server goes into the display, mouse and keyboard drivers
  while the non-privileged part becomes an ordinary application.
  In common parlance, "X server" usually refers to the
  non-privileged part, which is implemented as an ordinary
  application.

  The last step in realizing the X window system is choosing the
  communications mechanism between the X server and ordinary
  applications or the display manager. Because the problem was
  nicely partitioned, the communications problem is completely
  extrinsic to the windowing problem and lives in an easily
  replaceable interface module. The initial choice at MIT was to
  use TCP/IP virtual circuits, which provided immediate network
  transparency; but because X only requires sequenced, reliable
  byte-streams, DECNET VCs or shared-memory communications
  mechanisms can easily replace TCP/IP virtual circuits according
  to the requirements of the target environment. Systems built on
  well-partitioned approaches to solving problems often show such
  flexibility, both because of the modularity of the approach and
  because a successful partitioning of a problem will often so
  increase the understanding of the original problem that
  developers can perceive greater tractability and simplicity in
  the original and related problems than they might have
  originally seen.
______________________________________________________________

It seems somewhat propagandistic to label the TCP/IP protocol suite static and military. New RFCs are continually being generated, as Paul Strauss pointed out in his September article. Such new protocols only become military standards slowly because the military standardization of new protocols and systems is a long, tedious political process which, once completed, may require expensive conformance and verification procedures. After all, neither the obligatory ICMP nor the immensely useful UDP (User Datagram Protocol) has an associated military standard. Often, after reviewing those products generated by market forces, the US military specifies and acquires products which go beyond existing military standards. By the way, hierarchical domain name servers and X are used on MILNET.

VI. ENTERPRISE NETWORKING AND SOPHISTICATED APPLICATIONS: SELLING INTERCOMPUTER NETWORKING

The military are not the only users "more interested in sophisticated applications than in a slightly enhanced version of Kermit." The whole DEC enterprise networking strategy is predicated on this observation.
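The point made in the sidebar above (that the X protocol engine needs only a sequenced, reliable byte-stream, so the transport underneath is an interchangeable module) can be sketched as follows. The names here are illustrative, not taken from the real X sources; the same request code would run unchanged over a TCP connection, a DECnet VC wrapper, or the local socketpair used below.

```python
# Sketch of a transport-independent protocol layer: the "protocol"
# functions only see a byte-stream abstraction, never the transport.
import socket

class StreamTransport:
    """Wraps any connected socket-like object as a reliable byte-stream."""
    def __init__(self, sock):
        self.sock = sock
    def send(self, data):
        self.sock.sendall(data)
    def recv_exactly(self, n):
        buf = b""
        while len(buf) < n:
            chunk = self.sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("stream closed")
            buf += chunk
        return buf

# The protocol layer: a toy request is a 1-byte opcode, 1-byte length,
# then the payload. It never knows which transport was chosen.
def send_request(t, opcode, payload):
    t.send(bytes([opcode, len(payload)]) + payload)

def read_request(t):
    opcode, length = t.recv_exactly(2)
    return opcode, t.recv_exactly(length)

# A local shared-memory-style transport via socketpair; a TCP socket
# from socket.create_connection() would work identically.
a, b = socket.socketpair()
send_request(StreamTransport(a), 42, b"hello")
print(read_request(StreamTransport(b)))   # (42, b'hello')
```

Swapping the transport means swapping only the object handed to StreamTransport, which is exactly the "easily replaceable interface module" the sidebar describes.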
Stallings ignored my reference to network file systems as a sophisticated networking application. Yet, in several consulting jobs, I have seen brokers and investment bankers make extensive use of network file systems. I also believe network transparent graphics will be popular in the business world. At Salomon Brothers both IBM PCs and SUN workstations are extensively used. With X, it is possible for a PC user to run a SUN application remotely which uses the PC as the output device. This capability seems highly desirable in the Salomon Brothers environment. Unfortunately "OSI is unlikely ever to provide for [such] resource sharing because it is industry-driven."

Wayne Rash Jr., a member of the professional staff of American Management Systems, Inc. (Arlington, Virginia) who acts as a US federal government microcomputer consultant, writes the following in "Is More Always Better," Byte, September 1988, p. 131:

   You've probably seen the AT&T television ads about this trend [toward
   downsizing and the development of LAN-based resource-sharing systems].
   They feature two executives, one of whom is equipping his office with
   stand-alone microcomputers. He's being intimidated by another
   executive, who tells him in a very nasty scene, "Stop blowing your
   budget" on personal computers and hook all your users to a central
   system. This is one view of workgroup computing, although AT&T has
   the perverse idea that the intimidator is the forward thinker in the
   scene.

AT&T, and to an even greater extent the similarly inclined European PTTs, have major input into the OSI specification.

VII. BIG AND SMALL PLAYERS CONSTRAIN OSI

The inclinations of AT&T and the PTTs are not the only constraints under which the OSI reference model was developed. A proprietary computer networking system, sold to a customer, becomes a cow which the manufacturer can milk for years. Complete and effective official standards make it difficult for a company to lock a customer into a proprietary system.
A customer could shop for the cheapest standard system, or could choose the offering of the manufacturer considered most reliable. It is proverbial that no MIS executive gets fired for choosing IBM. Small players have genuine reason to fear that a big player like Unisys, which no longer has a major proprietary computer networking installed base [8], or AT&T, which never had a major proprietary computer networking installed base [9], might try to establish themselves in the minds of customers as the ultimate authority for the supply of true OSI connectivity. Thus, small players fear that a complete and effective official standard might only benefit the big players. Players like AT&T or Unisys fear IBM might hijack the standard. IBM would prefer to preserve its own proprietary base and avoid competing with the little guys on a cost/performance basis in what could turn into a commodity market.

No such considerations were operative in the development of the ARPA Internet suite of protocols. DARPA had a specific need for intercomputer networking, was willing to pay top dollar to get the top experts in the intercomputer networking field to design the system right and was less concerned by issues of competition (except perhaps for turf battles within the U.S. government). By contrast, almost all players who have input into the ISO standardization process have had reasons, and have apparently worked hard, to limit the effectiveness of OSI systems. With all the limitations which have been incorporated into the OSI design and suite of protocols, the small players have no reason to fear being overwhelmed by big players like Unisys or AT&T. The big players have the dilemma of either being non-standard or of providing an ineffective, incomplete but genuine international standard. Small vendors have lots of room to offer enhanced versions, perhaps drawing from more sophisticated internetworking concepts.
In any case, most small vendors, as well as DEC and IBM, are hedging their bets by offering both OSI and TCP/IP based products. IBM seems well positioned, with on-going projects at the University of Michigan, CMU, MIT, Brown and Stanford and with IBM's credibility in the business world, to set the standard for the business use of TCP/IP-style networking. By contrast, no major manufacturer really seems to want to build OSI products, and with the current state of OSI, there is really no reason to buy OSI products.

____________________
[8] BNA and DCA seem hardly to count even to the Unisys management.
[9] Connecting computer systems to the telephone network is not computer networking in any real sense.
____________________

VIII. MAP: FOLLOWING THE OSI MODEL

MAP shows perfectly the result of following the OSI model to produce a computer networking system. GM analysts sold MAP to GM's top management on the basis of predicted cost savings. Since GM engineers designed, sponsored and gave birth to MAP, I am not surprised that an internal GM study has found MAP products less expensive than non-MAP-compliant products. If the internal study found anything else, heads would have to roll. Yet, as far as I know, neither IBM nor DEC has bought into the concept, although both companies would probably supply MAP products for sufficient profit. Ungermann-Bass and other similar vendors have also announced a disinclination to produce IEEE 802.4 based products. Allen-Bradley has chosen DECNET in preference to a MAP-based manufacturing and materials handling system. This defection of major manufacturers, vendors and customers from the MAP market has to limit the amount of MAP products available for customers to purchase. Nowadays, GM can purchase equipment for its manufacturing floor from a limited selection of products, which are the computer networking equivalent of bows and arrows, whereas in the past GM was stuck with rocks and knives.
Bows and arrows might be sufficient for the current GM applications; however, if my firm had designed MAP, GM would have the networking equivalent of nuclear weapons, for the MAP network would have been built around an internet with a genuine multimedium, gatewayed, easily modifiable environment, so that in those locations where token-bus noise resistance is insufficient and where higher bandwidths might be needed, fiber media could be used. With the imminent deluge of fiber-based products, MAP looks excessively limited. (Actually, the MAP standards committees have shown some belated awareness that fiber might be useful in factories.)

IX. EXTENDING OSI VIA PROTOCOL CONVERTERS: QUO VADIT?

Interestingly enough, even when OSI systems try to overcome OSI limitations via protocol conversion to provide access to some of the sophisticated resource sharing to which ARPA Internet users have long been accustomed, the service is specified in such a way as to place major limitations on the performance of more sophisticated applications. Just like IBM and other system manufacturers, I have no problem with providing to the customer, at sufficient profit, exactly those products which the customer specifies. Yet, if contracted for advice on a system like the NBS TCP/IP-to-OSI protocol converter IS (Intermediate System), described in "Getting there from here," Data Communications, August 1988, I might point out that such a system could easily double packet traffic on a single LAN, decrease network availability and reliability, prevent alternate routing, and harm throughput by creating a bottleneck at the IS, which must perform both TCP/IP and OSI protocol termination.

X. CONCLUSION

Official standardization simply by itself does not make a proposal good. Good standards generally were already good before they became official standards. The IEEE and other standards bodies generate lots of standards for systems which quickly pass into oblivion.
OSI was generated de novo, apparently with a conscious decision to ignore the already functioning ARPA Internet example. Unless a major rethinking of OSI (like redesigning OSI on the solid foundation of the internetworking concept) takes place in the near future, I must conclude that the ARPA Internet suite of protocols will be around for a long time and that users of OSI will be immensely disappointed by the cost, performance, flexibility and manageability of their networks.

Contents

I. Introduction
II. The Debate
III. Internetworking: The Key System Level Start Point
IV. "Greater Richness" Versus Developer Insight
   A. OSI Network Management and Netview
   B. FTAM is Dangerous
   C. X.400: E-Mail as Good as the Postal Service
   D. ARPA SMTP: Designing Mail and Messaging Right
V. Is the TCP/IP Protocol Suite "Static?"
VI. Enterprise Networking and Sophisticated Applications: Selling Intercomputer Networking
VII. Big and Small Players Constrain OSI
VIII. MAP: Following the OSI Model
IX. Extending OSI Via Protocol Converters: Quo vadit?
X. Conclusion
huitema@mirsa.UUCP (Christian Huitema) (03/17/89)
From article <2145@cpoint.UUCP>, by martillo@cpoint.UUCP (Joachim Carlo Santos Martillo):
> The following is an article which I am going to submit to Data
> Communications in reply to a column which William Stallings
> did on me a few months ago. I think people in this forum might
> be interested, and I would not mind some comments.
>
> Round 2 in the great TCP/IP versus OSI Debate
> ....

I always thought that the date for April fools was April 1st, not March 15th..

Christian Huitema <huitema@mirsa.inria.fr>
jos@idca.tds.PHILIPS.nl (Jos Vos) (03/20/89)
In article <146@mirsa.UUCP> huitema@mirsa.UUCP (Christian Huitema) writes:
:From article <2145@cpoint.UUCP>, by martillo@cpoint.UUCP (Joachim Carlo Santos Martillo):
:> Round 2 in the great TCP/IP versus OSI Debate
:< ....
:
:I always thought that the date for April fools was April 1st, not March 15th..
I think you're wrong: it's obviously March 16th... :-(
--
-- ###### Jos Vos ###### Internet jos@idca.tds.philips.nl ######
-- ###### ###### UUCP ...!mcvax!philapd!jos ######
larry@rnms1.uucp (0000-Larry Swift(0000)) (03/23/89)
In article <2145@cpoint.UUCP> martillo@cpoint.UUCP (Joachim Carlo Santos Martillo) writes:
>be interested, and I would not mind some comments.

OK, here goes. Some of your remarks seem to be deliberately baiting entire groups of people, to wit:

> costs. At the same time, with no obvious architecture, with
                           ^^^^^^^^^^^^^^^^^^^^^^^
> theoretical or idealized networks and while actively
> ignoring the work being done in the ARPA Internet context,
> the ISO OSI standards committees were developing basic
> remote terminal and file transfer protocols. The ISO OSI

I won't argue the merit of this statement one way or the other. But the fact is that there have been lots and lots of people working on architectural issues in the ISO context since the very early days who could easily get offended by the off-hand nature of this remark.

> |Since June, 1988 William Stallings and I have been engaging|
> |in a guerilla debate in the reader's forum and the EOT|
        ^^^^^^^^

I would offer that a lot of people who might otherwise be interested in this subject will not bother with this debate because of the "guerilla" nature of your discourse. I'm certainly less interested than I otherwise would be.

> |hominem attacks. I apologize for those comments in my forum|
> |letter which might be construed as personal attacks on|
> |William Stallings. |

'nuff said.

> the debate. I have yet to meet a communications engineer
> who had a sense of what a process might be. Having taught

In other words, you are the only exception? Give us a break, here.

Maybe I'll read the rest later.

Larry Swift                  email: larry@pdn.paradyne.com
AT&T Paradyne, LG-129        Phone: (813) 530-8605
P. O. Box 2826
Largo, FL, 34649-9981
She's old and she's creaky, but she holds!
pete@relay.nixctc.de (Pete Delaney) (04/15/89)
Come on guys, where is the followup to Joachim Carlo Santos Martillo's paper "Round 2 in the great TCP/IP versus OSI Debate"? I expected a bit more than everyone being quiet. Is Darth Vader watching or something?

I've been working on OSI for a few years and it seems that most of what Joachim is presenting is right. I'm a little disappointed in the lack of explanations about why this absurdity has developed the way it has. I am sure a lot of developers love OSI over arpanet; I'd like to hear their story. Also, it might be nice to hear why arpanet is migrating to OSI; do the arpa developers really like ASN1 more than postscript? I found programming in ASN1 with the NBS meta compiler very time consuming. What about using postscript instead of ASN1 for network management and Manufacturing Messaging? Is it really that important to check, cross-check, and then triple-check as is done in OSI and ASN1? Can't we just assume, as Joachim points out, that the connection above transport is reliable and then send programs like postscript to do what we want?

Pete Delaney - Nixdorf UCC       | pete@relay.NIXCTC.DE  Preferred Addr
Loffel Strasse 3                 | pyramid!nixctc!pete   UUCP from Calf
7000 Stuttgart 70 West Germany   | Phone: +49 (711) 7685-128
pete@relay.nixctc.de (Pete Delaney) (04/16/89)
In article <146@mirsa.UUCP>, huitema@mirsa.UUCP (Christian Huitema) writes:
> > Round 2 in the great TCP/IP versus OSI Debate
>
> I always thought that the date for April fools was April 1st,
> not March 15th..
>
> Christian Huitema <huitema@mirsa.inria.fr>

Nope, it's the 15th :) How about a serious comeback from one of Europe's leading OSI gurus?

Pete Delaney - Nixdorf UCC       | pete@relay.NIXCTC.DE  Preferred Addr
Loffel Strasse 3                 | pyramid!nixctc!pete   UUCP from Calf
7000 Stuttgart 70 West Germany   | Phone: +49 (711) 7685-128
sylvain@roxy.chorus.fr (Sylvain Langlois) (04/20/89)
In "TCP/IP versus OSI" (<1042@nixctc.DE>), pete@relay.nixctc.de (Pete Delaney) writes:
>[..]. Also, it might be nice to hear why
>arpnet is migrating to OSI;

Arpanet has to move to *real* international standards someday, even if the protocol suite it uses today has shown itself far superior to the currently available OSI implementations. Bringing OSI implementations up to the quality of today's TCP/IP and its well-known applications is, in my opinion, a question of time. People working on the ISODE (Marshall Rose, Steve Kille and co.) are doing a wonderful job in this way. Once 4.4BSD OSI support is available, I guess people will be more motivated than they are today.

>[...] do the arpa developers really like ASN1
>more than postscript?

PostScript has nothing to do here.

> Can't we just assume like Joachim points out
>that the connection above transport is reliable

The problem, it seems to me, is that reliability has different meanings, depending on the context in which the Transport Service is used.

Sylvain
----------------
Sylvain Langlois                  "Dogmatic attachment to the supposed merits
(sylvain@chorus.fr)                of a particular structure hinders the search
(sylvain%chorus.fr%uunet.uu.net)   for an appropriate structure" (Robert Fripp)
cheng@homxc.ATT.COM (W.CHENG) (05/04/89)
A side note of the TCP/IP vs OSI discussion. I wonder if someone can point to me where protocol conversion between IP and ISO CLNP is discussed or done. I'm interested in protocol conversion in general but IP <--> CLNP in particular. Any help or info. is very appreciated.
tozz@hpindda.HP.COM (Bob Tausworthe) (05/12/89)
>A side note of the TCP/IP vs OSI discussion. I wonder if someone can
>point to me where protocol conversion between IP and ISO CLNP is
>discussed or done. I'm interested in protocol conversion in general but
>IP <--> CLNP in particular. Any help or info. is very appreciated.
----------

The most common way to get IP <--> CLNP interworking is to use a transport bridge. This is what Wollongong and others seem to be doing.

tozz@hpda.hp.com
dcrocker@AHWAHNEE.STANFORD.EDU (Dave Crocker) (05/13/89)
The transport service bridge essentially maps TCP to TP4. It requires that both networks/users/end-points be running the same application protocol. This reduces to requiring that the TCP user also be running OSI applications. If the two sides must remain "pure" to their base technology -- i.e., there may be no change to the environments on either side -- then you are stuck with using application gateways. These implement all of both stacks, all the way up to and including both sets of applications, with an "eighth" layer that translates from one application to the other. Dave
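[The transport-bridge idea Dave describes (the bridge terminates a connection on each side and passes the application bytes through unmodified, translating only the transport underneath) can be sketched as a simple byte relay. This editor's sketch uses plain sockets on both sides; a real TSB would speak TCP on one side and TP4 on the other, which is exactly why the application protocol must be the same end to end.]

```python
# Minimal sketch of transport bridging: copy application bytes between
# two independently terminated connections. The application protocol
# (here a fake "SMTP HELO") crosses the bridge untouched.
import socket
import threading

def bridge(side_a, side_b):
    """Relay bytes in both directions until either side closes."""
    def pump(src, dst):
        while True:
            data = src.recv(4096)
            if not data:          # peer closed; propagate the close
                dst.close()
                return
            dst.sendall(data)
    for s, d in ((side_a, side_b), (side_b, side_a)):
        threading.Thread(target=pump, args=(s, d), daemon=True).start()

# Demo: client <-> bridge <-> server, all in-process via socketpair.
client, bridge_a = socket.socketpair()
bridge_b, server = socket.socketpair()
bridge(bridge_a, bridge_b)
client.sendall(b"SMTP HELO")       # application bytes are not translated
print(server.recv(4096))           # b'SMTP HELO'
```

The application gateway Dave contrasts this with would instead parse the bytes on one side and re-emit a different application protocol on the other, which is why it needs both full stacks plus a translation layer.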
tozz%hpindlm@HP-SDE.SDE.HP.COM (05/13/89)
>The transport service bridge essentially maps TCP to TP4. It requires
>that both networks/users/end-points be running the same application
>protocol. This reduces to requiring that the TCP user also be running
>OSI applications.

True to a point. The topology I was addressing was

+-------------+  +-----------+  (      )  +-----------+  +-------------+
|OSI Appl/CLNP|--|trns bridge|--(  IP  )--|trns bridge|--|CLNP/OSI Appl|
+-------------+  +-----------+  (      )  +-----------+  +-------------+

{although RFC 1070 proposes another solution to this problem (CLNP/IP)}

OR

+-----------+  +-----------+  (      )  +-----------+  +-----------+
|TCP Appl/IP|--|trns bridge|--( CLNP )--|trns bridge|--|IP/TCP Appl|
+-----------+  +-----------+  (      )  +-----------+  +-----------+

Transport bridges take care of both of these. Note that with the inclusion of RFC 1006 (ISODE) a transport bridge gives you other connectivity possibilities:

+---------------+  +-----------+  (      )  +-------------------------+
| OSI Appl/CLNP |--|trns bridge|--(  IP  )--| IP/TCP/RFC1006/OSI Appl |
+---------------+  +-----------+  (      )  +-------------------------+

The specific point I was responding to was IP <--> CLNP protocol conversion. I was not responding to any other part of the TCP/IP vs OSI discussion.

>If the two sides must remain "pure" to their base technology -- i.e.,
>there may be no change to the environments on either side -- then you
>are stuck with using application gateways. These implement all of
>both stacks, all the way up to and including both sets of applications,
>with an "eighth" layer that translates from one application to the
>other.

Yes, very true; however, IP <--> CLNP won't buy you that; in fact, TP <--> TCP won't buy you that. Let's face it, two machines must talk the same protocol in order to communicate. If it's a layer 7 protocol, then they must talk the same protocol directly, or somewhere there is an intermediate machine which acts as an application gateway. I know of no other way.

>Dave

thanks for the input

bob tausworthe
dcrocker@AHWAHNEE.STANFORD.EDU (Dave Crocker) (05/15/89)
Bob,

I think that you may be attributing too much functionality to the TSB and that you may want to use it in some ways that could be better done in other ways...

You cite three situations. The first has OSI, down to CLNP, running in an IP network. You put in a TSB to allow CLNP to run over OSI Transport and then have it translated to run over TCP. This sounds strange, to say the least. Running CLNP encapsulated over IP, directly, where CLNP views IP as a link-level protocol, is far cleaner.

The second situation reverses the relationships, to have IP running over a CLNP network. Again, the solution is far cleaner to have IP view CLNP as a link protocol and CLNP think that IP is a Transport protocol. I do not see any benefits of placing TSB-like technology in between and, in fact, it does not sound as if it would work.

Your last scenario uses a TSB to map OSI, down to CLNP, over to OSI, down to session, over RFC1006 and IP. (ISODE only goes down to session.) This truly does not compute.

A rule for determining how/when/where to do conversions: Create two protocol stacks. Look for the highest and lowest points of departure. You need to translate in between those points, at the highest level. e.g.,

    OSI APPL        TCP APPL
    ...             ...

Means that nothing is in common and you translate above it; i.e., an application gateway.

    OSI APPL        OSI APPL
    OSI PRES        OSI PRES
    OSI SESS        OSI SESS
    OSI TPORT       TCP
    ...             ...

Means that you need to translate at the transport level. This is the exact, and only, case that the documented Transport Service Bridge will solve. To the extent that the other scenarios you describe can and should be done, they are independent of this particular kind of TSB. I.e., they are different products.

Another example:

    OSI APPL/ROSE   OSI APPL/ROSE
    OSI PRES        TCP
    OSI SESS        ...
    OSI TPORT
    ...

The right-hand column is what is being called "light-weight presentation" and is what Marshall did for the NetMan CMIP-over-TCP group.
To interwork with the pure OSI world, this requires a ROSB (Remote Operations Service Bridge). Dave
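[Dave's rule above ("create two protocol stacks, look for the point of departure, translate there") can be made mechanical. A small editor's sketch, using Dave's own two example stacks: walk the stacks top-down and report the topmost layer at which they diverge; everything above that point is common and needs no translation.]

```python
# Sketch of the "point of departure" rule for choosing a conversion layer.
def translation_layer(stack_a, stack_b):
    """Return the topmost (layer_a, layer_b) pair where the two stacks
    diverge, or None if the compared layers are all identical."""
    for layer_a, layer_b in zip(stack_a, stack_b):
        if layer_a != layer_b:
            return (layer_a, layer_b)
    return None

osi_stack   = ["OSI APPL", "OSI PRES", "OSI SESS", "OSI TPORT"]
mixed_stack = ["OSI APPL", "OSI PRES", "OSI SESS", "TCP"]
app_a       = ["OSI APPL"]
app_b       = ["TCP APPL"]

# Divergence at the transport layer -> a Transport Service Bridge.
print(translation_layer(osi_stack, mixed_stack))  # ('OSI TPORT', 'TCP')
# Divergence at the very top -> an application gateway.
print(translation_layer(app_a, app_b))            # ('OSI APPL', 'TCP APPL')
```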
mckee@MITRE.MITRE.ORG (H. Craig McKee) (05/15/89)
>From: Dave Crocker <dcrocker@ahwahnee.Stanford.EDU> > >If the two sides must remain "pure" to their base technology -- i.e., >there may be no change to the environments on either side -- then you >are stuck with using application gateways. These implement all of >both stacks, all the way up to and including both sets of applications, >with an "eighth" layer that translates from one application to the >other. I heard a "war story" concerning the 1982 cutover from NCP to TCP: That application bridges were built for Mail, FTP and TELNET, but only the Mail and FTP bridges proved successful; the TELNET bridge was too difficult or slow, or both. I think the list would like to hear from those who were "present at the creation." Regards - Craig
tozz%hpindlm@HP-SDE.SDE.HP.COM (05/16/89)
>Bob, >I think that you may be importing too much functionality to the TSB and >that you may want to use it in some ways that could be better done in other >ways... Dave, I was never arguing what was the best way. A person on the net was interested in CLNP<-->IP translation. I was simply desiminating information. I was giving him the scoop on what is presently available in the REAL world. Running IP over CLNP or vice-versa are being done EXPERIMENTALY ONLY. In fact, RFC 1070 states that very clearly. I won't argue the fact this may be the direction to take for OSI/ARPA connectivity, however, real-world (meaning comercial/government) people using TCP/IP and wanting a way to incorporate ISO functionality do not presently have 1070-type functionality available to them. >You cite three situations. The first has OSI, down to CLNP, running in >an IP network. You put in a TSB to allow CLNP to run over OSI Transport >and then have it translated to run over TCP. This sounds strange, to say >the least. Running CLNP encapsulated over IP, directly, where CLNP views >IP as a link-level protocol, is far cleaner. First off, what you have up there does sound strange, to say the least. CLNP over TP? Don't you mean TP over CLNP? If I make this assumption about what you were trying to say You put in a TSB to allow OSI Transport to run over CLNP [what normally occurs] and then have TCP run over it [it==CLNP???]. Close, but no cigar. A TSB such as Wollangong has produced translates between IP and CLNP by bridging at the transport layer. So your not putting TCP over CLNP at all. Maybe the diagrams should have been more explicit. Try this on for size: Node A Node B OSI Appl. Pres Sess Transport Bridge TP TP TCP CLNP CLNP IP | subnet (802.x, X.25) | | +--------------------------+ +---------- . . . In short, the TSB gives you a method to interwork between CLNP and IP. 
If you look at my diagrams in the previous message, you will see that two TSBs were used: one to convert to TCP/IP, and one to convert back to TP/CLNP. Or, in the case of ISODE, only one is used (to translate from TP/CLNP to TP/TCP/IP). In the first case, you aren't using the Transport functionality, merely the layer 3 interworking functionality. It's in the second case that you use the Transport functionality.

>The second situation reverses the relationships, to have IP running over
>a CLNP network.  Again, the solution is far cleaner to have IP view CLNP
>as a link protocol and CLNP think that IP is a Transport protocol.

Yes, but is it available today to a customer? Does IP/CLNP buy you anything? Only if there are people out there with CLNP networks (and we know how many of those there are ;-) ) who want to run existing TCP/IP applications over their CLNP networks. A valid need, but one which may be solvable by something other than IP/CLNP. IP/CLNP is harder (read "non-cleaner") than you may think; for instance, CLNP has 20-byte network addresses. Which ARPA routing protocols can support addresses of this size?

The interesting point is that what you propose is two Subnetwork Dependent Convergence Protocols -- one for CLNP/IP, one for IP/CLNP -- both of which would have to be accepted by at least the Internet community (more likely ISO/ANSI X3S3.3). That would only increase the complexity of the Internet layer (not to mention Routing), and the internet layer/routing is far too complex already. The TSB gives both of these abilities in the same package, using technology which exists today.

>not see any benefits of placing TSB-like technology in between and, in fact,
>it does not sound as if it would work.

In fact, it does work and is an existing product marketed by Wollongong.

>Your last scenario uses a TSB to map OSI, down to CLNP, over to OSI, down
>to session, over RFC1006 and IP.  (ISODE only goes down to session.)
>This truly does not compute.
I agree, what you wrote above does not compute. Yes, ISODE sits on top of transport, yet it incorporates transport as well: TP0, to be precise. The way RFC 1006 gets ISO applications to run over TCP/IP is to have the stack Appl/Pres/Sess/TP0/TCP/IP. Wollongong's TSB covers this problem, so you can talk "pure" OSI Application to a TCP/IP ISODE OSI Application.

>A rule for determining how/when/where to do conversions:  Create two
>protocol stacks.  Look for the highest and lowest points of departure.
>You need to translate in between those points, at the highest level.

Thanks!! ;-) Let me write that one down! Seriously, the part you left out is that rather than simply looking at the two stacks, it is sometimes just as important to look at the connectivity/topology of the network in question.

>e.g.,
>OSI APPL        TCP APPL
>...             ...
>
>Means that nothing is in common and you translate above it; i.e., an
>application gateway.
>
>OSI APPL        OSI APPL
>OSI PRES        OSI PRES
>OSI SESS        OSI SESS
>OSI TPORT       TCP
>...             ...

Whoops!! Read the above; those are not ISODE stacks.

>Means that you need to translate at transport level.  This is the exact, and
>only, case that the documented Transport Service Bridge will solve.  To the
>extent that the other scenarios you describe can and should be done, they
>are independent of this particular kind of TSB.  I.e., they are different
>products.

The important point you are forgetting is that TSBs allow more than just the ability to convert between protocol stacks. They also allow interworking at the INTERNET layer. They solve much more than just the TCP<-->TP problem.

Sorry if the above discussion sounds somewhat caustic, but I got your reply early Monday morning. I guess the points I was trying to make are:

1) I'm not supporting one method over the other. I am telling what's available to customers TODAY. I will agree to the hilt that CLNP/IP is a good way to do this, although it's not a panacea and is not mature enough to be available to customers.
I will argue that IP/CLNP is not as good an idea, and I hope that we can come up with a better alternative when that type of connectivity is required.

2) I am clarifying that a TSB gives you more than just conversion between layer 4 protocols: it gives you the ability to hook ISO nodes into a TCP/IP network TODAY.

3) I am clarifying the semantic problems in our last discussion.

>Dave
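For reference, the RFC 1006 mechanism discussed above works by framing each TP0 TPDU in a 4-byte TPKT header (a version octet fixed at 3, a reserved octet, and a 16-bit total length) carried over a TCP connection. A minimal sketch of that framing (the sample TPDU bytes are hypothetical):

```python
import struct

TPKT_VERSION = 3  # fixed by RFC 1006

def tpkt_wrap(tpdu: bytes) -> bytes:
    """Prefix a TP0 TPDU with the RFC 1006 TPKT header:
    1 byte version (3), 1 byte reserved (0), 2 bytes total length
    (header included), all in network byte order."""
    return struct.pack("!BBH", TPKT_VERSION, 0, len(tpdu) + 4) + tpdu

def tpkt_unwrap(packet: bytes) -> bytes:
    """Validate a TPKT frame and return the enclosed TPDU."""
    version, _, length = struct.unpack("!BBH", packet[:4])
    if version != TPKT_VERSION or length != len(packet):
        raise ValueError("malformed TPKT packet")
    return packet[4:]
```

The length field is what lets the receiver recover TPDU boundaries from TCP's undifferentiated byte stream, which is essentially all the "network service" that TP0 asks of TCP in this stack.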
ron@ron.rutgers.edu (Ron Natalie) (05/16/89)
By the time the end of 1982 rolled around, there was not supposed to be any bridging at all. Link 0 was supposed to be shut off on all IMPs, forbidding NCP traffic entirely. There were a few people granted dispensations, and it is easy to see that you'd only need a host with both NCP and TCP/IP to make a mail gateway, but I don't recall that any officially existed. I'm sure nothing automatic was done for FTP, and telnet was loosely supported, as the TACs were able to initiate both NCP and TCP telnet sessions for quite some time after the changeover (probably only deleted to get the code space back to implement TACACS).

Fortunately this was the last Jan 1 protocol change, after much griping by people who blew a couple of Christmas holidays: first on the long-leaders conversion and then on TCP.

-Ron
dcrocker@AHWAHNEE.STANFORD.EDU (Dave Crocker) (05/17/89)
Bob,

Well, today is Tuesday, so perhaps the clarity of our exchange will improve. Then again...

First of all, the diagram in your recent note certainly matches the one that we produced while I was at Wollongong, describing the TSB. The three that you had in your original note did not seem to map to this one, so I thought that you had something else in mind. Perhaps most of my confusion stems from the reference to IP/CLNP. The TSB does not know anything about IP or CLNP and does not see them.

You are correct that ISODE implements TP0 (sorry for my excessive simplification of omitting the TP0/RFC1006 portions of the stack) and interfaces to TCP and TP4 (and X.25, ...). However, it does not know about the lower layers; hence it does not seem correct to me to refer to it as CLNP/IP translation. The distinction might seem like nit-picking, but is not intended to be. There often is confusion about how the layers interact and how to do translations between incompatible layers. My own experience suggests that painfully careful detail must be maintained in such discussions.

With respect to the TSB, it performs two wonderful functions: it allows a TCP-based host, running an OSI application, to talk with an OSI-based host, running the SAME application. Further, it will allow a TP0-based OSI host to talk to a TP4-based OSI host.

Last item: the TSB is not an off-the-shelf product. At the time that Marshall Rose and I left Wollongong, it had been demonstrated at various conferences but had not yet received field-testing (alpha or beta), nor was the documentation written, though source material was prepared for it. I do not know what the current release schedule is for it.

Over the year-and-a-half of working with Marshall to develop a coherent view of TCP/OSI transition and co-existence, I came to believe that a number of very different technologies would be required. The TSB is only one of them.
In a sufficiently constrained operational environment, it might be the only tool required, along with ISODE-type software on the end-user TCP hosts. In most networks, I believe that several other components will be required.

The CLNP-over-IP encapsulation is likely to be quite significant to such other options, since it allows hosts to run dual stacks without requiring the backbone (i.e., routers) to be dual-stack. The EON (Experimental OSI Network) effort plans to use this also, to get basic field testing in large networks.

To interwork such hosts with others running in pure OSI networks, you will need something that Marshall likes to call a Network Tunnel and I am tempted to call a CLNP-router. It looks just like an IP router on one side and a CLNP router on the other. Its peculiarity derives from the fact that the CLNP-router functionality dominates and it views IP as a link layer...

Dave
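The encapsulation described above amounts to carrying a whole CLNP datagram as the payload of an ordinary IP datagram, so that IP plays the role of a link layer. A minimal sketch, with several stated assumptions: the IP protocol number 80 is the one assigned to ISO-IP, the header built here is a simplified options-free IPv4 header, and the addresses and sample CLNP bytes are hypothetical.

```python
import socket
import struct

ISO_IP_PROTO = 80  # assigned protocol number for ISO-IP (CLNP in IP)

def ipv4_checksum(header: bytes) -> int:
    """Standard 16-bit one's-complement Internet header checksum."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def encapsulate_clnp(clnp_pdu: bytes, src: str, dst: str) -> bytes:
    """Wrap a CLNP PDU in a minimal 20-byte IPv4 header, so the
    existing IP backbone routes it like any other IP datagram and
    only the dual-stack hosts (or the tunnel) ever look inside."""
    total_len = 20 + len(clnp_pdu)
    header = struct.pack("!BBHHHBBH4s4s",
                         0x45, 0,            # version 4, IHL 5; TOS 0
                         total_len,
                         0, 0,               # identification; flags/frag
                         64, ISO_IP_PROTO,   # TTL; protocol = ISO-IP
                         0,                  # checksum placeholder
                         socket.inet_aton(src),
                         socket.inet_aton(dst))
    csum = ipv4_checksum(header)
    return header[:10] + struct.pack("!H", csum) + header[12:] + clnp_pdu
```

The point the sketch illustrates is structural: decapsulation is just stripping the outer 20 bytes, and because no connection state is involved, any tunnel endpoint can do it, which is what makes alternate routing around a failed tunnel possible.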
syackey@secola.Columbia.NCR.COM (Steve Yackey) (05/17/89)
Dave,

I'm not sure I understand everything I know about TSBs, tunnels, life in general, etc.... Is there any difference in the reliability offered by a TSB vs. a Network Tunnel (CLNP-router)? Can a TSB provide checksum assurance from end-point to end-point? Ditto for a Tunnel?

Thanx
dcrocker@AHWAHNEE.STANFORD.EDU (Dave Crocker) (05/18/89)
Steve,

You ask just the right questions...

A TSB terminates a TCP connection and a TP4 connection. Hence, it represents a single point of failure. Further, the connection terminations mean that there is no end-to-end checksum. (A minor additional point is that this means some extra overhead, since two checksums are computed.)

Just so the point is not lost, note that the primary benefit of the TSB is that its use, in conjunction with ISODE-like software at the TCP-based end-user host, means that the underlying IP network AND the kernel operating system of the end-user's host do not need to be modified. ALL of the software can run in the user's application space. The only (potential) impact upon the end-user's host manager is having to run the application server (responder), if that is a desired service.

The Tunnel operates at the CLNP/IP datagram level. Since it is stateless, multiple Tunnels can be made available and data can be alternately routed, providing robustness against Tunnel failure. Since it does not participate in the Transport-level ("end-to-end") mechanism, there is only one checksum (TP4's) and it is legitimately end-to-end, as it was intended to be.

Just so this point is not lost, either, the Tunnel requires a modified dual stack of TCP and OSI in the TCP end-user's host kernel operating system. (Note that the modification is to have CLNP talk to IP, rather than to the link-layer driver.) However, the network administrator continues to be able to be unaware of the game. Only the host is modified, not the IP backbone, although the Tunnel needs to be added to the backbone, looking -- as far as the net administrator is concerned -- like just another IP router.

Dave
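The end-to-end checksum at issue here is the TP4/CLNP variant of the Fletcher checksum (two running sums mod 255, per the ISO 8473 scheme), which differs from TCP's one's-complement sum. A minimal sketch of the arithmetic, under the stated assumption that the checksum field offset and sample PDU bytes used below are hypothetical:

```python
def fletcher_check(pdu: bytes) -> bool:
    """ISO 8473-style verification: a PDU checks out when both
    running sums come to zero mod 255 over the entire PDU."""
    c0 = c1 = 0
    for byte in pdu:
        c0 = (c0 + byte) % 255
        c1 = (c1 + c0) % 255
    return c0 == 0 and c1 == 0

def fletcher_generate(data: bytes, n: int) -> bytes:
    """Compute the two checksum octets to place at positions n and
    n+1 (0-based, currently zeroed) of data, so that the completed
    PDU verifies.  Zero results are written as 255, which is
    congruent mod 255, since a zero field means 'checksum unused'."""
    c0 = c1 = 0
    for byte in data:
        c0 = (c0 + byte) % 255
        c1 = (c1 + c0) % 255
    length = len(data)
    x = ((length - n - 1) * c0 - c1) % 255
    y = (c1 - (length - n) * c0) % 255
    return bytes([x if x else 255, y if y else 255])
```

Because the sums are computed over the whole PDU by the endpoints themselves, a stateless tunnel in the middle never touches them, which is exactly why the tunnel preserves end-to-end assurance while the TSB, which recomputes a fresh checksum on each leg, cannot.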