fkittred@spca.bbn.com (Fletcher Kittredge) (06/05/91)
I develop software application systems which run on a wide variety of
platforms. I am not aware of my company being a member of either OSF or UI.
I don't care about the origin of either de facto or de jure standards; all I
care about is that they work and are widely supported. You might say that
people like me are the real 'users' of standards. From my perspective, the
DEcorum suite is the best standard system since the advent of X.

>In article <1720009@hpbbi4.HP.COM> Mark Lufkin writes:
>OSF chose NCS 2.0: "...a joint submission of Digital and Hewlett-Packard."
>This is not a shippable product. It is not appropriate to compare an
>unavailable NCS 2.0 against Sun's RPCGen. Sun has announced availability of
>the Netwise RPC Tool in the Second Half of 1990 ("Distributed Computing Road
>Map--An Outlook on the Future of Open Network Computing," Sun Microsystems,
>Inc., dated April 30, 1990). A comparison of NCS 2.0 and these new Sun
>offerings would be appropriate.

You can make this statement, but that does not make it true. You should be
aware of the weakness of your position and offer at least some argument.
Actually reading the OSF documents shows two items of bearing: first, OSF is
interested not only in blessing de facto standards, but in moving technology
forward. Second, all that was required was that a technology be demonstrable
to the OSF. They were interested in leading technologies which were
demonstrably and testably correct. This approach has worked well in the
past; look at X and Kerberos. Both were widely adopted after they were
demonstrably a good idea, but before they were commercial-grade software.

Fletcher E. Kittredge          fkittred@bbn.com
Platforms and Tools Group
BBN Software Products Company
10 Fawcett St.
Cambridge, MA. 02138
pae@athena.mit.edu (Philip Earnhardt) (06/05/91)
In article <57382@bbn.BBN.COM> Fletcher Kittredge writes:
>>In article <1720009@hpbbi4.HP.COM> Mark Lufkin writes:
>>OSF chose NCS 2.0: "...a joint submission of Digital and Hewlett-Packard."
>>This is not a shippable product. It is not appropriate to compare an
>>unavailable NCS 2.0 against Sun's RPCGen. Sun has announced availability of
>>the Netwise RPC Tool in the Second Half of 1990 ("Distributed Computing Road
>>Map--An Outlook on the Future of Open Network Computing," Sun Microsystems,
>>Inc., dated April 30, 1990). A comparison of NCS 2.0 and these new Sun
>>offerings would be appropriate.
>
> You can make this statement, but that does not make it true. You should
> be aware of the weakness of your position and offer at least some argument.
> Actually reading the OSF documents shows two items of bearing: OSF is
> interested not only in blessing de facto standards, but in moving
> technology forward. Second, all that was required was that a technology be
> demonstrable to the OSF. They were interested in leading technologies
> which were demonstrably and testably correct.

Hmmm. What I was trying to say was that it would be appropriate to compare
two products that are either both shipping or both unavailable. I also think
the products are comparable by your criteria. Since you've read the OSF
documents, you know that the Sun/Netwise offering was the other "finalist"
technology for the OSF RPC. It is new technology and was demonstrated to the
OSF. In fact, the reasons that the OSF cited for not choosing the
Sun/Netwise product were the ones that Mark Lufkin quoted in his article. As
an impartial observer, I would be interested in your comments on those
specific technical issues discussed in my previous posting.

> This approach has worked well in the past; look at X
> and Kerberos. Both were widely adopted after they were demonstrably a good
> idea, but before they were commercial-grade software.

Perhaps comparisons to OSF/1 would be more appropriate.
Isn't Motif using an unmodified MIT X11R4? Also, my understanding is that
OSF is making some small enhancements to the current version of MIT
Kerberos. On the other hand, there will be large changes to NCS for the NCS
2.0 offering.

> Fletcher E. Kittredge          fkittred@bbn.com
> Platforms and Tools Group
> BBN Software Products Company
> 10 Fawcett St.
> Cambridge, MA. 02138

In <23527@uflorida.cis.ufl.EDU> Brian Bartholomew writes:
> If the customer had used a more capable OS in the first place, there
> would be no reason to supplement its features via a networking package
> add-on. So instead, the U*IX users, the vast majority of users for
> whom this standard is being written in the first place, are to be
> penalized in wasted kernel memory, dead code, and unused features.

There are many places where our customers are using customization. Some are
to get around software/hardware deficiencies in their environment. Others
are more generic customizations to the RPC: asynchronous RPC calls,
call-back RPC calls (where the server makes a nested RPC call back to the
client), custom naming schemes, custom security code, auditing code,
debugging code, etc.

Customization is not needed for all RPC applications. If the user doesn't
specify customization, then no customization code is generated. Neither of
the two RPC offerings has code in kernel space. The "waste" problem is not
an issue.

Finally, it's not at all clear that UNIX is the platform for the "vast
majority of users" of distributed computing. Netwise has submitted its
technology to the UNIX standards organizations; that does not imply that our
technology is prejudiced towards UNIX systems. For better or worse, there
are many, many non-UNIX systems out in the real world, and a lot of them
have deficiencies in their hardware/software capabilities. A distributed
computing standard that ignores these environments will be a hard sell to
the companies owning those machines.

Phil Earnhardt   Netwise, Inc.   2477 55th St.
Boulder, CO 80301   Phone: 303-442-8280   UUCP: onecom!wldrdg!pae
My opinions do not reflect any official position of Netwise.

In "some article that someone forwarded me" Phil Earnhardt writes:
>The OSF rationale is referring to Netwise's customization feature. Both
>the server and client sides of an RPC call are modeled as state
>machines. A user can add hooks to modify what happens in a particular
>state or change the state transitions.
>
>There are 2 implications in the OSF document. The first is that
>customization of the RPC specification is not valuable. Netwise's
>experience has been that customization is important to our customers.
>The second implication is that customization creates possible
>interoperability problems.

To cast the argument in the most extreme terms (always a good way to have a
rational conversation :-), many people probably think the "asm" feature of C
compilers is a good idea. That doesn't mean it should be included in ANSI C.

I have no doubt that people have found ways to use Netwise's customization
feature to do things that they've found useful. This doesn't mean it's a
good idea. It's simply too unconstrained. Further, I suspect it's frequently
used to get around deficiencies in the base system. I suppose you could say
"better to have a way to get around deficiencies than not," but I'd be
afraid that the existence of this "loophole" feature ends up acting as a
panacea, relieving people of the responsibility of creating well-defined,
general-purpose constructs.

>Finally, it's not clear what an "RPC protocol" is or what it would
>mean to change one. My feeling is that the RPC Specification File is
>specifying the protocol for some set of RPC calls. The customization
>is part of that Specification File.
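For readers who haven't seen the Netwise tool, here is a rough sketch (my
own invention, not Netwise's actual API -- all names are hypothetical) of
the customization model the quoted paragraph describes: the client stub is a
small state machine, and the user may install a hook that replaces what
happens in a given state.

```python
# Hypothetical illustration of stub customization via per-state hooks.
# Network I/O is stubbed out as loopback so the example is self-contained.

class ClientStub:
    STATES = ("bind", "marshal", "send", "receive", "unmarshal")

    def __init__(self, hooks=None):
        self.hooks = hooks or {}          # state name -> replacement function

    def call(self, args):
        data = args
        for state in self.STATES:
            # A user hook, if present, replaces the default behavior of
            # this state -- the "customization" being debated above.
            step = self.hooks.get(state, getattr(self, "_" + state))
            data = step(data)
        return data

    # Default (uncustomized) behavior for each state.
    def _bind(self, d):      return d
    def _marshal(self, d):   return ",".join(str(x) for x in d).encode()
    def _send(self, d):      return d     # stand-in for writing to the wire
    def _receive(self, d):   return d     # loopback: "reply" echoes the call
    def _unmarshal(self, d): return tuple(int(x) for x in d.decode().split(","))

# A user-supplied auditing hook: record every outgoing message, then send.
audit_log = []
def auditing_send(d):
    audit_log.append(d)
    return d

plain = ClientStub()
audited = ClientStub(hooks={"send": auditing_send})
```

The same mechanism also shows the interoperability worry: a hook that
changed the marshalled bytes (rather than merely observing them, as the
auditing hook does) would produce messages a peer without the matching hook
could not parse.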
An "RPC protocol" is a specification of what messages must go back and forth
(over a network, typically above some particular transport or class of
transports) at what times to effect a remote procedure call -- i.e., to
convey to a server the bytes representing the input parameters of a call to
a procedure (identified in some particular well-defined way in the messages)
and to convey the bytes representing the output parameters back to the
caller. An "interface definition language" is a scheme for identifying
procedures, their parameters, and their mapping into the appropriate slots
(message bodies and headers) of an RPC protocol.

Unquestionably, a particular interface definition "specifies a protocol," in
that it tells you how to understand (at least some subset of) the bytes of a
network message. However, it is layered on top of the RPC protocol. There
are certain fundamental things an implementation of the entire RPC system
can "know" about messages that are part of an RPC exchange knowing ONLY the
RPC protocol and NOT the particular interface definition.

Since this whole approach seems so natural to me, I don't know how to make
an argument in its favor. First, it's the way people typically design
network protocol suites (i.e., strict layering). TCP is TCP and IP is IP;
they're different things, and people recognize that it's useful to keep them
separate. Second, we've found the approach useful -- it's let us do things
like NCS's Local Location Broker, which solves the "well-known port
problem" by transparently forwarding messages (intra-machine) to the
appropriate server.

As I understand it, the Netwise customization feature allows both the "RPC
protocol" and the protocol specified by a particular interface definition to
be manipulated in fairly arbitrary ways. First, I think that being able to
manipulate the RPC protocol itself is a bad idea (for the reasons I stated
above).
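The layering argument can be made concrete with a small sketch. The 8-byte
header format here is invented for illustration (it is not NCS's or anyone
else's wire format): the header stands in for "the RPC protocol," and a
forwarder in the spirit of the Local Location Broker can route a call
knowing ONLY that header, while the body's layout is known only to caller
and server via the interface definition.

```python
# Strict layering: a forwarder parses the RPC-protocol header and treats
# the interface-defined body as opaque bytes.
import struct

RPC_HDR = struct.Struct("!HHI")   # protocol version, procedure id, call id

def make_call(proc_id, call_id, body):
    """Caller side: prepend the RPC-protocol header to a marshalled body."""
    return RPC_HDR.pack(1, proc_id, call_id) + body

def forward(msg):
    """Forwarder: uses ONLY the RPC protocol, never the interface definition.
    It reads the header to decide where to route and passes the body
    through untouched."""
    _version, proc_id, _call_id = RPC_HDR.unpack_from(msg)
    return proc_id, msg

# Interface-definition layer: caller and server privately agree that the
# body for procedure 42 is two big-endian ints.
body = struct.pack("!ii", 6, 7)
msg = make_call(proc_id=42, call_id=1, body=body)
routed_to, delivered = forward(msg)
```

Note that `forward` would work identically for any interface definition:
that is the "fundamental things an implementation can know" from the RPC
protocol alone.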
Second, even if you restrict the customization to the data parts controlled
by the interface definition proper, I don't think that the arbitrary bits of
C code that can be placed in the interface definition qualify as a
"specification" (at least not a declarative specification) of the protocol,
unless you want to include some well-spec'd semantics of C as an appendix to
your protocol specifications.

>It's unclear why a home-brew reliable transport would be superior to
>using the reliable transport provided by the OS. It could be slightly
>faster in some environments, particularly if few packets are lost. Tuning
>in more hostile environments would probably be difficult, particularly
>since the application may not have access to the appropriate real-time
>tools that kernel code does. Finally, there is the added code space
>for implementing the reliable transport on top of the datagram transport.

While there is certainly some truth to your comments about the problems with
building a reliable transport in user space, I think most of them are
tractable and tolerable in light of the benefits you get from our approach.
Many of these are described in the paper of mine you cited in your message.

For example, the connection-based approach imposes some penalties in
connection setup and teardown. Obviously these costs are amortized if you
make a lot of calls over that connection. Unfortunately, we expect to see
lots of clients that need to make relatively few calls to a relatively large
number of servers in turn. In this situation, the connection-management
overhead can be high.

The connection-based approach is also fairly tricky, at least if you want to
do it right. By "right," I mean that the decisions about when and how
connections are opened and closed, and the worry about the system overhead
they represent, are kept invisible to the user of RPC.
By contrast, a user of Sun RPC (in its TCP/IP flavor) must know (assuming it
doesn't want to misuse system resources) that the "CLIENT *" it's created
actually represents a network connection that's not going to go away until
the client manually destroys the "CLIENT *". All this connection-management
stuff gets even trickier to do right on the server side, since a server
might have to deal with lots of clients -- perhaps enough that it simply
can't allow a client to keep a connection open for as long as the client
finds convenient. NCS 2.0 supports connection-based transports in what we
think of as the "right" way. (The work to do this is one of DEC's major
contributions to NCS 2.0.)

>In the NCS2.0 product, my understanding is that they will use both
>connection-oriented and datagram-oriented transports (Mike?). The
>next-generation Netwise tool will be offering datagram transports.
>However, we will be offering the raw datagram functionality to the
>application--message delivery will be unreliable and message size will
>be limited to the size permitted by the transport. This is what OSF
>means when they say we don't have uniform transport behavior.
>
>Why did we make this choice? Basically, we feel that a connection-oriented
>transport is the way to go. However, if an applications writer is willing
>to deal with the reliability and space constraints, then a raw datagram
>transport interface can be used. Assuming appropriate reliability
>characteristics for the datagram transport, we will have much lower
>overhead per packet than either flavour of NCS2.0. Finally, a datagram
>transport is necessary to support broadcast.

There would be no point in my trying to convince you that making the
application's choice of underlying transport affect the RPC semantics a bad
idea, so I won't bother.
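To make the contrast concrete, here is a minimal sketch (invented here, not
NCS 2.0's actual mechanism, and all names are hypothetical) of what
"invisible" connection management might look like: the RPC runtime reuses
open connections and evicts the least-recently-used one at a cap, so the
caller never handles a connection object the way a Sun RPC "CLIENT *" must
be handled.

```python
# Connection lifetime hidden inside the RPC runtime: reuse when possible,
# evict the least-recently-used connection when the cap is reached.
from collections import OrderedDict

class ConnCache:
    def __init__(self, connect, close, max_open=2):
        self.connect, self.close = connect, close
        self.max_open = max_open
        self.open = OrderedDict()              # server -> open connection

    def get(self, server):
        if server in self.open:
            self.open.move_to_end(server)      # mark most recently used
            return self.open[server]
        if len(self.open) >= self.max_open:
            _victim, conn = self.open.popitem(last=False)
            self.close(conn)                   # teardown hidden from caller
        conn = self.connect(server)            # setup hidden from caller
        self.open[server] = conn
        return conn

# Fake transport so the sketch is self-contained: record activity only.
opened, closed = [], []
cache = ConnCache(connect=lambda s: opened.append(s) or s,
                  close=lambda c: closed.append(c))
for server in ("a", "b", "a", "c"):            # "c" evicts the idle "b"
    cache.get(server)
```

The eviction step is exactly the decision the text says must be "kept
invisible to the user of RPC" -- and it is also the decision a server must
be able to make unilaterally when it has too many clients.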
However, it's really pretty hard for me to believe that your datagram-based
RPC is significantly (if at all) cheaper than NCS's, assuming you specify
(in NCS/NIDL) that the procedure you're defining is "idempotent" (to make
the comparison fair). All we (and presumably you) do is send one message out
and one message back. The overhead of calling the OS to do the message I/O
and the cost of the message I/O itself dominate.

>NCS2.0 will run on top of either a datagram or a connection-oriented
>transport, but you're really getting a connection-oriented service in
>either case.

Sounds good to me :-)

>Mike: what will NCS2.0 do WRT broadcast? Will it be available with both
>types of transport? If broadcast is not available under connection-oriented
>transports, won't this constitute non-uniform transport behavior?

Geez, cut me some slack. I will arrange that the manual has a skull and
crossbones around the section that describes the broadcast feature of RPC.
I'm not so fascist as to refuse to let people use broadcast in the datagram
RPC system just because the people who define connection-based transports
aren't clever or ambitious enough to figure out how to support broadcast in
their protocols.

I'm so glad that now there's yet another newsgroup (i.e., this one) I have
to read to make sure I don't miss something important :-) Pretty clever of
you-all to put the conversation someplace where I wouldn't look (as opposed
to, oh, comp.sys.apollo or comp.protocols.misc or something). Not clever
enough, though!

--
Nat Mishkin
Cooperative Object Computing Operation
Hewlett-Packard Company
mishkin@apollo.hp.com