[comp.os.research] why is net-trans bad?

glockner%cs@ucsd.edu (Alexander Glockner) (02/03/89)

   I'd like to thank those who responded to my request for help:

Richard Guy <guy@CS.UCLA.EDU>
David Keppel <pardo@june.cs.washington.edu>
Fred B. Schneider <fbs@gvax.cs.cornell.edu>
Walt Christmas <wpc@druin.ATT.COM>
James Sterbenz <jps@wucs1.wustl.edu>
Darrell Long <darrell@midgard.ucsc.edu>
and one other person whose name I lost....

   Since there were no (important) disagreements among the responses,
I've combined them into the summary below.
------------------------------------------------------------------
The panel at the Ninth SOSP,

  RESOLVED: That Network Transparency is a Bad Idea

consisted of:
   G. Popek,
   B. Randell,
   R. Needham,
   P. Weinberger,
   D. Clark,
   F. Schneider, Moderator

"Random comments":
Clark (I think) pointed out that if he wanted to take a train
somewhere, he didn't much care whether it travelled a block or
across town.  But if it needed to go across the state, "I'd want
to know so I could pack a lunch."

Many applications need to 'roll their own'.  For instance, in an
ATM application, if there were a net failure, rather than having a
transparent service block the transaction or give an 'I am sorry'
response, the ATM application would be better off following a
simple algorithm: "give them $100, but no more".  The bottom line
is that the definition of 'transparent' is different for different
applications.
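
To make that concrete, here is a toy sketch in C (all the names --
net_debit, OFFLINE_LIMIT, and so on -- are made up; no real ATM
works exactly this way).  The point is only that on a net failure
the application does its own recovery instead of letting a
transparent layer block or apologize:

    #include <stdio.h>

    #define OFFLINE_LIMIT 100      /* dollars allowed while the net is down */

    enum net_status { NET_OK, NET_DOWN };

    /* Stand-in for the remote debit call; a real ATM would talk to the bank. */
    static enum net_status net_debit(long account, int dollars)
    {
        (void) account;
        (void) dollars;
        return NET_DOWN;           /* simulate a network failure */
    }

    /* Returns the number of dollars actually dispensed. */
    static int withdraw(long account, int dollars)
    {
        if (net_debit(account, dollars) == NET_OK)
            return dollars;        /* the normal, "transparent" path */

        /* Application-specific recovery: no blocking, no "I am sorry". */
        return dollars < OFFLINE_LIMIT ? dollars : OFFLINE_LIMIT;
    }

    int main(void)
    {
        printf("dispensed $%d\n", withdraw(12345L, 250));
        return 0;
    }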

Towards the end of the debate, someone (Peter Denning?) recalled
the debates of two decades earlier over virtual memory, and
observed that all the pros and cons he had just heard were
essentially identical to those from the past.

----------------------------------------------------------

In general, network transparency (like most other kinds of
transparency) is very much a good idea.  At least as the default
mode of operation, transparency is a big win--the success of
Sun's NFS is a particular commercial case in point.  In the good
old days, working with FTP, or even DECnet, was an unquestionable
pain in the neck.  (And then there is/was UUCP...)

On the other hand, certain applications (like databases) that are
extremely I/O- and processing-intensive may want to bypass the
default transparent mode and do their own "smart" thing.  Clearly
you want to access ``cheap'' data whenever possible.  If you can't
tell what's cheap and what's expensive, you're in trouble.  You
may want to `chunk' expensive accesses, prefetch remote data, ...
(e.g., see "Operating System Support for Data Management", by
Stonebraker & Kumar, _Database Engineering_ vol. 5, pp. 169-76, 1986.)
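
Here is an equally toy sketch of the cheap-vs-expensive point
(again, every name -- is_remote, fetch_records, CHUNK, RECSIZE --
is hypothetical).  If the application can tell remote records from
local ones, it can prefetch remote data in big chunks rather than
paying one round trip per record:

    #include <stdio.h>
    #include <string.h>

    #define RECSIZE 64
    #define CHUNK   256             /* records fetched per remote round trip */

    static char cache[CHUNK * RECSIZE];
    static long cache_base = -1;    /* first record currently in the cache   */

    static int is_remote(long rec) { (void) rec; return 1; }   /* stub       */

    /* Stub that pretends to read n records starting at rec into buf.        */
    static void fetch_records(long rec, int n, char *buf)
    {
        int i;
        for (i = 0; i < n; i++)
            sprintf(buf + i * RECSIZE, "record %ld", rec + i);
    }

    /* Copy one record into out; remote (expensive) accesses come in chunks. */
    static void get_record(long rec, char out[RECSIZE])
    {
        if (!is_remote(rec)) {                    /* cheap: just go get it   */
            fetch_records(rec, 1, out);
            return;
        }
        if (cache_base < 0 || rec < cache_base || rec >= cache_base + CHUNK) {
            fetch_records(rec, CHUNK, cache);     /* prefetch a whole chunk  */
            cache_base = rec;
        }
        memcpy(out, cache + (rec - cache_base) * RECSIZE, RECSIZE);
    }

    int main(void)
    {
        char r[RECSIZE];
        get_record(1000L, r);
        printf("%s\n", r);
        return 0;
    }

The chunking just amortizes the round-trip cost over many records;
a transparent layer that hides which records are remote can't make
that trade-off for you.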

The basic question seems to be: how smart do you expect applications
to be regarding distributed-systems issues?  The fact is, most
applications don't know about distribution, don't want to know,
can't know (they were compiled too long ago, source unavailable or
lost), or would likely be programmed wrongly regarding such issues
in any case.  As noted above, it all sounds like the debate over
virtual memory, doesn't it?

------------------------------------------------------------------