[comp.protocols.iso] TCP/IP vs. OSI Performance

wcs@skep2.ATT.COM (Bill.Stewart.[ho95c]) (03/25/89)

Well, OSI seems to be the wave of the future :-), so we might as well
get used to it.  I'm working on several projects that will be using
OSI-based protocols, and I'm concerned about what the performance of
the system will be like.  OSI has a reputation for dogginess, but is it
really justified, or is it just that most implementations have been new
enough not to be highly tuned yet?

One of the applications we're looking at will be using the misnamed
OSI IP internet protocol (ISO 8473) over X.25.  We know how the current
DoD IP based system performs - will an ISO-based system be about the
same speed, or slower, or faster?  

			Thanks;   Bill Stewart
-- 
# Bill Stewart, AT&T Bell Labs 2G218 Holmdel NJ 201-949-0705 ho95c.att.com!wcs
# Washington, DC.  Raining.  Long, cold, heavy rain.  Been raining for days.
# I was here last year in the spring.  It was raining like this then, too.

joel@arizona.edu (Joel M. Snyder) (03/25/89)

8473 should perform at about the same speed as IP.  I implemented ISO IP
by editing DoD IP code and doing a bit of twiddling of the field
positions.  It turns out that ISO IP is more efficient in its use of
memory; you know at the first fragment the length of the entire message,
so you can preallocate.  Other than that, the protocols are almost
identical.  Of course, some of this depends on whether or not you've
been computing your IP checksum  (and your TCP checksum :-) :-) ).
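
For anyone counting cycles, the checksum being joked about is the 16-bit
ones'-complement sum that DoD IP and TCP both use.  (ISO 8473 uses a
different, Fletcher-style checksum, and a value of zero there means it
wasn't computed at all -- hence the smileys.)  A minimal sketch of the
DoD version, not taken from anyone's actual code:

    /* 16-bit ones'-complement checksum over a header, byte-order
     * independent; this is the per-packet cost being joked about. */
    #include <stdio.h>

    unsigned short cksum(const unsigned char *buf, int len)
    {
        unsigned long sum = 0;

        while (len > 1) {                   /* sum 16-bit words */
            sum += (buf[0] << 8) | buf[1];
            buf += 2;
            len -= 2;
        }
        if (len == 1)                       /* pad a trailing odd byte */
            sum += buf[0] << 8;
        while (sum >> 16)                   /* fold the carries back in */
            sum = (sum & 0xffff) + (sum >> 16);
        return (unsigned short) ~sum;
    }

    int main(void)
    {
        unsigned char hdr[20] = { 0x45, 0x00, 0x00, 0x54 };  /* rest zero */
        printf("checksum = 0x%04x\n", cksum(hdr, sizeof hdr));
        return 0;
    }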

Joel Snyder
U Arizona MIS Dep't
ANSI X3S3.7

karn@ka9q.bellcore.com (Phil Karn) (03/26/89)

>It turns out that ISO IP is more efficient in its use of
>memory; you know at the first fragment the length of the entire message,
>so you can preallocate...

I don't quite understand this. If you represent packets as linked lists of
dynamically allocated buffers, then there is no need with either DoD IP or
ISO 8473 to preallocate memory when reassembling fragments. You just keep
the pieces in a sorted linked list, with "fragment descriptors" telling you
where the holes are. As each fragment comes in, you trim off any overlap and
insert (link) it into the proper place.  When all the holes are filled in
and the last fragment (MF=0) has been received, you're done. I suspect
preallocating space means you intend to do memory-to-memory copying. It may
be easier to code, but it's best avoided for performance reasons.
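
For the curious, the bookkeeping amounts to something like the sketch
below.  This is written from scratch for illustration (it is not the
KA9Q code), and it keeps only the fragment descriptors; a real
implementation would hang the received data buffer off each descriptor
rather than copying anything.

    #include <stdio.h>
    #include <stdlib.h>

    struct frag {                       /* one received fragment */
        unsigned off, len;
        struct frag *next;              /* list kept sorted by offset */
    };

    struct reasm {                      /* one datagram being reassembled */
        struct frag *head;
        unsigned total;                 /* known once the MF=0 fragment arrives */
    };

    /* Insert a fragment, trimming any overlap with what we already hold. */
    void frag_add(struct reasm *r, unsigned off, unsigned len, int more)
    {
        struct frag **pp = &r->head, *p, *f;

        if (!more)                              /* last fragment fixes the length */
            r->total = off + len;

        while ((p = *pp) != NULL && p->off <= off) {
            if (p->off + p->len > off) {        /* overlaps a predecessor */
                unsigned cut = p->off + p->len - off;
                if (cut >= len)
                    return;                     /* nothing new in this one */
                off += cut;
                len -= cut;
            }
            pp = &p->next;
        }
        while ((p = *pp) != NULL && p->off + p->len <= off + len) {
            *pp = p->next;                      /* successor fully covered */
            free(p);
        }
        if ((p = *pp) != NULL && p->off < off + len)
            len = p->off - off;                 /* trim our tail instead */

        if ((f = malloc(sizeof *f)) == NULL)
            return;                             /* out of memory: drop it */
        f->off = off;
        f->len = len;
        f->next = *pp;
        *pp = f;
    }

    /* Done when coverage is contiguous from 0 up to the known total. */
    int frag_done(struct reasm *r)
    {
        unsigned end = 0;
        struct frag *p;

        if (r->total == 0)
            return 0;
        for (p = r->head; p != NULL && p->off == end; p = p->next)
            end = p->off + p->len;
        return end == r->total;
    }

    int main(void)
    {
        struct reasm r = { NULL, 0 };

        frag_add(&r, 1024, 512, 0);     /* the last fragment arrives first */
        frag_add(&r, 0, 1024, 1);
        printf("complete: %d\n", frag_done(&r));
        return 0;
    }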

It is true that knowing how long an entire datagram is could save you some
CPU time if you're so memory-starved that you won't be able to reassemble
it; you could toss all of the fragments as soon as you receive them instead
of running out of memory halfway through reassembly and having to toss the
partially reassembled datagram.  I don't see this as a big advantage,
though, since any system that runs out of memory often enough for this to be
a significant performance factor is going to have many other, much more
serious problems.

One important factor re ISO 8473 performance that hasn't been mentioned yet
is its fetish for variable length fields. These are inherently harder to
deal with than DoD IP's nice fixed-length fields (IP options are rare enough
in most environments that they can be largely ignored as a performance
issue).  It's a *lot* easier to deal with 32-bit IP addresses on machines
that support native 32-bit integers than it will be to handle the monster
variable length byte strings that make up ISO addresses. Van Jacobson once
commented to me that much of his header prediction work is likely to be
inapplicable to ISO.
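
The difference shows up in the innermost loops.  Something like the
following (types and names are mine, not from any particular
implementation) is what the forwarding and connection-lookup paths end
up doing per packet:

    #include <string.h>

    typedef unsigned long ipaddr;        /* 32 bits on the machines at issue */

    struct nsap {                        /* variable length, up to 20 octets */
        unsigned char len;
        unsigned char octets[20];
    };

    int ip_equal(ipaddr a, ipaddr b)
    {
        return a == b;                   /* one native compare, fits a register */
    }

    int nsap_equal(const struct nsap *a, const struct nsap *b)
    {
        return a->len == b->len &&       /* length check plus a byte loop, */
               memcmp(a->octets, b->octets, a->len) == 0;   /* every time */
    }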

Yet another excellent reason to question whether the whole ISO "trip" is
necessary -- as if the many reasons already stated aren't enough...

Phil

jms@mis.arizona.edu (Joel M. Snyder) (03/27/89)

Phil Karn suggests that knowing the length of an IP message ahead of
time is immaterial given that you have lots of memory to play with.
As difficult as it is to quibble with the author of a major IP implementation,
I think that it's important to point out that *the* major defect of
the TCP/IP protocol suite falls in IP---its routing (rather, the lack
thereof) and congestion-control techniques.  Having the additional
information that a mongo huge packet is coming your way doesn't 
contribute to a nice stateless system, but it does allow you to begin
to gather the memory resources you will need early on, AND it allows
you to begin to make routing decisions based on that information.  For
example, an intelligent IP might well open a connection to a different
IP based on the MTU.

More importantly, Phil's second argument is what I consider the best
argument FOR ISO protocols.  The job of interpreting NSAP addresses
is difficult, yes, much more difficult than masking off the top two or
three bits of an RFC 791 address, but you get to address REAL end-systems
in REAL networks, not just one small Internet.  The parochial view that
a network should have no options, no room for expansion, no concept of
connection-oriented service works beautifully when you have 56K and
T1 lines floating around the place and you've got a rational addressing
authority, and you're trying to run a research network.  But the requirements
for commercial systems, and interworking between national networks, commercial
networks, and research systems, mean that expansion upon RFC 791
is really required.  Addressing and Routing in the Internet are a crock
---having an address imply a route means that IP uses up a lot of
bandwidth doing extra and unnecessary work.

I will be the last person to believe in ISO 8072/8073 (usually called TP4,
or ISO TP), but I think that 8473 contains well-reasoned and useful 
extensions to RFC 791 that can make the global Internet work.

Joel Snyder (U Arizona MIS Dep't)

lear@NET.BIO.NET (Eliot Lear) (03/27/89)

This is a minor technical correction (it became apparent while
discussing your article with someone).  An IP address does not imply a
route.  Given an address and nothing more, one cannot determine a
route.  Nevertheless, in order for an Internet to be well connected,
routing information about an address must be kept, if nowhere else, at
the entrance points to an internet (in the simplest case).  I see
things from the DARPA Internet point of view.  How is routing done
with ISO IP?

From my little introduction to ISO protocols (Stallings), it sounds to
me like routing would be handled very much like it is handled
on the Internet.  Stallings says (Volume I p. 169):

	Network service users cannot derive routing information from
	NSAP addresses.  They cannot control the route chosen by the
	network service by the choice of the synonym and they cannot
	deduce the route taken by an incoming NSDU from the NSAP
	address.  However, as pointed out by CCITT document [X.213],
	NSAP addresses should be constructed, when possible, in such a
>>>	way as to facilitate routing through an internet.  That is,	<<<
>>>	the network service providers, especially gateways, may be	<<<
>>>	able to take advantage of the address structure to achieve	<<<
>>>	economical processing of routing aspects.			<<<

To me this sounds very much like something that could have been stated
in RFC791.

hedrick@geneva.rutgers.edu (Charles Hedrick) (03/27/89)

Technically you're right that there is no difference between ISO and
IP as to the relationship between addresses and routing.  But
practically there is.  While IP may not specify an official routing
strategy based on addresses, there is a de facto one, and given the
addressing structure, it's hard to see any practical alternative.  We
have basically two-level hierarchical routes, with a table indexed by
network number doing routing between institutions and a table indexed
by subnet doing routing within institutions.  The network number
allocation procedures guarantee that there isn't any more structure in
the network numbers that we could use.  ISO has the possibility of
more structure.  It allows longer addresses, and it allows for
multiple address allocation authorities.  This allows for more levels
of hierarchy, and it supplies a built-in top level to the hierarchy
(authority responsible for the address format) that is not present in
IP.  Thus in theory we can hope that gateways won't have to have
routing tables that list the entire world.  There is a cost for this,
however, which is that addresses are harder to parse, and that the
structure we are trying to use for routing is different for different
addresses.  I don't think we'll know which approach is better for a
number of years.  Basically I expect the Internet to develop routing
technology that uses knowledge outside the address.  Until we get the
equivalent of the Internet using ISO technology, we simply won't know
whether the flexibility inherent in the ISO addresses is worth the
overhead of handling variable address formats.
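
To make the de facto structure concrete: the lookup most IP
implementations effectively perform is along the lines of the sketch
below.  The table contents, masks, and gateway names are invented for
illustration.

    #include <stdio.h>

    struct rt { unsigned long key; const char *gateway; };

    static struct rt net_table[] = {        /* between institutions */
        { 0x80d80000UL /* 128.216 */, "jvnc-nsfnet-gw" },
    };
    static struct rt subnet_table[] = {     /* inside our own class B net */
        { 0x80c40300UL /* 128.196.3 */, "local-subnet-3-gw" },
    };

    #define MY_NET   0x80c40000UL           /* 128.196.0.0 */
    #define MY_MASK  0xffffff00UL           /* locally chosen subnet mask */

    static unsigned long ip_net(unsigned long a)    /* classful network number */
    {
        if ((a >> 31) == 0) return a & 0xff000000UL;    /* class A */
        if ((a >> 30) == 2) return a & 0xffff0000UL;    /* class B */
        return a & 0xffffff00UL;                        /* class C */
    }

    static const char *lookup(struct rt *t, int n, unsigned long key)
    {
        int i;

        for (i = 0; i < n; i++)
            if (t[i].key == key)
                return t[i].gateway;
        return "default-gw";
    }

    static const char *route(unsigned long dst)
    {
        if (ip_net(dst) == MY_NET)          /* our institution: subnet table */
            return lookup(subnet_table, 1, dst & MY_MASK);
        return lookup(net_table, 1, ip_net(dst));   /* everyone else: net table */
    }

    int main(void)
    {
        printf("%s\n", route(0x80c4030cUL));    /* 128.196.3.12: local subnet */
        printf("%s\n", route(0x80d80101UL));    /* 128.216.1.1: another campus */
        return 0;
    }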

There are several different ways of getting past the current Internet
routing problem.  (The problem being that at the moment every gateway
has to compute routes to every network in the Internet.)  The most
promising seem to involve additional routing technology other than
the conventional IP route table.  The following is a straw man that
has the same flavor as what seems to be floating around in the IP
community.  Suppose we no longer keep a complete routing table,
i.e. one that lists every possible network.  Instead we have several
levels of hierarchy.  E.g. the world is divided into USnet, Euronet,
and Japannet.  We don't care about routing within Euronet and
Japannet.  USnet is divided into NSFnet, NASAnet, and ARPAnet.
We are a member of NSFnet, so we don't bother to keep track of the
others.  etc.  The routing table at Rutgers looks sort of like
this:

   Euronet -> jvnc-euro-gw
   Japannet -> sri-japan-gw
   NASAnet -> ames-nasa-gw
   ARPAnet -> mills-1200baud-fuzzball-link
   nsfnet -> jvnc-nsfnet-gw  [default used for NSFnet nets we don't know]
   nysernet -> columbia-gw

In addition to this we have a conventional routing table based on
specific network numbers.  But that's just a cache.  When we get a
packet, we look for its network number in the conventional routing
table.  If it's there, we route it.  Otherwise, we go through the
routing equivalent of the domain name system.  If the root route
server tells us that the network in question is part of euronet, we
stop there and just send it to a European gateway.  If it's part of
USnet, we ask the USnet route server for more information, etc.  What
gets circulated in routing updates within NSFnet is then routes for
these higher-level entities, not for specific networks.  The current
NSFnet backbone routing is in effect a one-level version of this, with
fixed tables giving the membership of various networks in the
higher-level entities, rather than using route servers.
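
In code, the straw man comes down to something like this; the gateway
names are from the table above, while the route-server names and the
query logic are invented stand-ins for what would really be a protocol
exchange.

    #include <stdio.h>
    #include <string.h>

    struct answer {
        const char *gateway;        /* non-NULL: send it here and stop */
        const char *next_server;    /* non-NULL: ask this server for detail */
    };

    /* Stand-in for querying a route server. */
    struct answer query(const char *server, unsigned long net)
    {
        struct answer a = { NULL, NULL };

        if (strcmp(server, "root-route-server") == 0) {
            if ((net >> 24) == 128)             /* pretend: some USnet network */
                a.next_server = "usnet-route-server";
            else
                a.gateway = "jvnc-euro-gw";     /* "part of Euronet, stop here" */
        } else {                                /* the USnet route server */
            a.gateway = "jvnc-nsfnet-gw";
        }
        return a;
    }

    const char *resolve(unsigned long net)
    {
        const char *server = "root-route-server";
        struct answer a;

        /* a lookup in the conventional route table (the cache) goes here;
         * on a miss we walk down the hierarchy of route servers */
        for (;;) {
            a = query(server, net);
            if (a.gateway != NULL)
                return a.gateway;       /* and cache it against 'net' */
            server = a.next_server;
        }
    }

    int main(void)
    {
        printf("%s\n", resolve(0x80d80000UL));  /* a U.S. net: two queries deep */
        printf("%s\n", resolve(0x82000000UL));  /* elsewhere: stops at the root */
        return 0;
    }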

This sort of scheme, or various other similar suggestions, uses "route
servers" as an alternative to additional structure in the addresses.
My guess is that in the long run depending entirely upon address
structure is going to turn out not to be enough, and we're going to
have to go to a scheme like this.  But we probably won't know for sure
what the right tradeoffs are until 10 years or so from now, when we
have very large international networks that use routing technology
more complex than the current IP technology.

jh@tut.fi (Juha Heinänen) (03/27/89)

In article <Mar.26.21.09.10.1989.8398@geneva.rutgers.edu> hedrick@geneva.rutgers.edu (Charles Hedrick) writes:

   We are a member of NSFnet, so we don't bother to keep track of the
   others.  etc.  The routing table at Rutgers looks sort of like
   this:

      Euronet -> jvnc-euro-gw
      Japannet -> sri-japan-gw
      NASAnet -> ames-nasa-gw
      ARPAnet -> mills-1200baud-fuzzball-link
      nsfnet -> jvnc-nsfnet-gw  [default used for NSFnet nets we don't know]
      nysernet -> columbia-gw

The principle of your proposal sounds ok.  However, for backup
etc. purposes it has to allow more than one gateway between two
geographical nets, and preferences in using those gateways for various
target networks.  Maybe this is what you meant by "asking more
information" from the known gateway.
--
--	Juha Heinanen, Tampere Univ. of Technology, Finland
	jh@tut.fi (Internet), tut!jh (UUCP), jh@tut (Bitnet)

jms@mis.arizona.edu (Joel M. Snyder) (03/28/89)

In article <Mar.26.16.30.50.1989.18182@NET.BIO.NET> lear@NET.BIO.NET (Eliot Lear) writes:
>An IP address does not imply a
>route.  Given an address and nothing more, one cannot determine a
>route.  Never-the-less, in order for an Internet to be well connected,
>routing information about an address must be kept, if nowhere else, at
>the entrance points to an internet (in the simplest case).  I see
>things from the DARPA Internet point of view.  How is routing done
>with ISO IP?
>

In IP, an address does imply a route.  For example,  the address of
a hypothetical CPU is 128.196.3.12.  This is a class B address, so
the Internet knows that it has to know where 128.196.x.x is, and
the routing is hierarchically dealt with from there.
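
Concretely: the class, and therefore how much of the address the rest
of the Internet has to route on, is read straight off the top bits.
An illustrative fragment (not anybody's production code):

    #include <stdio.h>

    int main(void)
    {
        /* 128.196.3.12 */
        unsigned long a = (128UL << 24) | (196UL << 16) | (3UL << 8) | 12;

        if ((a >> 31) == 0)
            printf("class A, net %lu.x.x.x\n", a >> 24);
        else if ((a >> 30) == 2)
            printf("class B, net %lu.%lu.x.x\n", a >> 24, (a >> 16) & 0xff);
        else if ((a >> 29) == 6)
            printf("class C, net %lu.%lu.%lu.x\n",
                   a >> 24, (a >> 16) & 0xff, (a >> 8) & 0xff);
        return 0;
    }

Run on 128.196.3.12 this prints "class B, net 128.196.x.x", which is
exactly the part the rest of the Internet routes on.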

If I move that CPU to some other campus, the address must change, because
the address implies a route (note that the NAME doesn't have to
change; just the address).  

More subtly, given a campus 128.196.x.x, where there are multiple
routes from that campus to the "world,"  the IP addressing/routing
schemes require that one manually insert any routing information
which isn't "fewest hops" in nature, and even then, the autonomous
system aspect of IP doesn't necessarily allow that minimal information
to flow from a campus to some other site.  As an example, again,
given that Arizona is connected to Purdue, Utah, and JVNC, it is
pretty obvious that Purdue knows not to go through either Utah or
JVNC to get to Arizona.  However, the next stop from Purdue, let's
say Indiana (hypothetically), has to manually be taught the Internet
topology, and can't learn the best way to get from here to there.
In any case, it's the addressing scheme which restricts the
routing scheme.

Some regional networks (like NSFNET) use a scheme where the entire
topology of the network is learned by routers.  This is OK as long
as you have enough memory to keep your entire topology in core, AND
as long as you're willing to not allow any granularity of routing 
beyond that which IP addresses give you. 

In both instances, the simplifying assumption which makes the
Internet work (and it's a miracle that it does) is that IP addresses
imply hierarchical routing. 

(I reference Paul Tsuchiya's work "Landmark Routing" for an excellent
solution to this and my own expansion on Paul's work, "Traveler Routing")

Finally, to answer your question, ISO IP doesn't imply routing; the
OSI model divides the network layer into a variety of sublayers, and
routing and IP are completely separate.  The framework for OSI-style
routing has been created, but the actual protocols and algorithms
are still a matter for discussion.

Joel Snyder
U Arizona MIS Dep't

mogul@decwrl.dec.com (Jeffrey Mogul) (03/29/89)

In article <14957@bellcore.bellcore.com> karn@ka9q.bellcore.com (Phil Karn) writes:
>>It turns out that ISO IP is more efficient in its use of
>>memory; you know at the first fragment the length of the entire message,
>>so you can preallocate...
>
>I don't quite understand this. If you represent packets as linked lists of
>dynamically allocated buffers, then there is no need with either DoD IP or
>ISO 8473 to preallocate memory when reassembling fragments.
> [...]
>It is true that knowing how long an entire datagram is could save you some
>CPU time if you're so memory-starved that you won't be able to reassemble
>it; you could toss all of the fragments as soon as you receive them instead
>of running out of memory halfway through reassembly and having to toss the
>partially reassembled datagram.  I don't see this as a big advantage,
>though, since any system that runs out of memory often enough for this to be
>a signficant performance factor is going to have many other, much more
>serious problems.

As Chris Kent and I wrote (after Dave Mills raised the issue) in our
paper "Fragmentation Considered Harmful" (SIGCOMM '87), one problem
is that because IP doesn't tell you how much space is needed, you
can run into pseudo-deadlock when several large packets are being
reassembled.

If the rate of incoming fragmented packets is high enough, it doesn't
matter how much memory you have, because the situation is unstable:
once you begin to run out of memory, you're likely to see new fragments
arrive faster than the old ones time out.
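
This is also where knowing the total length up front actually buys
something: the total length rides in the segmentation part of an 8473
fragment, so a reassembler can refuse a datagram it will never be able
to finish instead of letting its fragments nibble the pool away and
wedge everything, which is the pseudo-deadlock above.  A toy sketch
(the numbers and names are mine):

    #include <stdio.h>

    static unsigned long reasm_pool = 16384;    /* bytes left for reassembly */

    /* Returns 1 if we commit to reassembling this datagram, 0 if we drop
     * it (and will drop its later fragments) up front. */
    int admit(unsigned long total_len)
    {
        if (total_len > reasm_pool)
            return 0;                   /* would wedge us: refuse cleanly */
        reasm_pool -= total_len;        /* reserve; give back on completion
                                         * or reassembly timeout */
        return 1;
    }

    int main(void)
    {
        printf("%d\n", admit(9000));    /* fits: reserved */
        printf("%d\n", admit(9000));    /* doesn't: refused, pool intact */
        return 0;
    }

With DoD IP the total is only known when the MF=0 fragment shows up,
so the best you can do is guess.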

The real problem, of course, is that IP-style internetwork fragmentation
is generally a bad idea ("Harmful", in fact).  Issues of "efficiency
in the use of memory" are second-order compared to efficient mechanisms
for avoiding fragmentation.  I certainly agree with Phil that variable-
length fields are a false economy.

-Jeff

martillo@cpoint.UUCP (Joachim Carlo Santos Martillo) (04/04/89)

In article <9910@megaron.arizona.edu> jms@mis.arizona.edu (Joel M. Snyder) writes:
>In article <Mar.26.16.30.50.1989.18182@NET.BIO.NET> lear@NET.BIO.NET (Eliot Lear) writes:
>>An IP address does not imply a
>>route.  Given an address and nothing more, one cannot determine a
>>route.  Never-the-less, in order for an Internet to be well connected,
>>routing information about an address must be kept, if nowhere else, at
>>the entrance points to an internet (in the simplest case).  I see
>>things from the DARPA Internet point of view.  How is routing done
>>with ISO IP?

>In IP, an address does imply a route.  For example,  the address of
>a hypothetical CPU is 128.196.3.12.  This is a class B address, so
>the Internet knows that it has to know where 128.196.x.x is, and
>the routing is hierarchically dealt with from there.

I understood the original routing scheme of the Arpa Internet to be
flat-routing between networks.  Later, as a need developed for more
structure within a local administration (in particular how to handle
multiple physical networks within one class B network), subnetting
techniques were introduced as a kludge to permit multiple physical
networks within one administrative network.  I guess this is hierarchical in
that routing is handled first among networks and then is handled
within networks.  Still this was not part of the original design, and
subnetting is hardly obligatory. If some network authority wanted to
develop their own non-hierarchical system, nothing stops them, which
seems utterly reasonable: I want complete and total control over how
packets are routed within my network, and for security I don't want
anyone outside of my network to have any idea how I might be doing
it.

>If I move that CPU to some other campus, the address must change, because
>the address implies a route (note that the NAME doesn't have to
>change; just the address).

Which does not seem horrible, but it is wrong within the domain system in
that if I take my portable PC (lower_slobovia) and plug it into an MIT
LCS network, its name is lower_slobovia.LCS.MIT.EDU.  When I use my
network at clearpoint its name is
lower_slobovia.RESEARCH.CLEARPOINT.COM while at my home office its
name is lower_slobovia.AOR.COM.  By the way, I expect to have so
many portable PCs within the next year or so that the only real
solution is dynamic assignment of IP addresses, so that I would
expect the addresses of the machines to change every time I turn
them on.  I really have no idea what Snyder is worrying about here.

>More subtly, given a campus 128.196.x.x, where there are multiple
>routes from that campus to the "world,"  the IP addressing/routing
>schemes require that one manually insert any routing information
>which isn't "fewest hops" in nature, and even then, the autonomous
>system aspect of IP doesn't necessarily allow that minimal information
>to flow from a campus to some other site.  As an example, again,
>given that Arizona is connected to Purdue, Utah, and JVNC, it is
>pretty obvious that Purdue knows not to go through either Utah or
>JVNC to get to Arizona.  However, the next stop from Purdue, let's
>say Indiana (hypothetically), has to manually be taught the Internet
>topology, and can't learn the best way to get from here to there.

So the gateways at Indiana don't talk EGP and they don't listen to
ICMP redirects.  That seems to be their problem not the addressing
scheme of IP.

>In any case, it's the addressing scheme which restricts the
>routing scheme.

Now Snyder seems to be implying that the absence of routing
information and not the implied routing in the IP address is the
problem?

>Some regional networks (like NSFNET) use a scheme where the entire
>topology of the network is learned by routers.  

Now I am really puzzled.  I could be wrong, but I thought NSFNET was a
high-speed replacement for the ARPANET, in which clusters of RTPCs
connected by 16 Mb/s token rings acted as packet switches and replaced
IMPs (PSNs), and in which the 56 kb/s leased lines were replaced by T1
lines.  There are issues of routing here between communications-subnet
switches, but this is orthogonal to internetwork routing and should be
totally transparent to hosts connected to the ARPA Internet, of which
NSFNET is one piece.

>						 This is OK as long
>as you have enough memory to keep your entire topology in core, AND
>as long as you're willing to not allow any granularity of routing 
>beyond that which IP addresses give you. 

I thought the PCRT clusters kept topology of interswitch connectivity
in memory and each cluster knew what hosts were directly connected to
it but that they did not necessarily know the whole network topology.

>In both instances, the simplifying assumption which makes the
>Internet work (and it's a miracle that it does) is that IP addresses
>imply hierarchical routing. 

I thought the simplifying assumption was that gateways only
worry about routing between networks.  Since this reduces the
routing problem by several orders of magnitude in difficulty
it is no miracle at all that this works.  Is there a networking
system out there which works better?

>(I reference Paul Tsuchiya's work "Landmark Routing" for an excellent
>solution to this and my own expansion on Paul's work, "Traveler Routing")

>Finally, to answer your question, ISO IP doesn't imply routing; the
>OSI model divides the network layer into a variety of sublayers, and
>routing and IP are completely separate.  The framework for OSI-style
>routing has been created, but the actual protocols and algorithms
>are still a matter for discussion.

I think you are saying that ISO IP addresses don't identify to which
network within a catenet a host is connected.  I can think of some
security reasons for this but lack of identification might actually
create some security holes.  But the bottom line is that IP means
internetwork protocol, and somewhere in the sublayers the ISO IP
address has to be converted to a (network, host id) identifier (i.e.,
an internetwork address), so all you have really done is interpose yet
another layer of CPU-cycle-burning software in any network
communication.  This seems to be a useless partitioning of
the problem into an internetwork datagram piece and an internetwork
addressing piece.

>Joel Snyder
>U Arizona MIS Dep't

In any case, since the ISO CLNP is not obligatory, this discussion is
all relatively academic (as is the discussion of fragmentation).  The
major characteristic of an internetwork protocol is that it must be
obligatory.  ISO IP is not.  UK GOSIP excludes the use of ISO IP so
that if I am at BTR-USA and trying to connect from my machine
(conformant to US GOSIP) to a machine at BTR-UK via OSI, I won't be
able to succeed.  But if both machines implement TCP/IP and my
company has ARPA Internet connectivity I have no problem.

Of course, my company spent less than $1000 per machine for the
TCP/IP implementation while it spent somewhere between $50,000 and
$500,000 per machine for the OSI implementation.  Seems like
a big waste of money.  This is even more distressing because
now state governments as well as the federal government are mandating
next-to-useless OSI implementations for new machines.  Well, I pay
for that major giveaway to companies like Unisys, and I don't
like supporting this sort of welfare.  In a period of economic
retrenchment there are a lot better uses for that federal money
which comes from the US taxpayer.

Anyway, I updated the FTAM section of my document and here it
is if you are interested:

                                B. FTAM is Dangerous

            The  "greater   richness"  of  FTAM,  specified  in  ISO/DIS
            8571/1,2,3,4 (Information Processing Systems -- Open Systems
            Interconnection --  File Transfer,  Access and Management --
            Part 1:  General Introduction, Part 2: The Virtual Filestore
            Definition, Part 3:  The File Service Definition and Part 4:
            The File Protocol Specification) seems to lie in the ability
            to transmit  single records  and in  the ability  to restart
            aborted file transfer sessions through the use of obligatory
            checkpointing (ISO/DIS  8571/1 C.2.1 & C.2.4).  Transmission
            of single  records seems  fairly useless in the general case
            since operating  systems like Unix and DOS do not base their
            file systems  on records  while the  records of file systems
            like  those   of  Primos  and  VMS    have  no  relationship
            whatsoever to one another.

            The  obligatory  ability  to  restart  aborted  simple  file
            transfers,  which   actually  is   available  in   the   FTP
            specification as  an option  for block  or  compressed  file
            transfers, is  more dangerous than helpful.  If the transfer
            were aborted  in an  OSI network, it could have been aborted
            because one  or both  of the  end hosts died or because some
            piece  of  the  network  died.    If  the  network  died,  a
            checkpointed file  transfer can probably be restarted.  If a
            host died  on the  other hand,  it may  have gradually  gone
            insane and  the checkpoints may be useless.  The checkpoints
            could only  be guaranteed  if end  hosts have  special self-
            diagnosing hardware (which is expensive).  In the absence of
            special hardware  and ways of determining exactly why a file
            transfer aborted,  the file  transfer must be restarted from
            the beginning.   Even  with the greater richness of FTAM, it
            is not  clear to me that a file could be transferred by FTAM
            from IBM PC A to a Prime Series 50 to IBM PC B in such a way
            that the  file on PC A and on PC B could be guaranteed to be
            identical because  of the  16-bit word  orientation  of  the
            Prime file system.

            Including single record or partial file transfer in the
            remote transfer utility seems to be a good example of bad
            partitioning of the problem.  This capability really belongs
            in a  separate network  file system  or  remote  transaction
            processing system.  A network file system should be separate
            from the  remote file  transfer system or remote transaction
            processing because  the  major  issues  in  fault  recovery,
            security, performance,  data  encoding  translation,  object
            location and  object interpretation  are different  in major
            ways for each of the three cases.  Applying ISO/DIS 8571/1 to
            remote file  service for  a  diskless  workstation  will  be
            interesting to  say  the  least.  Section  D.4  specifically

            states "[s]ince  the LAN  communications are  not liable  to
            error,  the  recovery  procedures  are  not  needed."    The
            document  actually   considers  the   case  of   the  client
            workstation failing but ignores failures of the file server.
            Since the  virtual filestore  has state,  probably  all  the
            client systems become completely "hosed."

            The goal of the virtual file store itself, as defined by ISO
            in ISO/DIS  8571/2, is  unclear, probably  constricting  and
            probably performance limiting because of the way objects are
            typed within  the virtual  file store.   A  Unix device is a
            file within  the Unix computational model.  Even if a remote
            device file  made sense within the ISO virtual filestore, it
            is totally  unclear what might happen when a process on some
            client machine  writes on  a device file in the virtual file
            store.   If the  process writes  the console device which is
            instantiated in  the remote  file system,  should  the  data
            appear on  the local  console or  on the remote console?  If
            the endpoint  machine is  a diskless  Unix workstation,  the
            data better  appear on  the local  workstation, but  such  a
            necessity raises  serious naming  and location issues, which
            seem intractable  within  the  framework  of  FTAM,  in  the
            mapping of  objects in  the virtual file store to objects in
            real file  stores or  real file  systems.     The  issue  of
            diskless workstations  is simply inadequately handled in the
            FTAM documents,  and really  belongs in  a separate  set  of
            standards.   Unless the  class of  diskless workstations  is
            extremely restricted, a remote file system  without a remote
            paging or  swapping system,  which  are  often  remote  file
            system or remote disk system issues, is fairly useless.

            The main  problem with  the virtual  filestore is that it is
            not a  remote file  system.   In a  genuine resource sharing
            environment, a Unix machine should be able to serve as a
            remote file server for a VMS machine, and a VMS machine
            should be able to serve as a remote file server for a Unix
            machine.  A fairly simple, robust, stateless protocol which
            used a general handle into a data stream would be the proper
            approach.  A VMS application might wish to access some
            record in  a file  which appears  in  a  structured  record-
            oriented VMS  file.   The remote  file system  client module
            running on  the VMS  system would  translate  this  into  an
            appropriate request  for data  which the  server on the Unix
            machine would  get from  the unstructured Unix file.  When a
            Unix application wishes to access data contained in a data
            file actually residing on a VMS server, the Unix application
            would view  this file  as a  sequence of  fixed or  variable
            length records  contained in an apparently unstructured Unix
            file even  though the actual VMS file is a structured record
            oriented  file.     The  Unix  client  code  would  send  an
            appropriate request,  which the VMS remote file server would
            extract as a record from the VMS file system and transmit as
            a stream  of bytes  to the  client Unix  machine.  With  the
            fairly strong  file typing  employed in the OSI virtual file
            store,  such   client-server  relationships   would  not  be
            possible unless the remote file system were implemented with
            the virtual  filestore as  the virtual  server machine.  But
            such a remote file system architecture does not simplify the
            problem  at   all  because   the  software   must  still  be
            implemented to  make files  from real file systems available
            to the  virtual filestore.  FTAM only adds one more layer of
            performance-killing software complexity.
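
            As an illustration only (the field names and sizes below are
            invented, not taken from any standard), the kind of stateless
            request meant here is roughly:

                struct read_request {
                    unsigned char handle[32];  /* opaque file handle issued
                                                * by the server */
                    unsigned long offset;      /* byte offset into the
                                                * apparent data stream */
                    unsigned long count;       /* number of bytes wanted */
                };

            The client names the file by an opaque handle and asks for a
            byte range; each server maps that onto whatever its real file
            system keeps (records, words, or byte streams), and neither
            side holds state that a crash can corrupt.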

            The virtual  filestore would  also add a similar superfluous
            software  layer   to  remote  transaction  processing  while
            hindering the  efficient implementation  of data  integrity,
            data  security,   data  access   and  transaction   ordering
            procedures specific to the remote transaction processing
            problem.  Trying to lump the remote transaction processing
            system with simple file transfer and network file systems at
            best threatens  the performance  and data  integrity of  the
            distributed system  whether the system is executing a simple
            file transfer,  providing a remote file system or processing
            remote transactions.