[mod.protocols.tcp-ip] Do we need another protocol?

WANCHO@SIMTEL20.ARPA (09/28/86)

There is a growing trend in the Army to network Intel 310s running
Xenix on a fat Ethernet under OpenNet.  When asked why OpenNet instead
of TCP/IP, the answer most often heard is because OpenNet provides
inter-machine file and record-level access at the application level.

At one time, there was a brief discussion of the possibility of
extending the FTP definition to allow for record-level access.  It
seemed to me then that FTP was the wrong place and that an entirely
new protocol should be defined.  Was this ever done and formally
recognized as part of the TCP protocol suite?  If not, why not?  Would
it be possible to provide OpenNet functionality within the confines of
TCP so that we don't have to provide two otherwise incompatible
services, requiring two sets of hardware interfaces, for every node
that should have both capabilities?

--Frank

braden@VENERA.ISI.EDU (Bob Braden) (09/29/86)

	
	There is a growing trend in the Army to network Intel 310s running
	Xenix on a fat Ethernet under OpenNet.  When asked why OpenNet instead
	of TCP/IP, the answer most often heard is because OpenNet provides
	inter-machine file and record-level access at the application level.
	
I never understood military politics, but I am curious how the army can
do this in the face of the DoD directive to use TCP/IP.  If they are
in fact justifying it by the requirement for file access, it amazes me
that someone in DCA has not gotten excited about this.
	
	At one time, there was a brief discussion of the possibility of
	extending the FTP definition to allow for record-level access.  It
	seemed to me then that FTP was the wrong place and that an entirely
	new protocol should be defined.  Was this ever done and formally
	recognized as part of the TCP protocol suite?  If not, why not? 
	
The "why not" is easy to answer.  No one eager to fund it.  Protocol
development requires a number of  experienced people to devote quite a
lot of time and attention.  In our community, it also requires a cycle of
experiment and experience with test implementations.  The existing DoD
protocol suite -- IP, TCP, Telnet, FTP, and SMTP -- was developed as
part of a coherent R&D effort programmed and largely funded by DARPA.  The
importance of DARPA's leadership cannot be too strongly emphasized.

Since DARPA's mission is generally long-range research, it is no
longer interested in funding Internet R&D as an end in itself, and has
little interest in "small" protocol improvements.  So, a lot of good ideas for
protocols (file access is only one example) have lain in the dust for 10
years.  Every 11.3 months someone brings up the need for file access on
this mailing list, for example.

Another factor has delayed work on the file access problem.  There has been
general agreement for a long time that, as you say, "FTP was the wrong place
and that an entirely new protocol should be defined", but no coherent concept
of what a new protocol should look like. With no terrific ideas, and no
funding interest, it is no wonder that file access has languished for 10 
years (actually 13 -- file access extensions to FTP were first proposed
by John Day in RFC 520, dated June 25, 1973!).

Recently there has been a growing interest in the network file system
model, exemplified by Sun's NFS, as the right way to go for file
access.  There is also an attempt, organized by DARPA and the IAB, to
revitalize Internet protocol research with a variety of funding sources.
The effective agency for this is supposed to be the IAB task forces. So
maybe the time has come to actually slay the file access dragon.

Suppose there were to be some meeting of interested persons to come up
with a draft specification of an Internet standard network file system
(note: NO capital letters!!)  protocol.  Would you have time and travel
funds to attend and contribute?


Bob Braden

   chairperson, End-to-End Protocols task force.

nowicki@SUN.COM (Bill Nowicki) (09/29/86)

	There is a growing trend in the Army to network Intel 310s
	running Xenix on a fat Ethernet under OpenNet.  When asked why
	OpenNet instead of TCP/IP, the answer most often heard is
	because OpenNet provides inter-machine file and record-level
	access at the application level.

Of course I am biased, but you might want to consider the Sun Network
File System (NFS) protocol.  NFS has the advantage of being 
available on many different machines and operating systems:
MS-DOS, many Unix versions, VMS, etc.  It is licensed by more than 60
vendors, and is based on the IP protocol.  The specifications are in the
public domain, with fully-supported implementations available from
several sources.  "Open" Net is quite a misnomer if it is only available from
one vendor.

I know, we should circulate an RFC form of the NFS spec; we are 
working on it.
	
	Bill Nowicki
	Sun Microsystems

raklein@MITRE.ARPA (Richard A. Klein) (10/01/86)

I'm getting tired of listening to people pontificate about DoD policy.
The reason that particular organization in the U. S. Army did not
choose the MIL-STD protocol suite (TCP/IP) is that they are not
well informed about the existence and benefits of the DoD protocol suite.
In addition, the U. S. Army organization in charge of developing
standards is not well informed and has been remiss in general with
regard to getting standards out to the rest of the Army.  JTC3A's lack
of progress in developing protocol standards for tactical applications
for all of DoD has further hampered the standardization effort.  But
the real crux of the matter is that WE are responsible, as
consultants, engineers, researchers, etc., for ensuring the
recommended use of the DoD MIL-STDs when appropriate.

I'm currently supporting the Army's effort to "go ISO."  This means
that they want a "militarized" ISO protocol suite right away, and
they don't want to be bothered by an intermediate standard such as
TCP/IP (?!).  Somehow or other, we will make our best effort to
direct them towards the right standards at the right times.  I have
found that the most convincing arguments are demonstrations to the
sponsor of the real benefits of using TCP/IP now, ISO later, when it
is tested and proven.  By the way, watching the bureaucrats yell
policy at each other has proven to us, without a doubt, that yelling
policy won't get you anywhere.

Richard

bzs@BU-CS.BU.EDU (Barry Shein) (10/01/86)

Re: record-level access in TCP

This has come up before as being seriously lacking in other contexts.

It may be a can of worms due to the nature of heterogeneous networks;
usually the person calling for it is thinking of just THEIR record-level
access (e.g., VMS/RMS).  My usual reaction is that this problem has not
been adequately solved for magnetic tapes across heterogeneous systems,
so I suspect there is a nut there to crack; it's not as though tape
users never thought of it.

Even if OpenNet gets you this service to another OpenNet/Intel 310
host, I doubt it helps you much with a PDS file on the MVS system
down the hall.  It just solves the easy, short-term problem.

I would suggest the community start looking hard at how far NFS/RPC/XDR
from SUN (which, as we all know, is being adopted as a layered protocol
on TCP/UDP/IP by almost every major computer vendor) can be used to
solve the problem.  It's not really 'record'-level access; the question
is better put as:

	"How in general can I create a network I/O stream
	 which, rather than bytes, uses an arbitrary structured
	 data type, with a file offset calculation, as a unit of
	 transfer?"

Perhaps that is just semantics, but I think it brings one a little
closer to understanding why something like XDR/RPC already addresses
this problem to a large extent; working from that base, where
weaknesses are perceived, might be the most profitable route to a
solution (lord, I sound pedantic, sorry...).

Essentially (for those who haven't looked at the SUN protocols) one
sits down (they have already) and defines a network representation
for various primitive types (e.g., byte order and format of integers).
Then one defines a method for constructing arbitrary data types out of
those as n-tuples, then a protocol for exchanging what you have in mind
(as a trivial example, exchanging a Fortran format string would be
close), and finally a remote procedure call protocol for specifying
various remote operations.
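
As a rough illustration of the encoding half of that idea, here is a
minimal sketch assuming the agreed wire form for an integer is 32 bits,
big-endian.  It is not Sun's XDR library, the function and type names
are invented, and it ignores XDR's actual padding and typing rules.

#include <stdio.h>

/* Encode a 32-bit value in the agreed network (big-endian) order. */
static int put_long(unsigned char *buf, unsigned long val)
{
    buf[0] = (val >> 24) & 0xff;
    buf[1] = (val >> 16) & 0xff;
    buf[2] = (val >> 8) & 0xff;
    buf[3] = val & 0xff;
    return 4;
}

/* A compound type is just an n-tuple of primitives laid down in order. */
struct point {
    long x;
    long y;
};

static int put_point(unsigned char *buf, struct point *p)
{
    int n = 0;
    n += put_long(buf + n, p->x);
    n += put_long(buf + n, p->y);
    return n;
}

int main(void)
{
    unsigned char wire[8];
    struct point p = { 3, 258 };
    int i, len;

    /* The receiver decodes the same bytes no matter what its native
     * byte order is -- that is the whole point of the exercise. */
    len = put_point(wire, &p);
    for (i = 0; i < len; i++)
        printf("%02x ", wire[i]);
    printf("\n");
    return 0;
}

A remote procedure call protocol then amounts to agreeing on how an
operation code and its arguments are laid into such a buffer, and what
the reply looks like.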

To stave off the flood of "XYZ system has been doing that for years",
yes, we know, but XYZ system is not licensable on our machines, is it?
The XDR/RPC protocols (protocols != code) are in the public domain.

And yes, I don't think it would be a good idea to put this into FTP
until this layer is defined. At some later date it might be clear
how these mechanisms can best be utilized in an extension to FTP for
some subset, but I doubt FTP is designed to support such generality.

	-Barry Shein, Boston University

P.S. I have no economic interests in any of this; if I did, I'd probably
be rich and out spending my money instead of typing this stuff in...