[comp.protocols.tcp-ip] Super-Dumb Transport Protocol?

zweig@p.cs.uiuc.edu (06/12/89)

  Is there a protocol out there (possibly one for which an RFC is still
forthcoming) that is designed to do file-transfer from a server to a
client at minimal computational cost? That is, a protocol that just involves a
client sending a request which, if received by the server, generates a bunch of
IP datagrams in response (maybe with a terribly naive checksum/retransmit
mechanism for lost/damaged packets).

  FTP has far too much baggage -- sitting as it does on top of TCP -- and
TFTP is close, but still sits on top of UDP. I was thinking of an utterly
mindless protocol so I can run it on a PC which has some archival files
on a hard disk and whenever somebody wants a copy of one, they basically
send the filename and snarf up the replies. Ideally, the PC would just
need IP and this snarf/barf protocol in order to be an archive
server.

  I have been toying with the idea of using a Mac or an AT as a net-accessible
archive-disk, and bringing up TCP/IP/UDP/NFS/whatever seems like way too
much work, both in terms of hacking and in terms of getting the best
performance out of the slow, dumb box. I could come up with a local kludge
for this protocol -- but if there is something like it already out there,
I would like to know.

-Johnny Streamlined/Stripped-Down

romkey@asylum.SF.CA.US (John Romkey) (06/13/89)

In article <93400023@p.cs.uiuc.edu> zweig@p.cs.uiuc.edu writes:
>That is, a protocol that just involves a
>client sending a request which, if received by the server, generates a bunch of
>IP datagrams in response (maybe with a terribly naive checksum/retransmit
>mechanism for lost/damaged packets).

Sounds like TFTP to me, just add in the acknowledgements.
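
For reference, a TFTP read request is about as small as these things get:
a two-byte opcode, the filename, and a mode string in one UDP datagram.
A from-memory sketch (layout per RFC 783; the names are mine):

    #include <string.h>

    #define TFTP_OP_RRQ 1

    /* Build a TFTP read request in buf and return its length.
     * Layout per RFC 783: opcode (2 bytes) | filename | 0 | mode | 0 */
    int build_rrq(char *buf, char *filename, char *mode)
    {
        int len = 0;

        buf[len++] = 0;                 /* opcode, high byte            */
        buf[len++] = TFTP_OP_RRQ;       /* opcode, low byte             */
        strcpy(buf + len, filename);    /* filename, NUL included       */
        len += strlen(filename) + 1;
        strcpy(buf + len, mode);        /* "octet" or "netascii"        */
        len += strlen(mode) + 1;
        return len;
    }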

>  FTP has far too much baggage -- sitting as it does on top of TCP -- and
>TFTP is close, but still sits on top of UDP.

I've done this several times, so I'm not just hypothesizing here.  If
you've already got an IP, UDP is *trivial*. It adds some demultiplexing
and a checksum. It has an 8-byte header. It does almost nothing. IP proper
is *much* more complicated.
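
To make that concrete, here's the whole UDP header as a struct -- my own
sketch and field names, not lifted from any particular stack:

    /* The entire UDP header: 8 bytes, all fields in network byte order.
     * The checksum covers a pseudo-header (source and destination IP
     * address, protocol, UDP length) plus this header and the data;
     * a checksum of zero means the sender didn't bother. */
    struct udp_header {
        unsigned short source_port;     /* demultiplexing: who sent it */
        unsigned short dest_port;       /* demultiplexing: who gets it */
        unsigned short length;          /* header plus data, in bytes  */
        unsigned short checksum;        /* optional; 0 = not computed  */
    };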

The real complexity is in dealing with timeout and retransmission.
You'll find that you implement it with one speed of network in mind
(serial lines or Ethernet or FDDI), then other people will come along
and run it over a speed at the opposite end of the spectrum and it
won't work so well, so you'll tinker with the algorithms; then other
people will run it on a congested, overloaded network like the ARPANET
(well, former ARPANET) and you'll need to tinker some more. There's a
lot of material out there already on these algorithms, especially Van
Jacobson's work, but that's where a lot of the complexity comes in.
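
For a feel of what the timeout/retransmission part looks like at its
most naive, here's a bare-bones sketch: fixed doubling backoff, no
round-trip estimation, assuming a UDP socket already connect()ed to the
server. Exactly the sort of thing that works on one network and falls
over on another.

    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/socket.h>

    /* Send a request and wait for the reply, retransmitting with a
     * doubling timeout.  Returns bytes received, or -1 on failure. */
    int request_with_retry(int sock, char *req, int reqlen,
                           char *reply, int replylen)
    {
        int tries, n, timeout = 1;              /* start at one second  */

        for (tries = 0; tries < 5; tries++) {
            fd_set readfds;
            struct timeval tv;

            if (send(sock, req, reqlen, 0) < 0)
                return -1;
            FD_ZERO(&readfds);
            FD_SET(sock, &readfds);
            tv.tv_sec = timeout;
            tv.tv_usec = 0;
            n = select(sock + 1, &readfds, (fd_set *)0, (fd_set *)0, &tv);
            if (n > 0)
                return recv(sock, reply, replylen, 0);
            timeout *= 2;                       /* back off, try again  */
        }
        return -1;                              /* gave up              */
    }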
-- 
			- john romkey
USENET/UUCP: romkey@asylum.sf.ca.us	Internet: romkey@ftp.com
"We had some good machines/But they don't work no more" - Shriekback

zweig@p.cs.uiuc.edu (06/15/89)

  Thanks to all who pointed out how simple IP/UDP/TFTP can be (I got lots of
e-mail). One can make a dumb IP with a naive view of the network (say, a RARP
server plus a gateway somewhere that will accept any packets I dump onto the
local net that are for nonlocal hosts), a 100-line implementation of UDP and
a TFTP responder. Boink!
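
  The naive view of the network really can be that naive -- the whole
routing decision for such a host fits in a few lines. A sketch (the
one-gateway assumption and the names are mine):

    typedef unsigned long ipaddr;       /* 32-bit IP address            */

    /* A dumb host's entire routing decision: if the destination is on
     * the local net, send to it directly; anything else goes to the
     * one default gateway, which can worry about the rest. */
    ipaddr next_hop(ipaddr dest, ipaddr my_net, ipaddr netmask,
                    ipaddr gateway)
    {
        if ((dest & netmask) == my_net)
            return dest;                /* local: resolve and send      */
        return gateway;                 /* nonlocal: punt to gateway    */
    }
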
  What I _didn't_ want was routing/runtime support beyond what a really dumb
operating system (MS-DOS, Mac-OS...) could provide, or a cap on the number of
simultaneous connections. The only thing I object to in TFTP is the stop-and-
wait aspect of the protocol -- thoroughly evil on a high-speed, high-latency
connection.
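
  To put a number on that: with one 512-byte block outstanding per round
trip, throughput can never exceed blocksize/RTT, no matter how fast the
wire is. A back-of-the-envelope sketch (the 100 ms RTT is just an
illustration):

    #include <stdio.h>

    /* Stop-and-wait ceiling: one 512-byte block in flight per round
     * trip, so throughput can never exceed blocksize / RTT. */
    int main()
    {
        double blocksize = 512.0;       /* TFTP data block, in bytes    */
        double rtt = 0.1;               /* 100 ms round trip (made up)  */

        printf("ceiling: %.0f bytes/sec\n", blocksize / rtt);
        return 0;                       /* about 5120, even over FDDI   */
    }
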
  While I agree it's a waste of time to reinvent the wheel, and that nobody
will be able to speak a new (slightly different) protocol, this seems like
a terribly simple wheel (so not too much time wasted) and, well, nobody spoke
NFS much more than five years ago, either....
  The reason I'm interested in a super-dumb protocol is that a server could
easily have the disk- and net-bandwidth to barf files out to hundreds or
thousands of users simultaneously, but if it has to do anything much more
complicated than just fetch a sector and transmit it (trick: calculate the
checksums when you put the archived file on disk -- no snarftime math!) over
the network, the bottleneck will be at the CPU/Memory interface where it
doesn't belong.
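
  The checksum trick works because the Internet checksum is just a
ones'-complement sum: the per-block partial sums can be computed once at
archive time and folded together with the header fields at send time. A
sketch of the summing routine (how the stored sums get used is left to
the imagination):

    /* Ones'-complement sum of a data block -- the core of the Internet
     * checksum.  Compute this once per block when the file is archived;
     * at send time add in the header fields' sum, fold, and complement. */
    unsigned long partial_cksum(unsigned char *data, int len)
    {
        unsigned long sum = 0;

        while (len > 1) {
            sum += (data[0] << 8) | data[1];    /* 16-bit word          */
            data += 2;
            len -= 2;
        }
        if (len > 0)
            sum += data[0] << 8;                /* pad odd byte with 0  */
        while (sum >> 16)
            sum = (sum & 0xffff) + (sum >> 16); /* fold carries back in */
        return sum;
    }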

-Johnny Still-scratching-my-head