[net.news] Bigger packet sizes in uucp

sch (04/14/83)

In a discussion with Lauren Weinstein, he mentioned that uucp has
a very general-purpose packet driver that supports several packet
sizes.  Uucp only uses 64-byte packets, and the code has bugs which
keep it from using larger ones.  Has anyone thought about fixing
those bugs and running with a bigger packet size?  That should improve
performance substantially.  Also, has anyone done any experiments with
the 4.1bsd network line discipline?  On hard-wired, high-speed lines it
would reduce the system load from character-at-a-time interrupts.
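
(For reference, the packet driver frames each packet with a "K" byte
that selects the segment size.  The sketch below is not lifted from
the uucp source; it just assumes the usual 2^(K+4) encoding, where
K=2 gives the 64-byte packets uucp uses now.)

	/* Sketch only -- assumes the packet driver's K byte selects a
	 * data segment of 2^(K+4) bytes, so K = 1..8 gives 32..4096
	 * bytes and K = 2 is today's 64 bytes.
	 */
	#include <stdio.h>

	int
	segsize(int k)
	{
		if (k < 1 || k > 8)
			return -1;	/* out of range for data packets */
		return 1 << (k + 4);
	}

	int
	main(void)
	{
		int k;

		for (k = 1; k <= 8; k++)
			printf("K=%d  segment=%d bytes\n", k, segsize(k));
		return 0;
	}
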
			Always looking for ways to speed up uucp,
			Stephen Hemminger

swatt (04/15/83)

Steve Hemminger wonders about using larger packet sizes in UUCP.  For
most connections (those at 1200 baud), the dominant time waster is
"dead" time between file transfers.  It takes the file protocol 2-3
seconds to get the handshaking done so that both sides agree on what
to do about moving a file.

Most handshaking messages fit in one packet, two at most.  If your
connection runs over a satellite link, add about half a second of
delay for each file-protocol response.  There is absolutely nothing
you can do about this.  Unlike the data transfer protocol, which
allows multiple data packets to be outstanding without acknowledgment,
the file protocol requires each message to be answered before the
next one can be issued.
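
To put rough numbers on that (a back-of-the-envelope sketch, not a
measurement -- the 2.5-second handshake, the two responses per file,
and the use of figures like today's decvax traffic below for scale
are all assumptions):

	/* Rough estimate of per-file negotiation overhead.  Assumes one
	 * stop-and-wait handshake of about 2.5 seconds per file, plus a
	 * 0.5-second satellite hop on each of two responses.
	 */
	#include <stdio.h>

	int
	main(void)
	{
		double handshake = 2.5;	/* seconds per file, from above */
		double hop = 0.5;	/* extra per response over satellite */
		int responses = 2;	/* assumed responses per file */
		int files = 120;	/* e.g. one news article per file */
		double live = 766.0;	/* seconds spent moving data */
		double dead = files * (handshake + responses * hop);

		printf("dead %.0f sec vs live %.0f sec: %.0f%% of connect time\n",
		    dead, live, 100.0 * dead / (dead + live));
		return 0;
	}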

If your UUCP traffic consists mostly of relatively small files, you can
easily spend 50% of your total connect time negotiating for the next
transfer.  Following is a summary of today's activity:

======================================================================
got from    qubix    2 files     469 bytes     8 secs 58.63 bytes/sec
got from   ittral    6 files    2133 bytes    29 secs 73.55 bytes/sec
got from    qumix    4 files    1750 bytes    22 secs 79.55 bytes/sec
got from   wxlvax   24 files   37613 bytes   150 secs 250.75 bytes/sec
got from   decvax  120 files   61323 bytes   766 secs 80.06 bytes/sec
got from   bunker   12 files    3339 bytes    52 secs 64.21 bytes/sec
sent to   dcdwest   17 files   75907 bytes   702 secs 108.13 bytes/sec
sent to     qubix    1 files      72 bytes     1 secs 72.00 bytes/sec
sent to    ittral    4 files   18431 bytes   165 secs 111.70 bytes/sec
sent to    ittapp    3 files   18325 bytes   175 secs 104.71 bytes/sec
sent to     qumix   11 files     792 bytes     0 secs
sent to   lbl-csa    8 files    5687 bytes    46 secs 123.63 bytes/sec
sent to       sii    6 files   71494 bytes   660 secs 108.32 bytes/sec
sent to      duke    1 files      72 bytes     0 secs
sent to    wxlvax   39 files   42786 bytes    96 secs 445.69 bytes/sec
sent to    decvax   74 files   40188 bytes   332 secs 121.05 bytes/sec
sent to     eosp1    1 files      72 bytes     0 secs
sent to    bunker   10 files  134273 bytes  1218 secs 110.24 bytes/sec
sent to    atcvax   11 files     792 bytes     2 secs 396.00 bytes/sec


The table below summarizes "live" time (transferring files), "dead"
time (connected, but not transferring files), and the percentage of
"dead" time over total connect time [ (dead*100)/(live+dead) ].

======================================================================

 dcdwest live  702 secs  dead   47 secs (6%)
   qubix live    9 secs  dead    2 secs (18%)
  ittral live  194 secs  dead   16 secs (7%)
  ittapp live  175 secs  dead    5 secs (2%)
   qumix live   22 secs  dead   23 secs (51%)
 lbl-csa live   46 secs  dead   26 secs (36%)
     sii live  660 secs  dead   25 secs (3%)
    duke live    0 secs  dead    0 secs (0%)
  wxlvax live  246 secs  dead  103 secs (29%)
  decvax live 1098 secs  dead  785 secs (41%)	(over 12 minutes today!)
   eosp1 live    0 secs  dead    0 secs (0%)
  bunker live 1270 secs  dead   50 secs (3%)
  atcvax live    2 secs  dead    0 secs (0%)

Note the figures for decvax: total connect time is roughly 70% greater
than required for the transfers alone.  All systems to which we forward
news get it batched in 50K hunks; decvax sends us news one article at
a time.
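
(Checking that arithmetic from the decvax row -- nothing here beyond
the numbers in the table above:)

	/* Dead-time percentage as in the table, (dead*100)/(live+dead),
	 * plus the extra connect time relative to live time.
	 */
	#include <stdio.h>

	int
	main(void)
	{
		int live = 1098, dead = 785;	/* decvax, seconds */

		printf("decvax: %d%% dead\n", (dead * 100) / (live + dead));
		printf("connect time is %.0f%% longer than live time\n",
		    100.0 * dead / live);
		return 0;
	}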

Over direct 9600-baud connections, it might be a different story.  We
connect to "wxlvax" over a 4800-baud line, and sending stuff to them
runs at an effective 445 cps anyway, which is less than 10% below the
maximum line rate.  If you want to optimize your line costs, you should
batch transmissions so you transfer fewer, but larger, files.
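
(The line-rate comparison is just arithmetic -- a sketch, assuming 10
bits per character with start and stop bits:)

	/* Effective vs. maximum character rate on the 4800-baud line.
	 * Assumes 10 bits per character (8 data + start + stop).
	 */
	#include <stdio.h>

	int
	main(void)
	{
		double baud = 4800.0;
		double maxcps = baud / 10.0;	/* about 480 cps */
		double cps = 445.0;		/* observed rate to wxlvax */

		printf("%.0f of %.0f cps, %.1f%% below the line maximum\n",
		    cps, maxcps, 100.0 * (1.0 - cps / maxcps));
		return 0;
	}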

One last observation:  I have always noticed in our summaries that
decvax is considerably slower than all the other systems.  I had
assumed this was due to its heavier UUCP load.  But this is the first
summary I've looked at that breaks transfers down into sent/received,
and ALL the "received" rates are unrealistically low.  I suspect there
is a glitch in the UUCP accounting.  Since we receive from decvax 20
times what we send it, and send to all other systems 20 times what we
receive from them, the effect is to make decvax look slow.

	- Alan S. Watt
	{decvax,duke,lbl-csam,purdue}!ittvax!swatt